

That’s pretty damn clever
I try to slap `read_only` on anything I’d face the Internet with to further restrict exploit possibilities, so it would be absolutely great if you could make it work! I just follow all the recommendations on the security cheat sheet, with `read_only` being one of them: https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
With how simple it is, I guessed that running as a user and restricting with `cap_drop: all` wouldn’t be a problem. For `read_only`, many containers just need a `tmpfs: /tmp` in addition to the volume for the db. I think many containers try to contain temporary file writing to one directory to make applying `read_only` easier.
So again, I’d absolutely use it with `read_only` when you get the time to tune it!
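For reference, here’s a minimal compose sketch of those hardening options together (the service name, image, and paths are made-up placeholders, not from the project in question):

```yaml
services:
  app:
    image: example/app:latest    # placeholder image
    user: "1000:1000"            # run as a non-root UID:GID
    read_only: true              # root filesystem mounted read-only
    cap_drop:
      - ALL                      # drop every Linux capability
    tmpfs:
      - /tmp                     # writable scratch space for temp files
    volumes:
      - ./data:/app/data         # the db volume stays writable
```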
Looks awesome and very efficient! Does it also run with `read_only: true` (with a db volume provided, of course)? Many containers just need a /tmp, but not always.
I trust the check: `restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose`. The `--read-data-subset` option does the structural integrity check while also reading back that amount of actual data. If I had more bandwidth, I’d check more.
When I set up a new repo, I restore some stuff to make sure it’s there with `restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'`.
You could automate that and regularly make sure some essential-but-not-often-changing files match, by restoring them and comparing them. I would do that if I weren’t lazy, I guess, just to make sure I’m not missing some key-but-slowly-changing files. Slowly/not-often-changing because a diff would fail if the file changes hourly and you back up daily, etc.
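A rough sketch of what that automation could look like (the repo, passfile, and watched folder are placeholders, and it assumes a plain `diff -r` is a good enough comparison for your files):

```sh
#!/bin/sh
# Restore a known-good folder from the latest snapshot and diff it against the live copy.
set -eu

REPO='/path/to/repo'
PASSFILE='/path/to/passfile'
WATCH='/some/folder/with/stuff'   # essential-but-rarely-changing files
TMP="$(mktemp -d)"

restic -r "$REPO" --password-file "$PASSFILE" \
  restore latest --target "$TMP" --include "$WATCH"

# restic restores with the full path, so the copy lives under $TMP$WATCH
if diff -r "$TMP$WATCH" "$WATCH"; then
  echo "backup matches live files"
else
  echo "MISMATCH between backup and live files" >&2
fi

rm -rf "$TMP"
```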
Or you could do as others have suggested and mount it locally, then just traverse it to make sure some key stuff works and is there: `sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'`.
I have my router (OPNsense) redirect all DNS requests to Pi-hole/AdGuard Home. AdGuard Home is easier for this since you can have it redirect the wildcard *.local.domain, while Pi-hole wants every single name individually (uptime.local.domain, dockage.local.domain, and so on). With that combo of the router not letting DNS out to upstream servers and my local DNS servers set up to redirect *.local.domain to the correct location(s), my DNS requests inside my local network never get out to where an upstream DNS can tell you to kick rocks.
I combined the above with a (hella cheap for 10 years) paid domain, got a wildcard cert for the domain without any exposure to the WAN (no IP recorded publicly, but the cert is accepted by devices), and have all *.local.domain requests go to a single Caddy instance on one server that does the final routing to the specific services.
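If it helps, here’s a rough sketch of what that final Caddy hop can look like (the domain, DNS provider module, and backend addresses are all made up, and the `dns` directive needs a Caddy build that includes your provider’s plugin; the wildcard cert comes from a DNS-01 challenge, so nothing is exposed to the WAN):

```
*.local.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}   # DNS-01 challenge, no inbound exposure
	}

	@uptime host uptime.local.example.com
	handle @uptime {
		reverse_proxy 192.168.1.20:3001
	}

	handle {
		respond "unknown service" 404
	}
}
```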
I’m not fully sure what you’ve got cooking but I hope typing out what works for me can help you figure it out on your end! Basically the router doesn’t let anything DNS get by to be fucked with by the ISP.
I’m surprised no one’s mentioned Incus. It’s a hypervisor like Proxmox, but it’s designed to install onto Debian no problem. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo for it.
Proxmox isn’t reliable if you’re not paying them; the free users are the test users. A while back they pushed a bad update that broke shit, and if I’d updated before they pulled it, I’d have been hosed.
Basically you want a setup where you don’t have to worry about applying updates, because updates are good for security. And Proxmox ain’t that.
On top of their custom kernel and stuff, there are just fewer eyes on it than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.
So to summarize: Debian with Incus installed. BTRFS if you’re happy with 1 drive or 2 drives in RAID 1; BTRFS gets you scrubbing and bitrot detection (and protection with RAID 1). ZFS for more drives. Toss Cockpit on too.
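If it helps, this is roughly what the install looks like on Debian 12 (assumes the Zabbly package repo is set up for current builds and the web UI; check the Incus docs for the exact repo lines):

```sh
# Incus itself (in bookworm-backports, or newer builds from the Zabbly repo)
sudo apt install incus

# let your user manage Incus, then initialize storage and networking
sudo adduser "$USER" incus-admin
sudo incus admin init

# optional web UI (packaged in the Zabbly repo), plus exposing the API for it
sudo apt install incus-ui-canonical
sudo incus config set core.https_address :8443
```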
If you want something less hands-on, go with OpenMediaVault. No room for Proxmox in my view, especially if you’re not clustering.
Also, the iGPU on the 6600K is likely good enough for whatever transcoding you’d do (especially if it’s rare and 1080p; it’ll handle 4K and multiple streams at once no problem). The Nvidia card is just wasting power.
I too wish for an in-depth blog post, but the GitHub answer is at least succinct enough.
This answers all of your questions: https://github.com/containers/podman/discussions/13728 (link was edited; I had accidentally linked a Red Hat blog post that didn’t answer your question directly, though it does make clear that specifying a user in rootless Podman matters for security if the user running the rootless container does more than just run that container).
So the best defense plus ease of use is rootful Podman assigning non-root UIDs to the containers. You can do the same with Docker, but Docker with non-root UIDs assigned still carries the risk of the root-level Docker daemon being hacked and exploited. Podman has no daemon to be hacked and exploited, meaning root Podman with non-root UIDs assigned has no downsides!
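To illustrate (the image and UID are just examples): rootful Podman, but the workload inside the container runs as an unprivileged UID with all capabilities dropped.

```sh
# rootful podman, container process runs as a non-root UID
sudo podman run -d --name demo --user 1000:1000 --cap-drop=ALL \
  docker.io/library/alpine:3.19 sleep 1h

# confirm: host UID vs. UID inside the container
sudo podman top demo huser user
```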
Odd, I’ll try to deploy this when I can and see!
I’ve never had a problem with a volume being on the host system, except when user permissions got messed up. But if you haven’t given it a user parameter, it’s running as root and shouldn’t have a problem. So I’ll test it sometime and get back to you!