I know Ceph would work for this use case, but it’s not a choice to make lightly; it’s an investment with a steep learning curve (at least it was, and still is, for me).
notfromhere@lemmy.ml to Selfhosted@lemmy.world • Tailscale MagicDNS issues since 1.84.1 mac? • English • 52 • 13 days ago
I went and edited my hosts file and added all of my devices, but I only have a handful. Tailscale on macOS has a lot of bugs, this being one of many.
notfromhere@lemmy.ml to Selfhosted@lemmy.world • Self-hosting is having a moment. Ethan Sholly knows why. • English • 1 • 1 month ago
It depends on the container, I suppose. Some are very difficult to rebuild depending on what’s in them and what they do. Some very complex software can be run in containers.
notfromhere@lemmy.ml to Selfhosted@lemmy.world • Self-hosting is having a moment. Ethan Sholly knows why. • English • 1 • 1 month ago
I’ve been wanting to tinker with NixOS. I’m stuck in the stone age, automating VM deployments on my Proxmox cluster using Ansible. One line and about 30 minutes (the CUDA install is a beast) to build a reproducible VM running llama.cpp with llama-swap.
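For illustration, a minimal sketch of what a playbook like that might look like. The host names, credentials, template name, and package list here are placeholders, not the commenter’s actual setup, and the second play assumes the new VM is already resolvable in your inventory.

```yaml
# Hypothetical playbook: clone a Proxmox template into a new VM, start it,
# then install the pieces needed to build llama.cpp. All names are placeholders.
- name: Provision an inference VM on Proxmox
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone a cloud-init template into a new VM
      community.general.proxmox_kvm:
        api_host: pve.example.lan
        api_user: root@pam
        api_password: "{{ proxmox_password }}"
        node: pve1
        clone: debian12-template     # existing template to clone from
        name: llama-vm
        storage: local-lvm
        state: present

    - name: Start the new VM
      community.general.proxmox_kvm:
        api_host: pve.example.lan
        api_user: root@pam
        api_password: "{{ proxmox_password }}"
        node: pve1
        name: llama-vm
        state: started

- name: Configure the VM once it is reachable
  hosts: llama-vm
  become: true
  tasks:
    - name: Install build dependencies (the CUDA toolkit is the slow part)
      ansible.builtin.apt:
        name: [git, build-essential, nvidia-cuda-toolkit]
        state: present
        update_cache: true

    - name: Fetch llama.cpp sources
      ansible.builtin.git:
        repo: https://github.com/ggml-org/llama.cpp
        dest: /opt/llama.cpp
```

Building llama.cpp and wiring up llama-swap would be further tasks in the second play; the long CUDA install is what accounts for most of the 30 minutes mentioned above.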
notfromhere@lemmy.ml to Selfhosted@lemmy.world • Self-hosting is having a moment. Ethan Sholly knows why. • English • 1 • 1 month ago
Typically, the container image maintainer will provide environment variables that can override the database connection. This isn’t always the case, but usually it’s as simple as updating those and ensuring network access between your containers.
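As a rough example, a hedged docker-compose sketch of that pattern. The DB_HOST/DB_USER/DB_PASSWORD names are placeholders; real images document their own variable names, so check the image’s README.

```yaml
# Hypothetical compose file: the app's variable names are placeholders and
# differ per image; the Postgres variables are the stock ones for postgres:16.
services:
  app:
    image: example/selfhosted-app:latest   # placeholder image
    environment:
      DB_HOST: db          # the service name doubles as the hostname
      DB_PORT: "5432"
      DB_USER: app
      DB_PASSWORD: change-me
    depends_on:
      - db
    networks:
      - backend

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me
    networks:
      - backend

networks:
  backend: {}
```

Putting both services on the same compose network is what provides the “network access between your containers” part.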
notfromhere@lemmy.ml to Selfhosted@lemmy.world • Self-hosting is having a moment. Ethan Sholly knows why. • English • 2 • 1 month ago
A lot of times it is necessary to build the container oneself, e.g., to fix a bug, satisfy a security requirement, or because the container as-built just isn’t compatible with the environment. So in that case would you contract an expert to rebuild it, host it on a VM, look for a different solution, or something else?
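For reference, building the image yourself in a compose setup usually just means pointing the service at a local Dockerfile instead of a published image; a minimal sketch with hypothetical paths and names:

```yaml
# Hypothetical: build a patched image locally instead of pulling the published one.
services:
  app:
    build:
      context: ./app-patched      # directory containing your modified Dockerfile
      dockerfile: Dockerfile
    image: local/app-patched:latest
    restart: unless-stopped
```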
notfromhere@lemmy.ml to Selfhosted@lemmy.world • Self-hosting is having a moment. Ethan Sholly knows why. • English • 1 • 1 month ago
> reproducing those installs from scratch + restoring backups would be a single command plus waiting 5 minutes.

Is that with Ansible or your own tooling or something else?
notfromhere@lemmy.ml to Games@lemmy.world • List of Fan (OpenSource) Ports/Remakes of Games • English • 1 • 1 month ago
Legacy of Kain: Blood Omen fan remake - https://omnicide.razorwind.ru/en/
I’m really not sure. I’ve heard of people using Ceph across datacenters. Presumably that’s with a fast-ish connection, and it works more like joining separate clusters, so you’d likely need a local Ceph cluster at each site and then replicate between datacenters. Probably not what you’re looking for.
I’ve heard good things about Garage S3 and that it’s usable across the internet on slow-ish connections. Combining it with JuiceFS is what I was looking at before I landed on Ceph.
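For context, Garage is typically run as a container pointed at a garage.toml config; below is a minimal single-node sketch. The image tag, ports, and paths are assumptions drawn from Garage’s quick-start layout, so verify them against the current docs before using this.

```yaml
# Hypothetical single-node Garage deployment; host networking keeps the RPC
# setup simple. Volumes map a local garage.toml plus metadata and data dirs.
services:
  garage:
    image: dxflrs/garage:v1.0.1      # check Garage's docs for the current tag
    restart: unless-stopped
    network_mode: host               # S3 API on 3900, RPC on 3901 by default
    volumes:
      - ./garage.toml:/etc/garage.toml:ro
      - ./meta:/var/lib/garage/meta
      - ./data:/var/lib/garage/data
```

JuiceFS would then be pointed at Garage’s S3 endpoint as its object store, with a separate metadata engine (e.g., Redis or SQLite) handling the filesystem metadata.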