That’s just crazy talk. If we don’t listen to the billionaires the line might not keep going up quite so fast. For the purposes of this argument, please ignore TSLA, the climatologists obviously got to that one.
That’s interesting. I’ve often wondered what it must be like programming or using the CLI if you aren’t familiar with the English language, but I hadn’t considered the dyslexia/dysgraphia type issues.
Now that’s a better reason for looking for a GUI solution than the OP had. I hadn’t really considered how dyslexia would affect CLI usage.
Ah, ok. You’ll want to specify two AllowedIPs ranges on the clients: 192.168.178.0/24 for your network, and 10.0.0.0/24 for the other clients. Then you’re going to need to add a couple of routes.
You’ll also need to ensure IP forwarding is enabled on both the VPS and your home machine.
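Enabling IP forwarding is usually a one-line sysctl on each forwarding machine. A minimal sketch (the drop-in file path is just the conventional location; pick whatever fits your distro):

```
# /etc/sysctl.d/99-forwarding.conf
net.ipv4.ip_forward = 1
```

Apply it with `sysctl --system` (or reboot); add the IPv6 equivalent (`net.ipv6.conf.all.forwarding = 1`) if you route IPv6 over the tunnel too.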
The allowed IP ranges on the server indicate what private addresses the clients can use, so you should have a separate one for each client. They can be /32 addresses as each client only needs one address and, I’m assuming, doesn’t route traffic for anything else.
The allowed IP range on each client indicates what private address the server can use, but as the server is also routing traffic for other machines (the other client for example) it should cover those too.
Apologies that this isn’t better formatted, but I’m away from my machine. For example, on your setup you might use:
On home server: AllowedIPs 192.168.178.0/24 Address 192.168.178.2
On phone: AllowedIPs 192.168.178.0/24 Address 192.168.178.3
On VPS: Address 192.168.178.1
Home server peer: AllowedIPs 192.168.178.2/32
Phone peer: AllowedIPs 192.168.178.3/32
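Written out as wg-quick style config files, the example above might look like this. This is only a sketch: the key values, listen port, and endpoint hostname are placeholders, and the address plan just follows the numbers above. Note that wg-quick installs routes for each peer’s AllowedIPs automatically, which covers most of the routing mentioned earlier.

```
# VPS: /etc/wireguard/wg0.conf
[Interface]
Address = 192.168.178.1/24
ListenPort = 51820                 # placeholder port
PrivateKey = <vps-private-key>

[Peer]
# Home server
PublicKey = <home-server-public-key>
AllowedIPs = 192.168.178.2/32

[Peer]
# Phone
PublicKey = <phone-public-key>
AllowedIPs = 192.168.178.3/32

# Home server: /etc/wireguard/wg0.conf
[Interface]
Address = 192.168.178.2/24
PrivateKey = <home-server-private-key>

[Peer]
# VPS
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820   # placeholder endpoint
AllowedIPs = 192.168.178.0/24
PersistentKeepalive = 25

# Phone: same shape as the home server,
# but with Address = 192.168.178.3/24
```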
I manage all my homelab infra stuff via Ansible and run services via Kubernetes. All the Ansible playbooks are in git, so I can roll back if I screw something up, and I test it on a sacrificial VM first when I can. Running services in Kubernetes means I can spin up new instances and test them before putting them live.
Working like that makes it all a lot more relaxing as I can be confident in my changes, and back them out if I still get it wrong.