This is a story about something that happened just now. I’m not sure if this is the right place to post it, so I’m sorry in advance if it’s not!
I self-host some services on an old laptop at home. Mainly Jellyfin and Nextcloud, which I use to text with some close friends.
I left this morning to spend some days with my parents, and I jokingly told one of my friends that “I hope nothing bad happens to the server, since I’ll be gone for a week and I won’t have physical access to it”.
I’ve had problems with power cuts in the past, since I don’t have a UPS (and my laptop’s battery is dead), but they were mostly due to a faulty power connector that has since been replaced, so I wasn’t expecting anything weird to happen. My IP is dynamic, but I run a cron script that regularly checks it and updates the DNS records if it changes. So I was pretty sure everything would actually be fine.
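For anyone curious what that kind of cron script looks like, here’s a hedged sketch of the compare-and-update logic. The registrar endpoint, hostname, and cache path are made-up placeholders, not a real API; only the “did my IP change?” check is real.

```shell
#!/bin/sh
# Sketch of a dynamic-DNS cron job. Prints the new IP if it differs from
# the cached one (and updates the cache); prints nothing otherwise.
ip_if_changed() {
    cache=$1; new_ip=$2
    old_ip=$(cat "$cache" 2>/dev/null)
    if [ -n "$new_ip" ] && [ "$new_ip" != "$old_ip" ]; then
        printf '%s\n' "$new_ip" > "$cache"
        printf '%s\n' "$new_ip"
    fi
}

# In the real cron job you would do something like (placeholder URLs):
#   ip=$(ip_if_changed /var/cache/current_ip "$(curl -s https://ifconfig.me)")
#   [ -n "$ip" ] && curl -s "https://api.example-registrar.com/update?ip=$ip"
```

A crontab entry like `*/5 * * * * /usr/local/bin/update-dns.sh` would then run it every five minutes.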
But if you’ve read the title of the post, you probably know where this is going.
I’ve used Let’s Encrypt SSL certificates in the past with Nginx Proxy Manager, and it was great! They got renewed automatically, so I didn’t really have to pay attention to them. Except after a year or so, they just stopped working. Nginx gives me a nondescript error when trying to connect to my domain registrar to create a new certificate, and after trying (and failing) to fix it, I decided to just use the SSL certificates my domain registrar provides.
That worked great! The only problem is that they don’t renew automatically anymore; but it only takes me 5 minutes to update them, and I only have to do it once every 3-4 months, so it’s fine…
A couple of hours ago, I was trying to send a meme to my friend via Nextcloud and… Failed to establish connection.
panic.jpg
I try to open Sonarr in my web browser. I get an EXPIRED_CERTIFICATE error. Expiry date: today. Oh no.
You’re probably thinking “What’s the problem? Just update the certificate again!” Well, the problem is that I need access to Nginx Proxy Manager to do that, and I don’t have its port forwarded (since I didn’t want to expose it to the internet, because I didn’t think I needed to).
I thought that was it. I was going to have to wait for a week until I got back home to fix it. But I still had ssh access to the server!
Yes, I know, this is probably a very bad idea: don’t expose your services and your SSH to the internet without a VPN like Tailscale. But to be fair, I don’t know what I’m doing! At least I use a nonstandard port, and key-based login instead of a password.
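For reference, those two mitigations live in the SSH daemon config. A minimal sketch (the port number is just an example value):

```
# /etc/ssh/sshd_config — the relevant hardening options
Port 22222                   # nonstandard port (example value)
PasswordAuthentication no    # disable password login
PubkeyAuthentication yes     # allow key-based login only
```

Restart sshd after editing, and make sure your key works before closing your existing session.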
At first I tried replacing the cert files directly, but I realized that wasn’t going to work. So I decided to do some web searching, and thankfully I found exactly what I needed: SSH tunneling.
What does that mean? Well, for people like me who had no idea this was possible: you can use your SSH connection as a tunnel into the server’s local network (kind of like a VPN?). So I used the command:
ssh -NL LOCAL_PORT:DESTINATION:DESTINATION_PORT USER@SSH_SERVER -p SSH_PORT
I typed localhost:DESTINATION_PORT on my web browser… and nothing happened.
“Oops, actually it’s localhost:LOCAL_PORT”
And… BAM! There it was, the Nginx Proxy Manager web interface! I typed in my credentials, created a new cert, uploaded the cert files, switched all the services over to it… and it worked! Crisis averted.
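To make those placeholders concrete: Nginx Proxy Manager’s admin UI listens on port 81 by default, so assuming a made-up hostname and SSH port, the command might look like this (hostname and ports here are invented examples, not the author’s actual setup):

```shell
# -N: don't run a remote command, just forward
# -L: the laptop's port 8181 tunnels to port 81 on the server itself
#     ("localhost" is resolved on the server side of the tunnel)
ssh -N -L 8181:localhost:81 user@home.example.com -p 22222
# then browse to http://localhost:8181
```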
So, what did I learn from this? Well, that my server is never safe from failing to work lol. But I won this time!
If you want a backup cert checker, https://iam.redsift.cloud/ does it for free. They basically took over the Let’s Encrypt email notifications.
SSH tunneling is absolutely amazing, glad you figured it out.
We had a similar issue at work. We had a corporate laptop (Windows) that we couldn’t install anything on, and we needed to set up local development against a service running on a dev laptop. Since we couldn’t actually install anything w/o going through the IT dept (nobody wants to do that), I remembered that Git was installed, and that comes w/ a shell which has SSH available. So I used that to SSH tunnel to the dev laptop (running macOS), and they were able to continue working.
SSH tunnels are a fantastic tool to have in your toolbox. :)
I once had to use both remote and local forwarding to update a remote Linux server from a virtual machine running on my Windows laptop.
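In case it helps anyone picture using both directions at once, here’s a hedged sketch of that kind of setup; every hostname and port below is invented, and the exact topology in the comment above may have differed. The scenario assumed here: the laptop can reach both the VM and the server, but they can’t reach each other.

```shell
# Remote forward (-R): while this session is open, connections to port
# 8022 on the remote server are tunneled back through the laptop to the
# VM's SSH port, so the server can reach the VM.
ssh -N -R 8022:vm.local:22 admin@remote-server.example.com

# Local forward (-L), from another terminal: the laptop's port 9000
# reaches a web service bound to localhost on the remote server.
ssh -N -L 9000:localhost:8080 admin@remote-server.example.com
```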
uptime-kuma will monitor your https availability and automatically check your cert expiration.
I’m guessing that it notifies you before the cert expires, because I already have a great notification system for the server not working: I’ll always get a text from a friend within minutes asking why it’s not working :P
How do you get alerted from uptime Kuma if you can’t access the site though?
I run uptime-kuma on a cheap VPS and have it ping my external ports on the home server. If the house loses internet I still get alerts.
It has a built-in alerting mechanism that integrates with many communication services. Also, Uptime Kuma integrates with Apprise, which supports 78+ notification services.
Right, I guess I meant: if it requires internet access to send notifications, how does it notify you when it cannot reach the internet?
I have uptime Kuma and use ntfy to alert myself for various things, but if I can’t access my server for any reason, the likelihood I’d be alerted first is very low.
I’m not aware of a way for it to notify if the internet is down. An expired certificate would not create that failure scenario though.
Also the notification would have gone out well before the certificate expired.