2023-09-03T23:08:24Z
I have a server. It's a laptop. I have said that many times, and it's true. It's just a random laptop that I had lying around, and I decided it would be a good idea to run some stuff on it.
I use Fedora Linux. The package manager dnf has `dnf history`, which lists the history of your package management all the way back to when the system was installed. My server was first installed on 2021-09-04T12:37:26Z, and it took 8 minutes to install the distro. It's even got logs of what happened during the install!
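If you want to dig that out yourself, it's roughly these two commands (the transaction ID 1 is just whatever shows up as the oldest transaction on your system):

```
dnf history list     # list all transactions, oldest at the bottom
dnf history info 1   # packages, timestamps and logs for transaction 1
```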
However, that's a feature of dnf, and not something many other package managers have.
It's been almost 2 years of running now. After stumbling upon many issues, I have brought the server to stability with a lot of work (and shell scripts).
I upgraded the internal drive from a hard drive (spinning rust) to a solid state drive (electrons or something). Highly suggested, unless you like slow performance. Like, pretty slow.
The laptop-server has a very limited amount of RAM, and I probably run too many things on it too. Also, nginx config reloads take like 10 seconds, which is a significant amount of time.
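If you're curious where your own reload time goes, timing the config check is a rough first approximation (just a sketch; note that `nginx -s reload` only signals the master process and returns immediately):

```
time sudo nginx -t    # parse and validate the full config
sudo nginx -s reload  # tell the master process to reload
```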
It also hosts my website.
Self-explanatory. It's an iMac.
Currently, it runs Alpine Linux. It's kind of an experimental box to do experimental thingies.
Both laptop-server and imac run Prometheus. They also federate with each other, to share data and to make alerting on non-functioning things easier.
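Federation just means one Prometheus scrapes selected series from the other's /federate endpoint. You can poke at that endpoint by hand; the hostname below is a placeholder, and the match[] selector here simply grabs every job:

```
# ask the other Prometheus for all series of every job it knows about
curl -sG 'http://other-prometheus.example:9090/federate' \
  --data-urlencode 'match[]={job=~".+"}'
```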
It also runs a Tor bridge. No, I am not going to make details about it available.
This is specific to my machine, not iMacs in general.
First, it started out running Debian.
With 4 gigabytes of RAM, I did... something. I'm not sure; maybe I was running an IPFS node (to have my website on IPFS, back when I still did that).
Then some disk fuckery happened and the install got destroyed or something. Then I abandoned it.
Later, I tried to get some stuff off of it, then tried Debian again. Later still, I distrohopped to Alpine Linux, and now I'm on Alpine Linux for imac.
Previously, I used Uptime Kuma.
Running a giant nodejs app didn't really seem like a good idea for reliability, so I started using Prometheus, inspired by whatever sourcehut was doing (using Prometheus).
Over time, I shifted everything to Prometheus. That included the blackbox prober, which runs on imac (for package management reasons); its data gets federated over to the laptop-server's Prometheus, and the laptop-server's Prometheus sends out alerts when things are wrong or something.
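The blackbox prober itself is just an HTTP endpoint that the local Prometheus scrapes. Something like this shows what a single probe returns (assuming the exporter's default port and the stock http_2xx module, which may not match a given setup):

```
# ask the blackbox exporter to probe my website over HTTP
curl -s 'http://127.0.0.1:9115/probe?target=https://jacksonchen666.com&module=http_2xx' \
  | grep -E '^probe_(success|duration_seconds)'
```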
Now onto the alerts. Examples of alerts I set up:
And also, for good measure:
Alerts just go to Alertmanager. If an alert is "interesting" or "important", it goes through ntfy (with a bridge). If it's "urgent", it goes through both ntfy and email, just in case.
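The ntfy side of that boils down to an HTTP POST to a topic; a rough sketch (the server and topic name below are made up, not my actual ones):

```
# send an urgent notification to an ntfy topic
curl -H "Priority: urgent" \
  -d "laptop-server: something is on fire" \
  https://ntfy.sh/my-made-up-alert-topic
```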
I do not have Alertmanager redundancy. Both imac and laptop-server are in the same room, and connect to the same power and internet, so if either power or internet goes down, there's not really a way to alert me.
I rarely get any alerts, if any at all. My aim for alerts is to know when something is probably about to go wrong, or when something has gone wrong. If things are fine, it doesn't really matter.
I have a status page! Although I rarely update it: only when some unplanned outage happens, not when I upgrade a piece of software (at least, not anymore).
I rarely update it because... well, it's a hassle. I tried to make it less of a hassle (by filling in potential details), but I just don't use it unless something has gone so terribly wrong that it's really obvious "something has gone wrong". That's usually downtime from external factors, or downtime that lasts a noticeably long time (5-10 minutes is roughly the threshold).
My backups are a bit messy. They go back and forth between laptop-server and imac, and between my other devices as well.
Backups of laptop-server and imac are made with borg backup. Backups are encrypted, compressed, and deduplicated. Backups are then stored at the following places:
That is a lot of different places. For backing up to my phone and main computer, I use Syncthing.
My backups take the easy route: back up the entire root filesystem (and the boot filesystem too). Partial backups are a hassle, and you could end up with holes that could be fatal to your restore process. An easier/lazier backup method makes for an easier restore process. Also, test your backups by doing a full test restore.
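A sketch of what a whole-root borg backup can look like (the repository path, excludes and archive naming here are placeholders, not my exact setup):

```
# back up / and /boot into a borg repository, compressed and deduplicated
borg create --stats --compression zstd \
    --one-file-system \
    --exclude /proc --exclude /sys --exclude /dev \
    --exclude /run --exclude /tmp \
    /path/to/borg-repo::'{hostname}-{now}' \
    / /boot
```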
Currently, I have no off-site backup. I only have on-site backups, and that's about it. I know of one provider and I'll take my time to do nothing about considering it.
I have used my backups multiple times. That includes the one time I accidentally deleted the configuration for Mastodon with `git clean`; I got it back by restoring that specific file from the backups.
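Restoring a single file from borg looks roughly like this (the repo path, archive name and file path are illustrative):

```
# borg extract restores into the current directory, so work somewhere safe
cd /tmp/restore
borg list /path/to/borg-repo      # find the right archive
borg extract /path/to/borg-repo::laptop-server-2023-08-20T00:00:00 \
    home/mastodon/live/.env.production   # note: no leading slash
```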
I would say my setup is a bit complicated. It is only me doing this, after all, and since it's just for me and nobody else, it doesn't really matter to anyone else.