">
42w, https://alaskalinuxuser3.ddns.net/wp-content/uploads/2024/03/nc-
300x182.png 300w" sizes="(max-width: 742px) 100vw, 742px" />
I don’t wear a tinfoil hat. But I also don’t trust Google. Or any big tech, to
be honest. There are a lot of advantages to using something like Google to sync
your pictures, your documents, your life. There is also the disadvantage that
potentially any Google employee, or some other third party with access, is
getting your data (digital life) for their own purposes.
Some people don’t believe that. I used to be one of those people. Then I
started working on custom ROMs for cell phones, and started realizing just how
much unparalleled access you freely grant by using services from your carrier,
phone manufacturer, and big tech companies like Google. I’m not a criminal, I’m
not a psycho, I’m just a regular guy, but I’d like to keep my life to myself,
and only share it with people I specifically choose to share it with.
Some people also believe like me, and yet they say, “What can I do?” and drive
on with life. I get that, it is tough to break the digital chains that are so
thoroughly part of our culture. Some take small steps, like using apps off of
F-Droid, or dropping some, if not all, social media. Some take it farther, and
install or even build their own custom ROMs for their cell phone, or buy de-
Googled products. And then, some believe it so much that they host their own
Nextcloud server in their laundry room.
At least, mine is in the laundry room. Almost my entire digital life is in my
laundry room, where my own de-Googled phones connect to my own self-hosted
Nextcloud server to sync and back up all of my pictures, my documents, my
music, my movies (through Jellyfin), and my notes, passwords, etc. I felt very
good about this. I felt my data was more private, and it is.
Then one day, the upstairs bathtub sprang a leak and poured water all over my
server.
My digital life barely survived. I made backups, of course, but only every 3
months. My standard backup method was an external hard drive, to which I copied
a Clonezilla image of my server. This worked great, as far as a backup goes,
but as my picture collection, movie collection, music collection, and family
grew, so did the time it took to make a backup. Not to mention I had to buy
ever bigger external hard drives, since I had multi-terabyte RAID arrays. It
often took over 6 hours just to make the backup, not counting any maintenance
to the machines.
This excess time also slowed down upgrades, because I would wait to do upgrades
until I had performed a backup. Hey, it works now, let’s back it up before we
perform an upgrade that hopefully doesn’t screw up any configurations and cause
my server to not serve me….
I needed a better option. And, I think I found it in rsync. Rsync is, of
course, an old tool that has been around for quite a while, but there were some
hurdles that I had to overcome to use it. For instance, you can’t use rsync to
sync another user’s files unless you are root. Further, you can’t rsync
something like the www-data user’s files, because www-data doesn’t have a
login; again, you need root. Yet you can’t rsync or SSH as root to another
machine, or at least, you can’t by default.
In my case, I have server A (for active) and server B (for backup). Server A
and B do not have the same hardware, but do have the same partitions on their
drives and are the same architecture. In the past, I literally used Clonezilla
on server A, and restored that image onto server B to have a ready backup, and
then kept the Clonezilla image hard drive off site. This gave me a mirrored
server, so that if server A ever had a hardware failure, I could plug the
network cable into server B and lose only the data between then and the last backup,
presumably less than 3 months ago.
In my case, server A and server B are on the same network and are mirrors of
each other (at the time of backup). Both have two NICs, one configured with
the server IP address and the other with the backup IP address, so I can plug
the network cable into either NIC and the machine takes on that role. Some
details will remain ambiguous here, and for this discussion,
we will use the IP addresses of 192.168.50.51 for server A, and 192.168.50.28
for server B.
It is generally a bad idea to permit root login over SSH. To mitigate that, I
only allow root login over SSH from one allowed IP address: that of server A,
connecting to server B. Thus, no outside entity can log in as root over SSH,
and no inside entity can log in as root over SSH unless they happen to come
from exactly the correct IP address. Granted, if someone is in your network, you have
some big issues already, but I try to be as secure as possible, within reason.
I configured both server A’s and B’s /etc/ssh/sshd_config file by adding these
two lines at the end of the file:
PermitRootLogin yes
DenyUsers root@"!192.168.50.51,*"
This allows root to log in, and denies all attempts to log in as root unless it
comes from 192.168.50.51, or server A. You only need to do this on server B,
but in my use case, I can technically swap server A and B by simply moving
their network cable to the other NIC, so I did this on both. Make sure you
restart the service after doing this….
$ sudo systemctl restart ssh
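If the restart fails, or logins start being refused unexpectedly, a quick
sanity check is OpenSSH’s built-in configuration test, which prints nothing
when the file parses cleanly:
$ sudo sshd -t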
If this is the first time you’ve ever done this, and you are using a server
running Ubuntu, you have never had a root password before, so you need to set
one now:
$ sudo passwd
It will prompt you for the sudo user password, then the new root password,
twice, to verify it.
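At this point you can verify the restriction behaves the way you expect. From
server A, a root login to server B should succeed, while the same attempt from
any other machine should be refused (using my example addresses from above):
$ ssh root@192.168.50.28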
Then you can simply run rsync from server A to put the data on server B. In my
use case, I am syncing Jellyfin and Nextcloud. I always start with a “dry run”
to make sure it is going to do what I want before I do it for real:
$ sudo rsync -av --dry-run --progress --delete /var/jellyfin/ root@192.168.50.28:/var/jellyfin/
$ sudo rsync -av --dry-run --progress --delete /var/www/ root@192.168.50.28:/var/www/
Note the trailing slashes, which sync the contents of each directory rather
than the directory itself, and the --delete flag, which removes files on
server B that no longer exist on server A. The command will ask for your sudo
password, then the root password of server B. If the dry run looks like it is
going to do what I want, then I actually run the rsync.
$ sudo rsync -av --progress --delete /var/jellyfin/ root@192.168.50.28:/var/jellyfin/
$ sudo rsync -av --progress --delete /var/www/ root@192.168.50.28:/var/www/
And then I thought I was all done. But I was wrong. You see, I synced all the
data, but these programs use an SQL database that is not stored in these
folders. In my case it is MariaDB, which stores its databases in the
/var/lib/mysql/ folder, so you will need to sync that as well.
$ sudo rsync -av --progress --delete /var/lib/mysql/ root@192.168.50.28:/var/lib/mysql/
Technically, you could also use mysqldump to export your database from server
A and then import it on server B with the mysql client, but I found that took a
lot more time, typing, and interaction on my part. Of course you could script
it, but since both machines (in my case) are mirrors of each other, the only
database on there is the same database, and I just copy the whole database from
machine to machine with this one command.
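For reference, a sketch of that export/import approach, assuming the database
is named nextcloud (your database name will differ; on Ubuntu, MariaDB’s
unix-socket authentication lets sudo skip the password prompt):
$ sudo mysqldump nextcloud > nextcloud.sql
$ scp nextcloud.sql root@192.168.50.28:/root/
$ ssh root@192.168.50.28 'mysql nextcloud < /root/nextcloud.sql'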
And for Jellyfin, you also need the /var/lib/jellyfin folder:
$ sudo rsync -av --progress --delete /var/lib/jellyfin/ root@192.168.50.28:/var/lib/jellyfin/
As a side note, if the two machines are mirrors of each other, you could simply
copy all of the /var folder, like so:
$ sudo rsync -av --progress --delete /var/ root@192.168.50.28:/var/
This makes a nice one-liner, but it does copy a bit more “extra” stuff, much of
it unrelated to the subject at hand. I did try this, and aside from taking
longer and generating more network traffic, it worked rather well.
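If you want to trim some of that extra traffic, rsync’s --exclude option can
skip paths under the transfer root. The directories below are just examples of
things you may not care to mirror; pick your own:
$ sudo rsync -av --progress --delete --exclude=/cache --exclude=/tmp --exclude=/log /var/ root@192.168.50.28:/var/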
In any event, after having run rsync, moving both the data and the SQL
database, you now need to restart several services on server B: the SQL
service, Jellyfin, the web service, etc. I found it much easier to simply
reboot server B. Then you can’t forget any particular service, and you know
all of them were restarted. If you don’t do this, as I have proven on occasion,
then when you go to your Nextcloud instance, it will give you a popup about
maintenance mode and updating.
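If you would rather restart the individual services than reboot, it would look
something like this; the exact unit names depend on your setup (apache2 is an
assumption here, your web server may be nginx or something else):
$ sudo systemctl restart mariadb
$ sudo systemctl restart jellyfin
$ sudo systemctl restart apache2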
Now I can put this into a cron job if I like, or run it on a whim if I so
choose, such as when I add new shows or movies to my Jellyfin server, or daily
for my Nextcloud instance. I can also perform this rsync, then run an upgrade
anytime I want, check it out on server A, and if it doesn’t work, fall back to
server B while I figure it out. Another great thing about this is I can still
do a full Clonezilla backup, but of the offline server, such as server B. This
allows me to still have an offsite backup option, while keeping an up-to-date
onsite backup option.
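A sketch of the cron approach: since cron can’t type the root password for
you, unattended runs would need key-based SSH logins (see the note on SSH keys
below). With that in place, an entry in root’s crontab (sudo crontab -e) might
look like this, syncing Nextcloud’s files at 2 AM every day:
0 2 * * * rsync -a --delete /var/www/ root@192.168.50.28:/var/www/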
In theory, you could also use a VPN and do this rsync to an offsite machine. Unfortunately for
me, my internet is limited and slow, so this is not a great option for me.
There are also some inherent security risks you take as you open yourself up
further to the internet at large, but probably not more than you already face
by having your own internet facing Nextcloud instance.
You can also use SSH keys, and set SSH to only allow root login with keys,
even keys that are tied to a specific IP address. What I don’t like about SSH
keys is that if someone gets into your computer, and you have SSH keys
allowing you to go to another computer, they don’t need a password. Then
again, if someone is already into your first computer, they probably have
access to everything anyway; in this case they would already have access to
your server with all of your data anyhow. Still, I like to be careful.
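For what it’s worth, OpenSSH lets you combine both restrictions: a key in
server B’s /root/.ssh/authorized_keys can be limited to one source address
with the from= option, and setting PermitRootLogin to prohibit-password makes
the key the only way in for root. A sketch, with the key material elided:
from="192.168.50.51" ssh-ed25519 AAAA... root@serverA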
Nothing profound here, but I hope it is helpful to others who want to run their
own Nextcloud instance and have a mirrored backup. This works for me, but it
may not be for everyone, given the security trade-off of allowing root logins
over SSH.
Linux – keep it simple.