💾 Archived View for chirale.org › 2021-02-13_6634.gmi captured on 2024-05-12 at 14:52:55. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
3-2-1 backup is not a countdown timer: it’s a well-known backup strategy to avoid data loss by storing copies of valuable data in different places. You keep at least 3 copies of the data, 2 of them stored locally on different devices, and 1 stored off-site, for example on a cloud storage service.
Here’s a recipe for a variant of a 3-2-1 backup in a real-case scenario.
Hardware
Software
Time
1 hour
The real-world scenario involves a remote system reachable over a VPN. First, configure the connection using a public-private key pair. Since it’s a local Linux box, setting up the connection from the user interface is quick and easy (1). Once it is set up, you can check whether the connection is active, or activate the VPN connection programmatically (2), as in this Groovy pipeline:
```
node ("local") {
    stage("vpn up") {
        sh '''
            # Connect to the VPN only if not already connected
            IS_ACTIVE=`nmcli con show --active | grep your-vpn-connection-name | wc -l`
            if [ $IS_ACTIVE -eq "1" ]
            then
                echo 'already connected, continue';
            else
                # Connect to VPN set up with https://chirale.org/2018/03/27/how-to-import-ovpn-files-on-ubuntu-linux-network-manager/
                sudo nmcli con up id your-vpn-connection-name
            fi
        '''
    }
}
```
To use nmcli, the jenkins user on the local box must have access to the nmcli command. Grant these permissions (3) using:
```
sudo visudo
```
and add these lines to sudoers:
```
# on Cmnd alias specification
Cmnd_Alias VPN_CONNECT = /usr/bin/nmcli ...

# on bottom
# jenkinsuser can activate / deactivate network interfaces using nmcli and use rsync
jenkinsuser ALL=(ALL) NOPASSWD: /usr/bin/rsync, VPN_CONNECT
```
Whenever you add or remove something via visudo, or add the jenkinsuser to some group, restart the Jenkins agent (4) from the interface to force a logout-login so the new permissions take effect.
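If you prefer the command line over the Jenkins interface, one way to bounce the agent is to restart its service. This is a sketch that assumes Jenkins is managed by systemd under the unit name jenkins; adjust the unit name to your installation:

```shell
#!/bin/sh
# Assumption: Jenkins runs as a systemd service named "jenkins" on the
# local box. Restart it so the new sudoers/group membership is picked up.
sudo systemctl restart jenkins

# Confirm the service is active again before re-running pipelines
if systemctl is-active --quiet jenkins; then
  echo "jenkins restarted"
fi
```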
After running pipeline (2), the local box should now have access to the remote VPN.
Suppose you already have a couple of directories under /var/backup periodically synced by a custom cron script or something like rsnapshot. This backup is the first copy of the 3-2-1 backup set. The 3-2-1 approach then asks you to duplicate that copy onto a different device on the same machine. The following commands, run on your local Linux box, pull the copy from remote to local; this is the final 1 copy of the 3-2-1 backup.
To create the copy locally, download from remote to local using rsync (5). Here’s a very simple command:
```
export BCKDIR=/opt/backup/remotehost
cd $BCKDIR
rsync -rltvz --no-o --no-g user@remotehost:/var/backup/db .
rsync -rltvz --no-o --no-g user@remotehost:/var/backup/images .
```
Command explained via explainshell
A more classical approach is to use rsync -av to preserve all permissions, but since the resources are plain (compressed) SQL dumps or images, there’s no reason to keep user, group or permissions. Adapt the command above to your use case.
At the start of this article I mentioned that I use a variant of the 3-2-1 approach: call it 4-1-2-1, since it doesn’t formally exist.
There are 4 total copies:
A reliable approach for creating the second copy on local is to redo the same commands from step (5) with a different $BCKDIR, pointing to where the external HDD is mounted on local. However, if you want to unify all backups in a single pipeline that periodically updates the whole backup set, you can do something like this:
```
node ("local") {
    echo "Copy to external HDD"
    stage("redundant backup") {
        sh "rsync -rltvzO /opt/backup/ /mnt/myexternalhdd/backup/secondcopy/"
    }
}
```
Now that you have all valuable data locally, you can use some external service as a backup of last resort. I’ll cover this topic soon on part
https://web.archive.org/web/20210213000000*/https://rsnapshot.org/