code backup headaches

Back in the day when github became a thing (10 years ago? dunno) I started to push everything "serviceable" to github. I followed in the footsteps of others, attempted to write a popular library and to become an open source contributor. Soon github became more than just a nice README markdown renderer. They had issue tracking, wikis, project management, release management and CI/CD integration. That may work for some people and not for others; I just wanted to host my stuff somewhere.

Then came the headaches. At home, a NAS ran another git server to store all those funny scripts I use to maintain and set up remote and local machines. Those included passwords, keyfiles and everything, so on github they would have been public. I replicated all repos from github to local and had more repos that only existed locally. When I sold my NAS and reduced the amount of virtual "goods" to a fraction (I did the same with real life), I could move to a cloud storage provider. That had massive advantages, as photo sharing with family and friends is still a thing, and my phone needs to put those photos somewhere.

I ended up creating a single bare git folder in my cloud storage containing all of my projects. Something I haven't cloned even once. Not because I didn't want to, but because on my RPI or on some remote server I had no cloud client. Cloning locally, then scp'ing it up, doing something, scp'ing the changes down, committing, pushing and uploading the changes to cloud storage is pretty ridiculous when you think about it. Especially when most repos are used in a single place. One machine.

After a lot of helpful responses on station (link near the end if interested) I came up with something brutally simple for dropbox (also doable for other storage providers with an API):

  --------------------                                   -----------------
  | <foobar> project |                                   | cloud storage |
  |  |               | tar.gz                  upload    |  |            |
  |  --> .git/       | =======> foobar.tar.gz =========> |  --> /git/    |
  --------------------                                   -----------------

I compress the `.git` folder and upload it to cloud storage under the name `basename $(pwd)`. Existing files get overwritten. When I need it somewhere, I create a folder with the name of the project, then download and extract the compressed `.git` folder inside it. After a `git reset --hard` all files are present. All done.
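
In plain git terms the round trip looks roughly like this (foobar and the paths are just an example, and the upload/download steps stand in for whatever your storage provider offers):

# machine that has the work: pack up only the git database
cd ~/projects/foobar
tar zcf foobar.tar.gz .git
# ... upload foobar.tar.gz to /git/ in cloud storage, overwriting any old copy ...

# machine that needs the project: recreate it from the archive
mkdir foobar && cd foobar
# ... download foobar.tar.gz from /git/ ...
tar zxf foobar.tar.gz
rm foobar.tar.gz
git reset --hard    # rebuilds the working tree from the restored .git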

I get that this won't work if you're collaborating or have a moving target across multiple machines with uncommitted work. But it certainly works for single users.

Here's part of the script I use. Should be functional as is. You need your own AUTH token.

#!/bin/bash

AUTH=<your secret token here>
UPLOAD_URL="https://content.dropboxapi.com/2/files/upload"
DOWNLOAD_URL="https://content.dropboxapi.com/2/files/download"

AUTH_HEADER="Authorization: Bearer $AUTH"
BINARY_HEADER="Content-Type: application/octet-stream"
CURL="curl -k"   # -k skips TLS certificate verification

HERE=$(pwd)
NAME=$(basename "$HERE")   # archive name = name of the project folder
case $1 in
  upload)
    if [ ! -d ".git" ]; then
      echo ".git folder doesn't exist"
      exit 1
    fi
    if [ -f "$NAME.tar.gz" ]; then
      echo "$NAME.tar.gz already exists"
      exit 1
    fi
    # pack only the git database; the working tree can be rebuilt from it
    tar zcf "$NAME.tar.gz" .git
    # "mode":"overwrite" tells Dropbox to replace an existing archive of the same name
    $CURL -X POST -H "$AUTH_HEADER" -H "$BINARY_HEADER" --data-binary @"$NAME.tar.gz" -H "Dropbox-API-Arg: {\"path\":\"/git/$NAME.tar.gz\",\"mode\":\"overwrite\"}" "$UPLOAD_URL"
    rm "$NAME.tar.gz"
    ;;

  download)
    if [ -d ".git" ]; then
      echo ".git folder exists"
      exit 1
    fi
    $CURL -H "$AUTH_HEADER" -H "Dropbox-API-Arg: {\"path\":\"/git/$NAME.tar.gz\"}" -o "$NAME.tar.gz" "$DOWNLOAD_URL"
    tar zxf "$NAME.tar.gz"
    rm "$NAME.tar.gz"
    # rebuild the working tree from the freshly extracted .git
    git reset --hard
    ;;

  *)
    echo "usage: $0 upload|download"
    exit 1
    ;;
esac
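
Saved as an executable somewhere in your PATH (call it gitdrop, say; the name doesn't matter), usage boils down to:

cd ~/projects/foobar
gitdrop upload        # tars .git and pushes it to /git/foobar.tar.gz

# later, on another machine
mkdir foobar && cd foobar
gitdrop download      # fetches the archive, extracts .git, runs git reset --hard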

In case I could've solved this easier, smarter or with less code, tell me :)

station