ew0k wrote about the annoyances of handling software updates on your server in a world where programming languages have their own dependency management schemes, and how this requires sysadmins to jump through extra hoops to keep things up to date.
=> gemini://warmedal.se/~bjorn/posts/2022-08-09-let-s-update-our-server-2022-edition.gmi ew0k's post
There's one distinction I think is worth highlighting here, namely whether you are producing the software or just consuming it. If you are producing software then you are kind of SOL, but that's the bullet you need to bite: do your due diligence with your dependencies and keep them updated, just like you make sure you don't crash your server or worse with sloppy coding.
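For illustration, a few commands of the sort I have in mind for that due diligence; which ones apply depends entirely on the ecosystem your project is built on:

```
npm audit            # report known vulnerabilities in Node dependencies
npm outdated         # list Node dependencies with newer releases
cargo update         # refresh Cargo.lock within your semver constraints
pip list --outdated  # list Python packages with newer releases
```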
If you are just consuming software then in the best case your life for software upgrades is just the "sudo apt update && sudo apt upgrade -y" spell you throw at your server every now and then to keep things in check. In practice this isn't always possible, because you may need something more modern than what gets packaged, but striving for that ideal is not a bad goal. Stuff like Node and Python apps (and even the dreaded Rust ones) can be packaged in a way that keeps them and their dependencies up-to-date via apt and dnf, so ultimately this issue mostly boils down to which software gets packaged and when. The packaging delay is probably going to remain unless the Linux packaging process gets simpler and devs start actually packaging their own software.
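If you want to go a step further than casting that spell by hand, the major distro families ship tooling to automate it. A rough sketch, assuming a Debian/Ubuntu system on one side and a Fedora-style one on the other:

```
# Debian/Ubuntu: install and enable unattended security upgrades
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# Fedora and friends: dnf-automatic with its systemd timer; whether it
# only downloads or also applies updates is set in /etc/dnf/automatic.conf
sudo dnf install dnf-automatic
sudo systemctl enable --now dnf-automatic.timer
```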
On the Docker front things aren't too different: if you are producing the images then try to do it well, and if you are consuming images then keep them updated. It is easy to ship outdated libraries and software with Docker, so things aren't exactly rosy there, but there are at least solutions for keeping software deployed as containers updated. One that I know of is podman's auto-update label and the associated systemd service, which automatically pulls new images from a registry and restarts the containers. If you do have to build stuff, you probably should set up automation to pull in new base images and automatically build and deploy the images, rather than trying to handle this manually for every Dockerfile you might want to use. You obviously avoid all of that if you only consume maintained images from a registry.
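As a sketch of how the podman route works (the container name and image here are just placeholders): you label the container, hand it over to a systemd unit, and enable podman's auto-update timer.

```
# Run a container once with the auto-update label, so podman knows to
# check the registry for newer versions of the image.
podman run -d --name my-service \
    --label "io.containers.autoupdate=registry" \
    docker.io/library/nginx:latest

# Generate a systemd unit that recreates the container on start,
# then hand the container over to systemd.
podman generate systemd --new --name my-service \
    > ~/.config/systemd/user/container-my-service.service
podman stop my-service && podman rm my-service
systemctl --user daemon-reload
systemctl --user enable --now container-my-service.service

# Enable the timer that periodically pulls newer images and restarts
# any containers labeled for auto-update.
systemctl --user enable --now podman-auto-update.timer
```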
I also don't really know if we can realistically say that things are particularly bad in 2022 in this regard. People have been using software from outside their distro repos forever and have likely had to deal with basically the same issues. Managing tarballs of scripting spaghetti from some obscure website probably isn't a significantly more elegant experience than juggling language-specific dependency managers.