💾 Archived View for gemini.hitchhiker-linux.org › gemlog › capsule_deployment_pipeline.gmi captured on 2024-06-16 at 12:28:11. Gemini links have been rewritten to link to archived content
In my previous post I shared some thoughts on Docker and CI. In short: I hate Docker, think it's a huge waste of resources, and believe many projects abuse CI. Just to give an example of a different way, here's the Makefile that builds and deploys my own personal capsule.
```
all: build capsule.tar upload

build:
	zond build

capsule.tar: public/index.gmi
	cd public && tar cf ../capsule.tar *

upload: capsule.tar
	scp capsule.tar gimli:/home/nathan/capsule.tar
	ssh gimli tar xf capsule.tar -C /srv/gemini

clean:
	rm -rf capsule.tar public

.PHONY: all build clean upload
```
Now granted, this only works because I have control over the server (a Raspberry Pi 4 running openSUSE, which also serves a Gitea instance and my Finger server). That said, after adding a new post, all I have to do to publish it is type `make` on the command line. Walking through the Makefile: it builds the capsule with my site generator, Zond, then creates a tar archive of the output. That archive is copied to the server using scp, and finally extracted on the server by running tar remotely via ssh. The entire process finishes in less than a second. All of the tooling is installed locally; I just need ssh access to the server. Since I also have ssh-agent running, there usually isn't even a password prompt.
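As an aside, the bare `gimli` shorthand in the scp and ssh lines could come from an entry in ~/.ssh/config along these lines. Only the alias itself appears in the Makefile; the hostname, user, and key path below are my guesses, not from the post.

```
# ~/.ssh/config
Host gimli
	HostName gimli.example.net
	User nathan
	IdentityFile ~/.ssh/id_ed25519
```

With ssh-agent holding the key, every scp and ssh invocation then goes through without a prompt.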
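For anyone who'd rather not use make, the same pipeline can be sketched as a plain shell script. The host name (gimli) and the paths come from the Makefile above; the dry-run wrapper is my own addition so the script can be sanity-checked without touching a server. Set DRY_RUN=0 to actually deploy.

```shell
#!/bin/sh
# Sketch of the Makefile's deploy pipeline as a shell script.
# By default it only prints the commands (DRY_RUN=1); run with
# DRY_RUN=0 to really build, upload, and extract.
set -eu
: "${DRY_RUN:=1}"

run() {
	if [ "$DRY_RUN" = 1 ]; then
		echo "would run: $*"
	else
		"$@"
	fi
}

deploy() {
	run zond build                                    # generate the capsule into public/
	run sh -c 'cd public && tar cf ../capsule.tar .'  # pack it up
	run scp capsule.tar gimli:/home/nathan/capsule.tar
	run ssh gimli tar xf capsule.tar -C /srv/gemini   # unpack on the server
}

deploy
```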
Current industry practice would have you push to a remote git repo, triggering a CI build which pulls down some Docker shiite to build a full operating system that includes your static site generator, builds the site, creates a git commit and pushes it to a different repo, where it is served via someone else's server, which might very well be another Docker container, one amongst thousands of server instances. Then the Docker shiite that was used to build the site gets torn down and deleted, only to be pulled down over the network and rebuilt the next time the pipeline runs.
I would submit that the CI deployment pipeline is no easier to set up and offers no added convenience over my simple little Makefile. The drawbacks, though, should be hugely obvious: an enormous amount of resources wasted in the name of convenience.
Someone posted on Fedi earlier about how they're using Woodpecker CI along with Codeberg Pages to conveniently deploy their site. I applaud that they're using Codeberg rather than GitHub, but ironically, their .woodpecker.yaml file is 21 lines compared with my 11-line deployment script. I'm not really impressed.
I want to suggest a slogan to the Docker project: Resource wastage at scale.
All content for this site is licensed as CC BY-SA.
© 2022 by JeanG3nie