<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Solene'%</title>
<description></description>
<link>gemini://perso.pw/blog/</link>
<atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
<item>
<title>Old Computer Challenge v3: postmortem</title>
<description>
<![CDATA[
<pre># Challenge report
Hi! I haven't been very communicative about my week during the Old Computer Challenge v3; the reason is that I failed it. Time for a postmortem (an analysis of what happened) to understand the failure!
For context, the last time I used restricted hardware was for the first edition of the challenge, two years ago. Last year's challenge was about reducing Internet connectivity.
# Wasn't prepared
I have to admit, I didn't prepare anything. I thought I could simply apply the limits on my laptop, either on OpenBSD or openSUSE, and enjoy the challenge. It turned out to be more complicated than that.
- OpenBSD's memory limitation code wasn't working on my system for some reason (I should report this issue)
- openSUSE didn't finish booting after 30 minutes with 512 MB of memory, even with swap added, and once there I couldn't log in through GDM
I had to figure out a backup plan, which turned out to be Alpine Linux installed on a USB memory stick: the memory and core count restrictions worked out of the box, and while figuring out how to effectively reduce the CPU frequency was hard, I finally did it.
From this point, I had a non-encrypted Alpine Linux on a poor storage medium. What would I do with this? Nothing much.
# Memory limitation
It turns out that in 2 years, my requirements evolved a bit. 512 MB wasn't enough to use a web browser with JavaScript, and while I thought it wouldn't be such a big deal, it WAS.
I regularly need to visit some websites, and doing it on my untrusted smartphone is a no-go, so I need a computer; Firefox with 512 MB just doesn't work. Chromium almost works, but it depends on the page, and WebKit browsers often didn't work well enough.
Here is a sample of websites I needed to visit:
- OVH web console
- Patreon web page
- Bank service
- Some online store
- Mastodon (I have such a huge flow that CLI tools don't work well for me)
- Kanban tool
- Deepl for translation
- Replying to people on some open source project Discourse forums
- Managing stuff in GitHub (gh tool isn't always on-par with the web interface)
For this reason, I often had to use my "work" computer to do the tasks, and ended up inadvertently continuing on this computer :(
In addition to web browsing, some programs like LanguageTool (a Java GUI spellchecking program) required too much memory to start, so I couldn't even spellcheck my blog posts (Aspell is not as complete as LanguageTool).
# CPU limitation
At first when I thought about the rules for the 3rd edition, the CPU frequency seemed to be the worst part. In practice, the system was almost swapping continuously but wasn't CPU bound. Hardware acceleration was fast enough to play videos smoothly.
If you can make good use of the 512 MB of memory, you certainly won't have CPU problems.
# Security issues
This is not related to the challenge itself, but I felt a bit stuck with my untrusted Alpine Linux. My SSH / GPG keys and my passwords are secured on two systems, I can hardly do anything without them, and I didn't want to take the risk of compromising my security chain for the challenge.
In fact, since I started using Qubes OS, I've become reluctant to mix all my data on a single system, even the other one I'm used to working with (which has all the credentials too), but Qubes OS is the anti-Old-Computer-Challenge, as you need to throw as much hardware as you can at it to make it useful.
# Not a complete failure
However, the challenge wasn't a complete failure for me. While I can't say I played by the rules, it definitely helped me realize how my computer use has changed over the last few years. That was the point of the "offline laptop" project I started three years ago, which transformed into the Old Computer Challenge the year after.
As I wasn't able to fulfill the challenge requirements, I tried to use the computer less and did some stuff IRL, at home and outside. The week went SUPER FAST, and I was astonished to realize it's already over. The challenge also forced me to look for solutions, so I spent *a LOT* of time trying to make Firefox fit in 512 MB; TLDR, it didn't work.
The LEAST memory I'd need nowadays is 1 GB. It's still not much compared to what we have these days (my main system has 32 GB), but it's twice the requirement I set for the first edition.
# Conclusion
It seems everyone had a nice week with the challenge, and I'm very happy to see the community enjoying this every year. I may not be a paragon of the challenge this year, but it was useful to me, and since then I haven't stopped thinking about how to improve my computer usage.
Next challenge should be two weeks long :)
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part2.gmi</guid>
<link>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part2.gmi</link>
<pubDate>Mon, 17 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>How-to install Alpine Linux in full RAM with persistency</title>
<description>
<![CDATA[
<pre># Introduction
In this guide, I'd like to share with you how to install Alpine Linux so that it runs entirely from RAM, while using its built-in tool to handle persistency. It's a perfect setup for a NAS or a router, as you don't waste a disk for the system, and it can even be used for a workstation.
=> https://www.alpinelinux.org Alpine Linux official project website
=> https://wiki.alpinelinux.org/wiki/Alpine_local_backup Alpine Linux wiki: Alpine local backup
# The plan
Basically, we want to get the Alpine installer on a writable disk formatted in FAT instead of a read-only image like the official installers, then we will use the command `lbu` to handle persistency, and we will see what needs to be configured to have a working system.
This is only a list of steps, they will be detailed later:
1. boot from an Alpine installer (if you are already using Alpine, you don't need to)
2. format a USB memory drive with an ESP partition and make it bootable
3. run `setup-bootable` to copy the bootloader from the installer to the freshly formatted drive
4. reboot on the USB drive
5. run `setup-alpine`
6. you are on your new Alpine system
7. run `lbu commit` to make changes persistent across reboots
8. make changes, run `lbu commit` again
=> static/solene-lbu.png A mad scientist girl with a t-shirt labeled "rare t-shirt" is looking at a penguin strapped to a Frankenstein-like machine, its head connected to a huge box with LBU written on it.
=> https://merveilles.town/@prahou Artwork above by Prahou
# The setup
## Booting Alpine
For this step you have to download an Alpine Linux installer, take the one that suits your needs, if unsure, take the "Extended" one. Don't forget to verify the file checksum.
=> https://www.alpinelinux.org/downloads/
Once you have the ISO file, create the installation media:
=> https://docs.alpinelinux.org/user-handbook/0.1a/Installing/medium.html#_using_the_image Alpine Linux documentation: Using the image
Now, boot your system using your brand-new installer.
## Writable boot media creation
In this step, we will need to boot on the Alpine installer to create a new Alpine installer, but writable.
You need another USB media for this step, the one that will keep your system and data.
On Alpine Linux, you can use `setup-alpine` to configure your network, key map and a few things for the current system. You only have to say "none" when you are asked what you want to install, where, and if you want to store the configuration somewhere.
Run the following commands against the destination USB drive (networking is required to install a package); they will format it and use all the space as a single FAT32 partition. In the example below, the drive is `/dev/sdc`.
apk add parted
parted /dev/sdc -- mklabel gpt
parted /dev/sdc -- mkpart ESP fat32 1MB 100%
parted /dev/sdc -- set 1 esp on
This creates a GPT table on `/dev/sdc`, then creates a first partition as FAT32 from the first megabyte up to the full disk size, and finally marks it bootable. This guide is only for UEFI compatible systems.
We actually have to format the drive as FAT32, otherwise it's just a partition type without a way to mount it as FAT32:
mkfs.vfat /dev/sdc1
modprobe vfat
Final step: we use an Alpine tool to copy the bootloader from the installer to our new disk. In the example below, the installer may be mounted on `/media/usb` and the destination is `/dev/sdc1`; you can figure out the first one using `mount`.
setup-bootable /media/usb /dev/sdc1
At this point, you have a FAT32 USB disk containing the Alpine Linux installer you were just using live. Reboot on the new one.
## System installation
On your new installation media, run `setup-alpine` as if you were installing Alpine Linux, but answer "none" when you are asked which disk you want to use. When asked "Enter where to store configs", your new device should be proposed by default; accept. Immediately after, you will be prompted for an APK cache; accept.
At this point, we can say Alpine is installed! Don't reboot yet, you are already on your new system!
Just use it, and run `lbu commit` when you need to save changes done to packages or `/etc/`. `lbu commit` creates a new tarball on your USB disk containing the files listed in `/etc/apk/protected_paths.d/`; this tarball is loaded at boot time, and your package list is installed quickly from the local cache.
=> https://wiki.alpinelinux.org/wiki/Alpine_local_backup Alpine Linux wiki: Alpine local backup (lbu command documentation)
Take extra care: if you include more files, they have to be stored on your USB media every time you commit the changes. You could modify the fstab to add an extra disk/partition for persistent data on a faster drive.
## Extra configuration
Here is a list of tweaks to improve your experience!
### keep last n configuration
By default, lbu only keeps the last version you save. By setting `BACKUP_LIMIT` to a number n, you will always have the last n versions of your system stored on the boot media, which is practical if you want to roll back a change.
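For example, in `/etc/lbu/lbu.conf` (a sketch; 3 is an arbitrary value):
# keep the last 3 committed versions on the boot media
BACKUP_LIMIT=3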
### apk repositories
Edit `/etc/apk/repositories` to uncomment the community repository.
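The file should then look something like this (the mirror URL and version are examples; keep whatever `setup-alpine` wrote and just uncomment the community line):
http://dl-cdn.alpinelinux.org/alpine/v3.18/main
http://dl-cdn.alpinelinux.org/alpine/v3.18/community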
### fstab check
Edit `/etc/fstab` to make sure the disk you are using is explicitly configured using a UUID entry, if you only have this:
/dev/cdrom /media/cdrom iso9660 noauto,ro 0 0
/dev/usbdisk /media/usb vfat noauto,ro 0 0
This means your system may have trouble if you use it on a different computer or if you plug another USB disk into it. Fix this by using the UUID of your partition, which you can find with the program `blkid` from the eponymous package, and adapt the fstab like this:
UUID=61B2-04FA /media/persist vfat noauto,ro 0 0
/dev/cdrom /media/cdrom iso9660 noauto,ro 0 0
/dev/usbdisk /media/usb vfat noauto,ro 0 0
This will ALWAYS mount your drive as `/media/persist`.
If you had to make the change, you need to make some extra changes to keep things coherent:
- set `LBU_MEDIA=persist` into `/etc/lbu/lbu.conf`
- umount the drive in `/media` and run `mkdir -p /media/persist && mount -a`, you should have `/media/persist` with data in it
- run `lbu commit` to save the changes
### desktop setup
You can install a graphical desktop, this can easily be done with these commands:
setup-desktop xfce
setup-xorg-base
Due to a bug, we have to re-enable some important services, otherwise you would not have networking at the next boot:
rc-update add hwdrivers sysinit
=> https://gitlab.alpinelinux.org/alpine/aports/-/issues/9653 Alpine bug report #9653
You may want to enable the display manager at boot, which may be lightdm, gdm or sddm depending on your desktop:
rc-update add lightdm
### user persistency
If you added a user during `setup-alpine`, its home directory has been automatically added to `/etc/apk/protected_paths.d/lbu.list`; when you run `lbu commit`, the whole home is stored. This may not be desired.
If you don't want to save the whole home directory, but only a selection of files/directories, here is how to proceed:
1. edit `/etc/apk/protected_paths.d/lbu.list` to remove the line adding your user directory
2. you need to create the user directory at boot with the correct permissions: `echo "install -d -o solene -g solene -m 700 /home/solene" | doas tee /etc/local.d/00-user.start`
3. in case you have persistency set on at least one of the user's subdirectories, it's important to fix the permissions of all the user data after boot: `echo "chown -R solene:solene /home/solene" | doas tee -a /etc/local.d/00-user.start`
4. you need to mark this script as executable: `doas chmod +x /etc/local.d/00-user.start`
5. you need to run the local scripts at boot time: `doas rc-update add local`
6. save the changes: `doas lbu commit`
I'd recommend using a directory named `Persist` and adding it to the lbu list. Doing so, you have a place to store some important data without having to save your whole home directory (including garbage such as caches). This is even nicer if you use ecryptfs as explained below.
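A minimal sketch, assuming a user named solene:
mkdir -p /home/solene/Persist
lbu add /home/solene/Persist
lbu commit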
### extra convenience
Because Alpine Linux is packaged in a minimalistic manner, you may have to install a lot of extra packages to have all the fonts, icons, emojis, cursors etc... working correctly as you would expect for a standard Linux desktop.
Fortunately, there is a community guide explaining each section you may want to configure.
=> https://wiki.alpinelinux.org/wiki/Post_installation Alpine Linux wiki: Post installation
### Set X default keyboard layout
Until you log into your session, X on Alpine insists on a QWERTY keyboard layout, which makes typing passwords complicated.
You can create a file `/etc/X11/xorg.conf.d/00-keyboard.conf` like in the linked example and choose your default keyboard layout. You will have to create the directory `/etc/X11/xorg.conf.d` first.
=> https://wiki.archlinux.org/title/Xorg/Keyboard_configuration#Using_X_configuration_files Arch Linux wiki: Keyboard configuration
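For example, to default to a French layout (a sketch following the linked documentation; replace `fr` with your layout):
Section "InputClass"
        Identifier "system-keyboard"
        MatchIsKeyboard "on"
        Option "XkbLayout" "fr"
EndSection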
### encrypted personal directory
You could use ecryptfs to either encrypt the home partition of your user, or just give it a Private directory that could be unlocked on demand AND made persistent without pulling all the user files at every configuration commit.
$ doas apk add ecryptfs-utils
$ doas modprobe ecryptfs
$ ecryptfs-setup-private
Enter your login passphrase [solene]:
Enter your mount passphrase [leave blank to generate one]:
[...]
$ doas lbu add $HOME/.Private
$ doas lbu add $HOME/.ecryptfs
$ echo "install -d -o solene -g solene -m 700 /home/solene/Private" | doas tee /etc/local.d/50-ecryptfs.start
$ doas chmod +x /etc/local.d/50-ecryptfs.start
$ doas rc-update add local
$ doas lbu commit
Now, when you need to access your private directory, run `ecryptfs-mount-private` and you have your `$HOME/Private` directory which is encrypted.
You could use ecryptfs to encrypt the whole user directory; this requires extra steps and changes in `/etc/pam.d/base-auth`. Don't forget to add `/home/.ecryptfs` to the lbu include list.
=> https://dataswamp.org/~solene/2023-03-12-encrypt-with-ecryptfs.html Using ecryptfs guide
# Security
Let's be clear, this setup isn't secure! The weak part is the boot media, which doesn't use secure boot, could easily be modified, and has nothing encrypted (except the local backups, but NOT BY DEFAULT).
However, once the system has booted, if you remove the boot media, nothing can be damaged as everything lives in memory, but you should still use passwords for your users.
# Conclusion
Alpine is a very good platform for this kind of setup, and they provide all the tools out of the box! It's a very fun setup to play with.
Don't forget that by default everything runs from memory without persistency, so be careful if you generate data you don't want to lose (passwords, downloads, etc...).
# Going further
The lbu configuration can be encrypted, this is recommended if you plan to carry your disk around, especially if it contains sensitive data.
You can use the FAT32 partition only for the bootloader and the local backup files, and add an extra partition to be mounted for /home or similar, why not with a layer of LUKS for encryption.
You may want to use zram if you are tight on memory: it creates a compressed block device that can be used for swap. It's basically compressed RAM; very efficient, but less useful if you have a slow CPU.
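A sketch of a zram swap setup on Alpine (assuming the `zram-init` package; check its documentation for the configuration file, usually `/etc/conf.d/zram-init`):
apk add zram-init
rc-update add zram-init
rc-service zram-init start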
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/alpine-linux-from-ram-but-persistent.gmi</guid>
<link>gemini://perso.pw/blog//articles/alpine-linux-from-ram-but-persistent.gmi</link>
<pubDate>Tue, 18 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Introduction to immutable Linux systems</title>
<description>
<![CDATA[
<pre># Introduction
If you've reached this page, you may be interested in this new category of Linux distributions labeled "immutable".
In this category, one can find, by age (oldest → youngest): NixOS, Guix, Endless OS, Fedora Silverblue, OpenSUSE MicroOS, Vanilla OS, and more to come.
I will give examples of immutability implementation, then detail my thoughts about immutability, and why I think this naming can be misleading. I spent a few months running all of those distributions on my main computers (NAS, Gaming, laptop, workstation) to be able to write this text.
# What's immutability?
The word immutability itself refers to an object that can't change.
However, when it comes to an immutable operating system, the definition immediately becomes vague. What would an operating system that can't change be? What would you be supposed to do with it?
We could say that a Linux LIVE-CD is immutable, because every time you boot it, you get the exact same programs running, and you can't change anything as the disk media is read only. But while the LIVE-CD is running, you can make changes to it, you can create files and directories, install packages, it's not stuck in an immutable state.
Unfortunately, while this example was nice, the immutability approach of those Linux distributions is totally different, so we need to think a bit further.
There are three common principles in these systems:
- system upgrades aren't done on the live system
- package changes are applied on the next boot
- you can roll back a change
Depending on the implementation, a system may offer more features. But this list is what a Linux distribution should have to be labelled "immutable" at the moment.
# Immutable systems comparison
Now that we have found the minimum requirements to be called immutable, let's go through each implementation, in order of appearance.
## NixOS / Guix
In this section, I'm mixing NixOS and Guix as they both rely on the same implementation. NixOS is based on Nix (first appeared in 2003), which was forked in the early 2010s into the Guix package manager to be 100% libre, giving birth to an eponymous operating system, also 100% free.
=> https://nixos.org/ NixOS official project website
=> https://guix.gnu.org/ Guix official project website
=> https://jonathanlorimer.dev/posts/nix-thesis.html Jonathan Lorimer's blog post explaining Eelco Dolstra's thesis about Nix
These two systems are really different from the traditional Unix-like systems we are used to, and immutability is a core principle. To make it quick, they are based on their package manager (Nix or Guix), which stores every package or built file in a special read-only directory (where only the package manager can write), where each package has its own unique entry, and the operating system itself is a byproduct of the package manager.
What does that imply? The operating system is built, because it's made of source code: you literally describe what you want your system to be, in a declarative way. You have to list users, their shells, installed packages, running services and their configurations, partitions to mount with which options, etc. Fortunately, it's made a lot easier by the use of modules which provide sane defaults, so if you create a user, you don't have to specify its UID, GID, shell, home, etc.
As the system is built and stored in the special read-only directory, your whole system is derived from it (using symbolic links), so all the files handled by the package manager are read-only. A concrete example: /etc/fstab or /bin/sh ARE read-only; if you want to make a change to those, you have to do it through the package manager.
I'm not going into details, because this store-based package manager is really different from everything else, but:
- you can switch between two configurations on the fly as it's just a symlink dance to go from a configuration to another
- you can select your configuration at boot time, so you can roll back to a previous version if something is wrong
- you can't make change to a package file or system file as they are read only
- the mount points except the special store directory are all mutable, so you can write changes in /home, /etc, /var, etc. You can replace the system symlinks with a modified version, but you can't modify the symlink target itself.
This is the immutability as seen through the Nix lens.
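As a concrete sketch, here is what the configuration / rollback dance looks like from the command line on NixOS:
# build the system described in /etc/nixos/configuration.nix and switch to it
nixos-rebuild switch
# list the system generations available at boot time
nix-env --list-generations --profile /nix/var/nix/profiles/system
# switch back to the previous generation on the fly
nixos-rebuild switch --rollback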
I've spent a few years running NixOS systems; this is really a blast for me, and the best "immutable" implementation around, but unfortunately it's too different, so its adoption rate is very low, despite all the benefits.
=> https://discourse.nixos.org/t/my-issues-when-pushing-nixos-to-companies/28629/1 NixOS forum: My issues when pushing NixOS to companies
## Endless OS
While this one is not the oldest immutable OS around, it's the first one released for the average user, while NixOS and Guix are older but for a niche user category. The company behind Endless OS is trying to offer a solid and reliable system, free and open source, that can work without Internet, to be used in countries with low Internet / power grid coverage. They even provide a version with "offline internet included", containing Wikipedia dumps, class lessons and many things to make a computer useful while offline (I love their work).
=> https://www.endlessos.org/ Endless OS official project website
Endless OS is based on Debian, but uses the OSTree tool to make it immutable. OSTree allows you to manage a core system image and add layers on top of it (think of packages as layers). It can also prepare a new system image for the next boot.
With OSTree, you can apply package changes in a new version of the system that will be available at next boot, and revert to a previous version at boot time.
The partitions are mounted writable, except for `/usr`, the land of packages handled by OSTree, which is mounted read-only. There are no rollbacks possible for `/etc`.
Programs meant for the user (not the packages used by the system like grub, the X display or drivers) are installed from Flatpak (which also uses OSTree, but unrelated to the system); this avoids the need to reboot each time you install a new package.
My experience with Endless OS is mixed; it is an excellent and solid operating system, it works well and never failed, but I'm just not the target audience. They provide a modified GNOME desktop that looks like a smartphone menu, because this is what most non-tech users are comfortable with (but I hate it). Installing DevOps tools isn't practical, though not impossible, so I keep Endless OS for my multimedia netbook, and I really enjoy it.
## Fedora Silverblue
This Linux distribution is the descendant of Project Atomic, an older initiative to make Fedora / CentOS / RHEL immutable. It's now part of the Fedora releases, along with Fedora Workstation.
=> https://projectatomic.io/ Project Atomic website
=> https://fedoraproject.org/silverblue/ Fedora Silverblue project website
Fedora Silverblue is also using OSTree, but with a twist. It's using rpm-OSTree, a tool built on top of OSTree to let your RPM packages apply the changes through OSTree.
The system consists of a single core image for the release, let's say fedora-38, and for each package installed, a new layer is added on top of the core. At any time, you can list all the layers to know which packages have been installed on top of the core; if you remove a package, the whole stack is generated again without it (which is terribly SLOW), and there is absolutely no leftover after a package removal.
On boot, you can choose an older version of the system, in case something broke after an upgrade. If you install a package, you need to reboot to have it available, as the change isn't applied to the currently booted system; however, rpm-OSTree received a nice upgrade: you can temporarily merge the changes of the next boot into the live system (using a tmpfs overlay) to use them right away.
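As an illustration, the usual rpm-OSTree workflow looks like this (`apply-live` is the newer command; it used to be `rpm-ostree ex apply-live`):
# stage a package for the next boot
rpm-ostree install htop
# merge the pending deployment into the running system
rpm-ostree apply-live
# list deployments, then roll back to the previous one
rpm-ostree status
rpm-ostree rollback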
The mount point management is a bit different: everything is read-only except `/etc/`, `/root` and `/var`, and your home directory is by default in `/var/home`, which sometimes breaks expectations. There are no rollbacks possible for `/etc`.
As installing a new package is slow due to rpm-OSTree and requires a reboot to be fully usable (the live merge stores the extra changes in memory), they recommend using Flatpak for programs, or `toolbox`, a kind of wrapper that creates a rootless Fedora container where you can install packages and use them in your terminal. toolbox is meant to provide development libraries or tools you wouldn't have in Flatpak, but that you wouldn't want to install in your base Fedora system.
=> https://docs.fedoraproject.org/en-US/fedora-silverblue/toolbox/ toolbox website
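Using toolbox looks like this (a quick sketch):
# create a Fedora container matching your release, then enter it
toolbox create
toolbox enter
# inside the container, dnf works as usual
sudo dnf install gcc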
My experience with Fedora Silverblue has been quite good: it's stable, and the updates are smooth even if they are slow. `toolbox` worked fine, even though I don't find it practical.
## OpenSUSE MicroOS
This spin of OpenSUSE Tumbleweed (rolling-release OpenSUSE) features immutability, but with its own implementation. The idea of MicroOS is really simple: the whole system, except a few directories like `/home` or `/var`, lives on a btrfs snapshot; if you want to make a change to the system, the current snapshot is forked into a new snapshot, and the changes are applied there, ready for the next boot.
=> https://microos.opensuse.org/ OpenSUSE MicroOS official project website
What's interesting here is that `/etc` IS part of the snapshots and can be rolled back, which wasn't possible in the OSTree-based systems. It's also possible to make changes to any file of the file system (in a new snapshot, not the live one) using a shell, which can be very practical for injecting files to solve a driver issue. The downside is that your system isn't guaranteed to stay "pure" if you start making changes, because they won't be tracked: the snapshots are just numbered, and you don't know what changes were made in each of them.
Changes must be done through the command `transactional-update`, which does all the snapshot work for you; you can either add/remove packages, or just start a shell in the new snapshot to make all the changes you want. I said `/etc` is part of the snapshots, and it's true, but it's never read-only, so you could make a change live in `/etc`, then create a new snapshot, and the change would be immediately inherited. This can create trouble if you roll back to a previous state after an upgrade while you also made changes to `/etc` just before.
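For example (subcommands from transactional-update itself):
# install a package in a new snapshot, ready for the next boot
transactional-update pkg install htop
# open a shell in a new snapshot to make arbitrary changes
transactional-update shell
# roll back to the previous snapshot
transactional-update rollback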
The default approach of MicroOS is disturbing at first: a reboot is planned every day after a system update. This is because it's a rolling-release system with updates every day, and you won't benefit from them until you reboot. While you can disable this automatic reboot, it makes sense to use the newest packages anyway, so it's something to consider if you plan to use MicroOS.
There is currently no way to apply the changes to the live system (like Silverblue offers); it's still experimental, but I'm confident this will be doable soon. As such, it's recommended to use `distrobox`, rootless containers of various distributions, to install your favorite tools for your users, instead of using the base system packages. I don't really like this because it adds maintenance, and I often had issues with distrobox refusing to start a container after a reboot, which I had to destroy and recreate entirely to solve.
=> https://github.com/89luca89/distrobox distrobox GitHub project page
My experience with OpenSUSE MicroOS has been wonderful: it's in dual-boot with OpenBSD on my main laptop, it's my Linux gaming OS, and it's also my NAS operating system, so I don't have to care about updates. I like that the snapshot system doesn't restrict me, while OSTree-based systems just don't allow you to make changes without installing a package.
## Vanilla OS
Finally, the really new (but mature enough to be usable) system in the immutable family is Vanilla OS, based on Ubuntu (but soon on Debian), using ABroot for immutability. With Vanilla OS, we have another implementation that really differs from what we saw above.
=> https://vanillaos.org/ Vanilla OS project website
ABroot's name is well chosen: the idea is to have a root partition A, another root partition B, and a partition for persistent data like `/home` or `/var`.
Here is the boot dance done by ABroot:
- first boot is done on A, it's mounted in read-only
- changes to the system like new packages or file changes in `/etc` are done on B (and can be applied live using a tmpfs overlay)
- upon reboot, if the previous boot was on A, you boot on B; then, if the boot is successful, ABroot scans for all the changes between A and B, and applies the changes from B to A
- when you are using your system, until you make a change, A and B are always identical
This implementation has downsides: you can only roll back a change until you boot on the new version; after that, the changes are also applied to the other root, and you can't roll back anymore. It mostly protects you from a failing upgrade, or from changes you tried live but prefer to roll back.
Vanilla OS features the package manager apx, written by the distrobox author. That's for sure an interesting piece of software, allowing your non-root user to install packages from many distributions (Arch Linux, Fedora, Ubuntu, Nix, etc.) and integrate them into the system as if they were installed locally. I suppose it's some kind of layer on top of distrobox.
=> https://github.com/Vanilla-OS/apx apx package manager GitHub project page
My experience wasn't very good: I didn't find ABroot really useful, and the version 22.10 I tried was using an old Ubuntu LTS release, which didn't make my gaming computer really happy. The overall state of Vanilla OS, ABroot and apx is that they are young; I think it can become a great distribution, but it still has some rough edges.
## Alpine Linux (with LBU)
I've been told that it was possible to achieve immutability on Alpine Linux using the "lbu" command.
=> https://wiki.alpinelinux.org/wiki/Alpine_local_backup Alpine Linux wiki: Local backup
I don't want to go much into details, but here is the short version: you can use Alpine Linux installer as a base system to boot from, and create tarballs of "saved configurations" that are automatically applied upon boot (it's just tarred directories and some automation to install packages). At every boot, everything is untarred again, and packages are installed again (you should use an apk cache directory), everything in live memory, fully writable.
What does this achieve? You always start from a clean state, changes are applied on top of it at every boot, and you can roll back the changes and start fresh again. Immutability as we defined it above isn't achieved, because changes are applied to the base system, but it's quite close to fulfilling (my own) requirements.
I've been using it for a few days only, not as my main system, and it requires a very good understanding of what you are doing, because the system is fully in memory, and you need to take care of what you want to save/restore, which can create big archives.
On top of that, it's poorly documented.
# Pros, Cons and Facts
Now that I've given some details about all the major immutable (Linux-based) systems around, I think it's time to list the real pros and cons I found through my experimentation.
## Pros
- you can roll back changes if something went wrong.
- transactional updates allow you to keep the system running correctly during package changes.
## Cons
- configuration management tools (Ansible, Salt, Puppet, etc.) integrate VERY badly: they received updates to know how to apply package changes, but you will mostly hit walls if you want to manage these like regular systems.
- having to reboot after a change is annoying (except for NixOS and Guix which don't require rebooting for each change).
- OSTree-based systems aren't flexible: my netbook requires some extra files in the alsa directories to get sound (fortunately Endless OS has them!), and you just can't add the files without making a package deploying them.
- blind rollbacks: it's hard to figure out what was done in each version of the system, so when you roll back, you don't know exactly what you are reverting.
- it can be hard to install programs like Nix/Guix which require a directory at the root of the file system, or install non-packaged software system-wide (this is often bad practice, but sometimes a necessary evil).
## Facts
- immutability is a lie, many parts of the systems are mutable, although I don't know how to describe this family with a different word (transactional something?).
- immutable doesn't imply stateless.
- NixOS / Guix are doing it right in my opinion, you can track your whole system through a reliable package manager, and you can use a version control system on the sources, it has the right philosophy from the ground up.
- immutability is often associated with security benefits; I don't understand why. If someone obtains root access on your system, they can still manipulate the live system and have fun with the `/boot` partition; nothing prevents them from installing a backdoor for the next boot.
- immutability requires discipline and maintenance, because you have to care about the versioning, and you have extra programs like apx / distrobox / devbox that must be updated in parallel with the system (while this is all integrated into NixOS/Guix).
# Conclusion
Immutable operating systems are making the news in our small community of open source systems, but behind this word lies many implementations with different use cases. The word immutable certainly creates expectations from users, but it's really nothing more than transactional updates for your operating system, and I'm happy we can have this feature now.
But transactional updates aren't new. I think it started with Solaris and ZFS allowing you to select a system snapshot at boot time, then I'm quite sure FreeBSD implemented this a decade ago, and it turns out that on any Linux distribution with regular btrfs snapshots, you can select a snapshot at boot time.
=> https://dataswamp.org/~solene/2023-01-04-boot-on-btrfs-snapshot.html Previous blog post about booting on a BTRFS snapshot without any special setup
In the end, what's REALLY new is the ability to apply a transactional change to a non-live environment, integrate this into the bootloader, and give the user the tooling to handle it easily.
# Going further
I recommend reading the blog post "“Immutable” → reprovisionable, anti-hysteresis" by Colin Walters.
=> https://blog.verbum.org/2020/08/22/immutable-%E2%86%92-reprovisionable-anti-hysteresis/ “Immutable” → reprovisionable, anti-hysteresis
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/intro-to-immutable-os.gmi</guid>
<link>gemini://perso.pw/blog//articles/intro-to-immutable-os.gmi</link>
<pubDate>Fri, 14 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Easily use your remote scanner on Linux (Qubes OS guide)</title>
<description>
<![CDATA[
<pre># Introduction
Hi, this is a quick guide explaining how to use a network scanner on Qubes OS (or Linux/BSD in general).
I'll be using a network printer / scanner Brother MFC-1910W in the example.
# Setup
## Specific Qubes OS
For Qubes OS, the simplest way to proceed is to use the qube sys-net (which is UNTRUSTED) for the scanner operations. Scanning there isn't less secure than in a dedicated qube, as the network traffic toward the scanner isn't encrypted anyway, and this also eases the network setup a lot.
All the instructions below will be done in sys-net, with the root user.
Note that sys-net should be either an AppVM with a persistent /home or a fully disposable system, so you may have to redo all the commands every time you need your scanner. If you need it really often (I use mine once in a while), you may want to automate this in the template used by sys-net.
## Instructions
We need to install the program `sane-airscan`, used to discover network scanners, and also all the backends/drivers for devices. On Fedora, this can be done using the following command; the package list may differ on other systems.
dnf install sane-airscan sane-backends sane-backends-drivers-cameras sane-backends-drivers-scanners
Make sure the service `avahi-daemon` is installed and running; the default Qubes OS templates have it, but not running. It is required for network device discovery.
systemctl start avahi-daemon
An extra step is required: avahi needs the port UDP/5353 open on the system to receive discovery replies. If you don't open it, you won't find your network scanner (this is also required for printers).
You need to figure out the name of your network interface: open a console and type `ip -4 -br a | grep UP`; the first column is the interface name, and the lines starting with vif can be discarded. Run the following command, making sure to replace INTERFACE_NAME with the real name you just found.
iptables -I INPUT 1 -i INTERFACE_NAME -p udp --dport 5353 -j ACCEPT
Now, we should be able to discover the scanner, the following command should output a line with a device name and network address:
airscan-discover
For me, the output looks like this:
[devices]
Brother MFC-1910W series = http://10.42.42.133:80/WebServices/ScannerService, WSD
If you have a similar output, this means it's working; you can then use the airscan-discover output to configure the detected scanner:
airscan-discover | tee /etc/sane.d/home.conf
Now, your scanner should be usable!
# Using the scanner
You can run the command `scanimage` as a regular user to use your remote scanner. By default, it selects the first available device, so if you have a single scanner, you don't need to specify its long and complicated name/address.
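If you have several devices, you can list them and pick one explicitly (standard scanimage options):
$ scanimage -L                # list the detected scanners
$ scanimage -d 'DEVICE_NAME'  # select a specific device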
You can scan and save as a PDF file using this command:
$ scanimage --format pdf > my_document.pdf
On Qubes OS, you can open a file manager in sys-net and right-click on the file to move it to the qube where you want to keep the document.
# Disabling avahi
If you are done with your scanner, you can remove the firewall rule allowing device discovery.
iptables -D INPUT -i INTERFACE_NAME -p udp --dport 5353 -j ACCEPT
# Conclusion
Using a network scanner is quite easy when it's supported by SANE, but you need direct access to the network because of the avahi discovery requirement, which is not practical when you have a firewall or use virtual machines in separate subnets.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/qubes-os-network-scanner.gmi</guid>
<link>gemini://perso.pw/blog//articles/qubes-os-network-scanner.gmi</link>
<pubDate>Thu, 13 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Old Computer Challenge v3: day 1</title>
<description>
<![CDATA[
<pre># Day 1
Hi! Today, I started the 3rd edition of the Old Computer Challenge. And it's not going well: I didn't prepare a computer beforehand, because I wanted to see how easy it would be.
=> https://dataswamp.org/~solene/2023-06-04-old-computer-challenge-v3.html Old Computer Challenge v3
- main computer (Ryzen 5 5600X with 32 GB of memory) running Qubes OS: well, Qubes OS may be the worst OS for this challenge because it needs so much memory, as everything is done in virtual machines; just handling USB devices requires 400 MB of memory
- main laptop (a T470) running OpenBSD 7.3: for some reason, the memory limitation isn't working, maybe due to the hardware or the 7.3 kernel
- main laptop running OpenSUSE MicroOS (in dual boot): reducing the memory to 512 MB prevents the system from unlocking the LUKS drive!
The thing is that I have some other laptops around, but I'd have to prepare them with full disk encryption and file synchronization to have my passwords, GPG and SSH keys around.
In the first hour of this challenge, I realized my current workflows don't allow me to use a computer with 512 MB of memory, which is quite sad. A solution would be to use the iBook G4 laptop that I've been using since the beginning of the challenges, or my T400 running OpenBSD -current, but they have really old hardware, and the challenge allows some fancier systems.
I'd really like to try Alpine Linux for this challenge, let's wrap something around this idea.
# Extra / Tips
If you joined the challenge, here is a previous guide to limit the memory of your system:
=> https://occ.deadnet.se/how/ occ.deadnet.se: Tips & Tricks
For this challenge, you also need to use a single core at the lowest frequency.
On OpenBSD, limiting the CPU frequency is easy:
- stop obsdfreqd if you use it: rcctl stop obsdfreqd && rcctl disable obsdfreqd
- rcctl enable apmd
- rcctl set apmd flags -L
- rcctl restart apmd
Still on OpenBSD, limiting your system to a single core can be done by booting on the bsd.sp kernel, which doesn't support multiprocessing.
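You can verify the result with sysctl (standard OpenBSD sysctls):
sysctl hw.cpuspeed    # current CPU frequency in MHz
sysctl hw.ncpuonline  # number of cores currently online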
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part1.gmi</guid>
<link>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part1.gmi</link>
<pubDate>Mon, 10 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>How to install Kanboard on OpenBSD</title>
<description>
<![CDATA[
<pre># Introduction
Let me share an installation guide on OpenBSD for a product I like: Kanboard. It's a Kanban board written in PHP; it's easy to use, light, effective, the kind of software I like.
While there is a Docker image for easy deployment on Linux, there is no guide to install it on OpenBSD. I did it successfully, using httpd as the web server.
=> https://kanboard.org/ Kanboard official project website
# Setup
We will need a fairly simple stack:
- httpd for the web server (I won't explain how to do TLS here)
- php 8.2
- database backed by sqlite, if you need postgresql or mysql, adapt
## Kanboard files
Prepare a directory where kanboard will be extracted, it must be owned by root:
install -d -o root -g wheel -m 755 /var/www/htdocs/kanboard
Download the latest version of kanboard, prefer the .tar.gz file because it won't require an extra program.
=> https://github.com/kanboard/kanboard/releases Kanboard GitHub releases
Extract the archive, and move the extracted content into `/var/www/htdocs/kanboard`; the file `/var/www/htdocs/kanboard/cli` should exist if you did it correctly.
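A sketch, assuming a hypothetical archive named kanboard-1.2.30.tar.gz that extracts to a kanboard-1.2.30 directory:
tar xzf kanboard-1.2.30.tar.gz
mv kanboard-1.2.30/* /var/www/htdocs/kanboard/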
Now, you need to fix the permissions for a single directory inside the project to allow the web server to write persistent data.
install -d -o www -g www -m 755 /var/www/htdocs/kanboard/data
## PHP configuration
For Kanboard, we will need PHP and a few extensions. They can be installed and enabled using the following commands (in the future, 8.2 will be obsolete; adapt to the current PHP version):
pkg_add php-gd--%8.2 php-curl--%8.2 php-zip--%8.2 php-pdo_sqlite--%8.2
for mod in pdo_sqlite opcache gd zip curl
do
ln -s /etc/php-8.2.sample/${mod}.ini /etc/php-8.2/
done
rcctl enable php82_fpm
rcctl start php82_fpm
Now you have the service php82_fpm (chrooted in /var/www/) ready to be used by httpd.
## HTTPD configuration
Configure the httpd web server (you can use nginx or apache if you prefer) with the following piece of configuration:
server "kanboard.my.domain" {
listen on * port 80
location "*.php" {
fastcgi socket "/run/php-fpm.sock"
}
# don't rewrite for assets (fonts, images)
location "/assets/*" {
root "/htdocs/kanboard/"
pass
}
location match "/(.*)" {
request rewrite "/index.php%1"
}
location "/*" {
root "/htdocs/kanboard"
}
}
Now, enable httpd if not already done, and (re)start httpd:
rcctl enable httpd
rcctl restart httpd
From now on, Kanboard should be reachable and usable. The default credentials are admin/admin.
## Sending emails
If you want to send emails, you have three choices:
- use PHP mail(), which just uses the local relay
- use sendmail command, which will also use the local relay
- configure an smtp server with authentication, can be a remote server
### Local email
If you want to use one of the first two methods, you will have to add a few files to the chroot, like `/bin/sh`; you can find accurate and up-to-date information about the specific changes in the file `/usr/local/share/doc/pkg-readmes/php-8.2`.
### Using a remote smtp server
If you want to use a remote server with authentication (I made a dedicated account for kanboard on my mail server):
Copy `/var/www/htdocs/kanboard/config.default.php` to `/var/www/htdocs/kanboard/config.php`, and change the variables below accordingly:
define('MAIL_TRANSPORT', 'smtp');
define('MAIL_SMTP_HOSTNAME', 'my-server.local');
define('MAIL_SMTP_PORT', 587);
define('MAIL_SMTP_USERNAME', 'YOUR_SMTP_USER');
define('MAIL_SMTP_PASSWORD', 'XXXXXXXXXXXXXXXXXXXx');
define('MAIL_SMTP_HELO_NAME', null);
define('MAIL_SMTP_ENCRYPTION', "tls");
Your kanboard should be able to send emails now. You can check by creating a new task, and click on "Send by email".
NOTE: Your user also NEEDS to enable email notifications.
## Cronjob configuration
For some tasks like reminder emails or stats computation, Kanboard requires a daily job running its CLI version.
You can do it as the www user in root's crontab:
0 1 * * * -ns su -m www -c 'cd /var/www/htdocs/kanboard && /usr/local/bin/php-8.2 cli cronjob'
# Conclusion
Kanboard is a fine piece of software; I really like the kanban workflow to organize. I hope you'll enjoy it as well.
I'd also add that installing software without Docker is still a thing. It requires you to know exactly what you need to make it run, and how to configure it, but I'd consider this a security bonus point. Note that it will also have all its dependencies updated along with your system upgrades over time.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/kanboard-on-openbsd.gmi</guid>
<link>gemini://perso.pw/blog//articles/kanboard-on-openbsd.gmi</link>
<pubDate>Tue, 11 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Using anacron to run periodic tasks</title>
<description>
<![CDATA[
<pre># Introduction
When you need to regularly run a program on a workstation that isn't powered 24/7, or not even every day, you can't rely on a cron job for that task.
Fortunately, there is a good old tool for this job (first released in June 2000): it's called anacron, and it tracks the last time each configured task was run.
I'll use OpenBSD as an example for the setup, but it's easily adaptable to any other Unix-like system.
=> https://anacron.sourceforge.net Anacron official website
# Installation
The first step is to install the package `anacron`; this provides the program `/usr/local/sbin/anacron` that we will use later. You can also read OpenBSD-specific setup instructions in `/usr/local/share/doc/pkg-readmes/anacron`.
Configure root's crontab to run anacron at system boot; we will use the flag `-d` to not run anacron as a daemon, and `-s` to run the tasks in sequence instead of in parallel.
The crontab entry would look like this:
@reboot /usr/local/sbin/anacron -ds
If your computer occasionally stays on for a few days, anacron won't run again after boot, so it also makes sense to run it daily, just in case:
# at each boot
@reboot /usr/local/sbin/anacron -ds
# at 01h00 if the system is up
0 1 * * * /usr/local/sbin/anacron -ds
# Anacron file format
Now, you will configure the tasks you want to run, and at which frequency. This is configured in the file `/etc/anacrontab` using a specific format, different from crontab.
There is a man page named `anacrontab` for official reference.
The format consists of the following ordered fields:
- the frequency in days at which the task should be started
- the delay in minutes after which the task should be started
- a readable name (used as an internal identifier)
- the command to run
I said it before, but it's really important to understand: the purpose of anacron is to run daily/weekly/monthly scripts on a system that isn't always on, where cron wouldn't be reliable.
Usually, anacron is started at system boot and runs each task from its anacrontab file; this is why the delay field is useful, as you may not want your backup to start immediately upon reboot, while the system is still waiting for a working network connection.
Some variables can be used like in crontab, the most important are `PATH` and `MAILTO`.
Anacron keeps the last run date of each task in the directory `/var/spool/anacron/` using the identifier field as a filename, it will contain the last run date in the format YYYYMMDD.
# Example for OpenBSD periodic maintenance
I really like the example provided in the OpenBSD package. By default, OpenBSD has some periodic tasks to run every day, week and month at night; we can use anacron to run those maintenance scripts on our workstations.
Edit `/etc/anacrontab` with the following content:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
MAILTO=""
1 5 daily_maintenance /bin/sh /etc/daily
7 5 weekly_maintenance /bin/sh /etc/weekly
30 5 monthly_maintenance /bin/sh /etc/monthly
You can manually run anacron if you want to check it's working instead of waiting for a reboot, just type `doas anacron -ds`.
What does the example mean?
- every day, after 5 minutes (after anacron invocation), run `/bin/sh /etc/daily`
- every 7 days, after 5 minutes, run `/bin/sh /etc/weekly`
- every 30 days, after 5 minutes, run `/bin/sh /etc/monthly`
# Useful examples
Here is a list of tasks I think are useful to run regularly on a workstation, where a cron job wouldn't be reliable; a sample anacrontab follows the list.
- Backups: you may want to have a backup every day, or every few days
- OpenBSD snapshot upgrade: use `sysupgrade -ns` every n days to download the sets, they will be installed at the next boot
- OpenBSD packages update: use `pkg_add -u` every day
- OpenBSD system update: use `syspatch` every day
- Repositories update: keep your cloned git / fossil / cvs / svn repository up to date without doing it aggressively
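A sketch of an anacrontab covering some of these (the frequencies, delays and the backup script path are examples):
1  10 daily_backup   /usr/local/bin/backup.sh
3  15 sets_download  sysupgrade -ns
1  20 pkg_update     pkg_add -u
1  25 sys_update     syspatch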
# Conclusion
Anacron is a simple and effective way to keep your periodic tasks done even if you don't use your computer very often.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/anacron.gmi</guid>
<link>gemini://perso.pw/blog//articles/anacron.gmi</link>
<pubDate>Fri, 30 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Ban scanner IPs from OpenSMTPD logs</title>
<description>
<![CDATA[
<pre># Introduction
If you are running an OpenSMTPD email server on OpenBSD, you may want to ban IPs used by bots trying to bruteforce logins. OpenBSD doesn't have fail2ban available in packages, and sshguard isn't extensible enough to support the multiline log format used by OpenSMTPD.
Here is a short script that looks for authentication failures in `/var/log/maillog` and adds the IPs to the PF table `bot` after too many failed logins.
# Setup
## PF
Add this rule to your PF configuration:
block in quick on egress from <bot> to any
This will block any connection from banned IPs, on all ports, not only smtp. I see no reason to let them try other doors.
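Then reload the ruleset (assuming the default `/etc/pf.conf`):
pfctl -f /etc/pf.conf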
## Script
Write the following content in an executable file, this could be `/usr/local/bin/ban_smtpd` but this doesn't really matter.
#!/bin/sh
TRIES=10
EXPIRE_DAYS=5
awk -v tries="$TRIES" '
/ smtp connected / {
ips[$6]=substr($9, 9)
}
/ smtp authentication / && /result=permfail/ {
seen[ips[$6]]++
}
END {
for(ip in seen) {
if(seen[ip] > tries) {
print ip
}
}
}' /var/log/maillog | xargs pfctl -T add -t bot
# if the file exists, remove the IPs listed there
if [ -f /etc/mail/ignore.txt ]
then
cat /etc/mail/ignore.txt | xargs pfctl -T delete -t bot
fi
# remove IPs from the table after $EXPIRE_DAYS days
pfctl -t bot -T expire "$(( 60 * 60 * 24 * $EXPIRE_DAYS ))"
This parses the maillog file, which by default rotates every day; you could adapt the script to your log rotation policy. Users failing with permfail are banned after a number of tries, configurable with `$TRIES`.
I added support for an ignore list, to avoid blocking yourself out; just add IP addresses to `/etc/mail/ignore.txt`.
Finally, banned IPs are unbanned after 5 days; you can change this using the variable `EXPIRE_DAYS`.
## Cronjob
Now, edit root's crontab: you want to run this script at least every hour, and get a log if it fails.
~ * * * * -sn /usr/local/bin/ban_smtpd
This cron job will run every hour at a random minute (defined each time crond restarts, so it stays consistent for a while). The right periodicity depends on the number of scans your email server receives, and also on the log size versus the CPU power.
# Conclusion
It would be better to have an integrated banning system supporting multiple log files / daemons, such as fail2ban, but in the current state it's not possible. This script is simple, fast, extensible, and does the job.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/opensmtpd-block-attempts.gmi</guid>
<link>gemini://perso.pw/blog//articles/opensmtpd-block-attempts.gmi</link>
<pubDate>Sun, 25 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Why one would use Qubes OS?</title>
<description>
<![CDATA[
<pre># Intro
Hello, I've been talking a lot about Qubes OS lately, but I never explained why I got hooked by it. It's time to tell why I like it.
=> https://www.qubes-os.org/ Qubes OS official project website
=> static/solene-qubesos.png Puffy asks Solene to babysit the girl. Solene presents her latest creation. (artwork by Prahou)
=> https://merveilles.town/@prahou Artwork by Prahou
# Presentation
Qubes OS is like a meta system emphasizing security and privacy. You start on an almost empty XFCE interface on a system called dom0 (the Xen hypervisor) with no network access: this is your desktop, from which you will start virtual machines that integrate into the dom0 display, in order to do what you need to do with your computer.
Virtual machines in Qubes OS are called qubes; most of the time, you want them to use a template (Debian or Fedora for the official ones). If you install a program in the template, it will be available in every qube using that template. When a qube is set to only have a persistent /home directory, it's called an AppVM. In that case, any change done outside /home is discarded upon reboot.
By default, the network devices are assigned to a special qube named sys-net, which is special in that it gets the physical network devices attached. sys-net's purpose is to be disposable and to provide outside network access to the VM named sys-firewall, which does some filtering.
All your qubes using the Internet will have to use sys-firewall as their network provider. A practical use case, if you want to use a VPN but not globally, is to create a sys-vpn qube (pick the name you want), connect it to the Internet through sys-firewall, and use sys-vpn as the network source for the qubes that should go through your VPN; it's really effective.
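Chaining qubes this way is a one-liner per qube in dom0 (a sketch; the qube names are examples):
qvm-prefs sys-vpn netvm sys-firewall
qvm-prefs work netvm sys-vpn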
If you need to use a USB device like a microphone or a webcam in a qube, you have a systray app to handle USB pass-through, from the special qube sys-usb managing the physical USB controllers, to attach the USB device to a qube. This allows you to plug anything USB into the computer, and if you need to analyze it, you can start a disposable VM and check what's in there.
=> https://www.qubes-os.org/attachment/site/qubes-trust-level-architecture.png Qubes OS trust level architecture diagram
## Pros
- Efficient VM management due to the use of templates.
- Efficient resource usage due to Xen (memory ballooning, para-virtualization).
- Built for being secure.
- Disposable VMs.
- Builtin integration with Tor (using whonix).
- Secure copy/paste between VMs.
- Security (network is handled by a VM which gets the physical devices attached, hypervisor is not connected).
- Practical approach: if you need to run a program you can't trust because you have to (this happens sometimes), you can do that in a disposable VM and not worry.
- Easy update management + rollback ability in VMs.
- Easy USB pass-through to VMs.
- Easy file transfer between VMs.
- Incredible VM windows integration into the host.
- Qubes-rpc to set up things like split-SSH, where the SSH key is stored in an offline VM, with user approval for each use.
- Modular networking: I can nest a VPN in a VPN and assign it to some VMs but not all.
- Easily extensible as all templates and VMs are managed by Salt Stack.
## Cons
- No GPU acceleration for rendering (no 3D programs, high CPU usage for video/conferencing).
- Limited hardware support due to Xen.
- Requires a powerful system (high CPU requirement + the more RAM the better).
- Qubes OS may end up being a choice by default, because there is no competitor (yet).
- The project seems a bit understaffed.
- Hard learning curve.
- Limited template offer: Fedora, Debian and Whonix are official. The community provides extra templates based on Gentoo, Kali or CentOS 8.
- It's meant to be a workstation for a single user only.
# My use case
I tried Qubes OS in early 2022; it felt very complicated and inefficient, so I abandoned it after only a few hours. This year, I wanted to try again for a longer time, reading the documentation and trying to understand everything.
The more I used it, the more I got hooked by the idea and by how clean it was. I basically don't want to use a different workflow anymore; that's why I'm currently implementing OpenKuBSD, to have a similar experience on OpenBSD (even if I don't plan to have as many features as Qubes OS).
My workflow is the following. It's not necessarily the best one, but it fits my mindset and the way I want to separate things:
- a Qube for web browsing with privacy plugins and Arkenfox user.js, this is what I use to browse websites in general
- a Qube for communication: emails, XMPP and Matrix
- a Qube for development which contains my projects source code
- a Qube for each work client which contains their projects source code
- an OpenBSD VM to do ports work (it's not as integrated as the other though)
- a Qube without network for the KeePassXC databases (personal and per-client), SSH and GPG keys
- a Qube using a VPN for some specific network tasks: it can stay connected 24/7 without routing all my programs through the VPN (and without writing complicated IP rules to use this route only in some cases)
- disposable VMs at hand to try things
I've configured my system to use split-SSH and split-GPG, so some qubes can request the use of my SSH key through the dom0 GUI, and I have to manually accept that one-time authorization on each use. It may seem annoying, but at least it gives me a visual indicator that the key is requested and from which VM, and it's not automatically approved (I only have to press Enter, though).
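For the curious, the qrexec policy for split-SSH could look like this sketch in dom0, assuming the community split-SSH service name qubes.SshAgent, a dev Qube as the client and a vault Qube holding the key (file path and VM names are examples):
# /etc/qubes/policy.d/50-split-ssh.policy
# ask for confirmation each time dev wants to use vault's SSH agent
qubes.SshAgent  *  dev  vault  ask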
I'm not afraid of mixing up client work with my personal projects, since they live in different VMs. If I need to experiment, I can create a new Qube or use a disposable one; this won't affect my working systems. I always feel dirty and unsafe when I have to run a package manager like npm to build a program on a regular workstation...
Sometimes I want to try a new program, but I have no idea if it's safe to install manually or with "curl | sudo bash". In a disposable, I just don't care: everything is destroyed when I close its terminal, and it doesn't contain any information.
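Starting such a throwaway environment from dom0 is a one-liner (the disposable template name is an example):
# spawn a terminal in a fresh disposable VM, destroyed when closed
qvm-run --dispvm=fedora-38-dvm xterm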
What I really like is that when I say I'm using Qubes OS, I'm actually using Fedora, OpenBSD and NixOS in VMs, not "just" Qubes OS.
However, Qubes OS is super bad for multimedia in general. I have a dual boot with a regular Linux if I want to watch videos or use 3D programs (like Stellarium or Blender).
=> https://www.qubes-os.org/news/2022/10/28/how-to-organize-your-qubes/ Qubes OS blog: how to organize your qubes: different users share their workflows
# Why would you use Qubes OS?
This is a question that pops up quite often on the project forum. It's hard to answer because Qubes OS has a steep learning curve, it's picky with regard to hardware compatibility and requirements, and the weight of the pros and cons differs greatly depending on your usage.
When you want important data to be kept almost physically separated from running programs, it's useful.
When you need to run programs you don't trust, it's useful.
When you prefer to separate contexts to avoid mixing up files or clipboard content, like accidentally sharing personal data in your workplace Slack, this can be useful.
When you want to use your computer without having to think about security and privacy, it's really not for you.
When you want to play video games, use 3D programs, or benefit from GPU hardware acceleration (for machine learning, video encoding/decoding), this won't work. With a second GPU, you could attach it to a VM, but it requires some time and dedication to get it working fine.
# Security
Qubes OS's security model relies on virtualization software (currently Xen), which is known to regularly have security issues. It can be debated whether virtualization is secure or not.
=> https://www.qubes-os.org/security/xsa/ Qubes OS security advisory tracker
# Conclusion
I think Qubes OS is a unique offering with its compartmentalization paradigm. However, the mindset and discipline required to use it efficiently make me warn that it's not for everyone, but rather for a niche user base.
The security achieved here is higher than in other systems if used correctly, but it really hinders the system's usability for many common tasks. What I like most is that Qubes OS gives you the tools to easily solve practical problems, like having to run proprietary and untrusted software.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/qubes-os-why.gmi</guid>
<link>gemini://perso.pw/blog//articles/qubes-os-why.gmi</link>
<pubDate>Tue, 20 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Using git bundle to synchronize a repository between Qubes OS dom0 and an AppVM</title>
<description>
<![CDATA[
<pre># Introduction
In a previous article, I explained how to use the Fossil version control system to version the files you write in dom0 and sync them against a remote repository.
I figured out how to synchronize a git repository between an AppVM and dom0; from the AppVM, it can then be synchronized remotely if you want. This is done using the git feature named bundle, which bundles git artifacts into a single file.
=> https://qubes-os.org Qubes OS project official website
=> https://git-scm.com/docs/git-bundle Git bundle documentation
=> https://dataswamp.org/~solene/2023-06-04-qubes-os-version-control-dom0.html Using fossil to synchronize data from dom0 with a remote fossil repository
# What you will learn
In this setup, you will create a git repository (this could be a clone of a remote repository) in an AppVM called Dev, and you will clone it from there into dom0.
Then, you will learn how to send and receive changes between the AppVM repo and the one in dom0, using git bundle.
# Setup
The first step is to have git installed in your AppVM and in dom0.
For the sake of simplicity for the guide, the path `/tmp/repo/` refers to the git repository location in both dom0 and the AppVM, don't forget to adapt to your setup.
In the AppVM Dev, create a git repository using `cd /tmp/ && git init repo`. We need a first commit for the setup to work, because we can't bundle commits if there are none. So, commit at least one file in that repo; if you have no idea what, write a short README.md file explaining what this repository is for.
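For example, still in the AppVM Dev (the README content is up to you):
cd /tmp/repo
echo "dom0 files under version control" > README.md
git add README.md
git commit -m "initial commit"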
In dom0, use the following commands:
qvm-run -u user --pass-io Dev "cd /tmp/repo/ && git bundle create - master" > /tmp/git.bundle
cd /tmp/ && git clone -b master /tmp/git.bundle repo
Congratulations, you cloned the repository into dom0 using the bundle file. The path `/tmp/git.bundle` is important because it's automatically set as the URL of the remote named "origin". If you want to manage multiple git repositories this way, use a different exchange file name for each repo.
[solene@dom0 repo]$ git remote -v
origin /tmp/git.bundle (fetch)
origin /tmp/git.bundle (push)
Back in the AppVM Dev, run the following command in the git repository; it configures the bundle file to use for the remote named dom0. Like previously, you can pick the name you prefer.
git remote add dom0 /tmp/dom0.bundle
# Workflow
Now, let's explain the workflow to exchange data between the AppVM and dom0. From here, we will only use dom0.
Create a file `push.sh` in your git repository with the content:
#!/bin/sh
REPO="/tmp/repo/"
BRANCH=master
# one-time setup, already done earlier in the AppVM:
# git remote add dom0 /tmp/dom0.bundle
# send the new commits as a bundle into the AppVM
git bundle create - origin/master..master | \
qvm-run -u user --pass-io Dev "cat > /tmp/dom0.bundle"
# make the AppVM fetch and rebase from the bundle
qvm-run -u user --pass-io Dev "cd ${REPO} && git pull -r dom0 ${BRANCH}"
Create a file `pull.sh` in your git repository with the content:
#!/bin/sh
REPO="/tmp/repo/"
BRANCH=master
# one-time setup, already done earlier in dom0:
# git clone -b ${BRANCH} /tmp/git.bundle
# regenerate the bundle with the AppVM's new commits
qvm-run -u user --pass-io Dev "cd ${REPO} && git bundle create - dom0/master..${BRANCH}" > /tmp/git.bundle
# fetch and rebase from the bundle (the remote named origin)
git pull -r
Make the files `push.sh` and `pull.sh` executable.
If you don't want to have the files committed in your repository, add their names to the file `.gitignore`.
Now, you are able to send changes to the AppVM repo using `./push.sh`, and receive changes using `./pull.sh`.
If needed, those scripts could be made more generic and moved into a directory in your PATH instead of being used from within the git repository, as in the sketch below.
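Here is what a more generic push script could look like (untested sketch, assuming the repository lives at the same path in dom0 and in the AppVM, with the same naming as above):
#!/bin/sh
# generic push: run it from within any synchronized git repository
REPO="$(git rev-parse --show-toplevel)" || exit 1
BRANCH=master
git bundle create - origin/${BRANCH}..${BRANCH} | \
qvm-run -u user --pass-io Dev "cat > /tmp/dom0.bundle"
qvm-run -u user --pass-io Dev "cd ${REPO} && git pull -r dom0 ${BRANCH}"
Remember that with several repositories, the exchange file name should also be made unique per repository.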
## Explanations
Here are some explanations about those two scripts.
### Push.sh
In the script `push.sh`, `git bundle` is used to write a bundle to stdout containing the artifacts from the AppVM's last known commit up to the latest commit in the current repository, hence the origin/master..master range. This data is piped into the file `/tmp/dom0.bundle` in the AppVM, which was configured earlier as a remote of the repository.
Then, the command `git pull -r dom0 master` is used to fetch the changes from the bundle, and rebase the current repository, exactly like you would do with a "real" remote over the network.
### Pull.sh
In the script `pull.sh`, we run `git bundle` from within the AppVM Dev to generate on stdout a bundle going from the last known state of dom0 up to the latest commit in the branch master, and pipe it into the dom0 file `/tmp/git.bundle`; remember that this file is the remote origin in dom0's clone.
After the bundle creation, a regular `git pull -r` is used to fetch the changes, and rebase the repository.
### Using branches
If you use different branches, you may want to add an extra parameter to the scripts to make the variable BRANCH configurable.
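A minimal way to do that in both scripts (sketch; it defaults to master when no argument is given):
# use the first script argument as the branch name, default to master
BRANCH="${1:-master}"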
# Conclusion
I find this setup really elegant: the safe `qvm-run` is used to exchange static data between dom0 and the AppVM, and no network is involved in the process. Now there is no reason left to have dom0 configuration files not properly tracked in a version control system :)
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/qubes-os-git-bundle.gmi</guid>
<link>gemini://perso.pw/blog//articles/qubes-os-git-bundle.gmi</link>
<pubDate>Mon, 19 Jun 2023 00:00:00 GMT</pubDate>
</item>
</channel>
</rss>