💾 Archived View for perso.pw › blog › rss.xml captured on 2023-03-20 at 17:54:31.
-=-=-=-=-=-=-
<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"> <channel> <title>Solene'%</title> <description></description> <link>gemini://perso.pw/blog/</link> <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" /> <item> <title>The State forces Google (or Apple) on me</title> <description> <![CDATA[ <pre># Introduction It's rare, but this is a fed-up rant. I need a training course, and to complete the online procedures on a CPF account (Compte Personnel de Formation), I need an "Identité Numérique +" (a "digital identity plus"). In principle it's cool: it's a way to create an account while validating the person's identity with an ID document; so far, that's normal and rather well thought out. # The problem The big issue is that once the formalities are done, you have to install the Android / iOS application on your phone, and that's where the trouble starts. => https://play.google.com/store/apps/details?id=fr.laposte.idn&hl=fr&pli=1 Google Play: L'Identité Numérique La Poste Having freed my Android phone from Google thanks to LineageOS, I chose not to install Google Play in order to be 100% degoogled, and I install my applications from the F-droid repository, which covers all my needs. => https://f-droid.org/en/ F-droid project website => https://lineageos.org/ LineageOS project website In my situation, there is a way to install the (fortunately very rare) applications required by certain services: use "Aurora Store" on the phone to download an APK (the application installation file) from Google Play and install it. No problem there, I was able to install the La Poste program. The problem is that when I launch it, I get this wonderful message: "Error, you must install the application from Google Play", and at that point I can do absolutely nothing but quit the application.
=> static/identite-numerique.png Error message of the La Poste application on LineageOS without Google services And here I am, stuck: the State forces me to use Google in order to use its own services 🙄, my options are the following:
on:
  schedule:
    - cron: '0 2 * * *' # every day at 02:00
# Credits I would like to thank Jonathan Tremesaygues, who wrote most of the GitHub Actions pieces after I shared my idea with him and how I would implement it. => https://jtremesay.org/ Jonathan Tremesaygues's website # Going further Here is a simple script I'm using to turn a local Linux machine into a Gentoo builder for the box you run it from. It uses a Gentoo stage3 Docker image, populated with packages from the local system and its `/etc/portage/` directory. Note that you have to use `app-misc/resolve-march-native` to generate the compiler command line parameters that replace `-march=native`, because you want the remote host to build with the correct flags and not its own `-march=native`; you should also make sure those flags work on the remote system. From my experience, any remote builder newer than your machine should be compatible. => https://tildegit.org/solene/gentoo-remote-builder Tildegit: Example of scripts to build packages on a remote machine for the local machine </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</guid> <link>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</link> <pubDate>Sat, 04 Mar 2023 00:00:00 GMT</pubDate> </item> <item> <title>Lightweight data monitoring using RRDtool</title> <description> <![CDATA[ <pre># Introduction I like my servers to run as little code as possible, and as few services as possible in general; this eases maintenance and leaves room for other things to run. I recently wrote about monitoring software to gather metrics and render them, but it is all overkill if you just want to keep track of a single value over time and graph it for visualization. Fortunately, we have an old and robust tool doing the job fine, perfectly documented, called RRDtool.
=> https://oss.oetiker.ch/rrdtool/ RRDtool official website RRDtool stands for "Round Robin Database Tool"; it's a set of programs and a specific file format for gathering metrics. The trick with RRD files is that they have a fixed size: when you create one, you need to define how many values you want to store in it, at which frequency, and for how long. This can't be changed after the file creation. In addition, RRD files allow you to create derived time series to keep track of computed values over a longer timespan, but at a lower resolution. Think of the following use case: you want to monitor your home temperature every 10 minutes for the past 48 hours, but also keep some information for the past year; you can tell RRD to store the average temperature per hour for a week, the average per four hours for a month, and the average per day for a year. All of this will have a fixed size. # Anatomy of a RRD file RRD files can be dumped as XML, which gives a glimpse that may ease the understanding of this special file format. Let's create a file to monitor the battery level of your computer every 10 seconds, keeping the last 5 values; don't focus on understanding the whole command line now:
rrdtool create test.rrd --step 10 DS:battery:GAUGE:20:0:100 RRA:AVERAGE:0.5:1:5
If we dump the created file using `rrdtool dump test.rrd`, we get this result (stripped a bit to make it fit better):
<!-- Round Robin Database Dump -->
<rrd>
<version>0003</version>
<step>10</step> <!-- Seconds -->
<lastupdate>1676569107</lastupdate> <!-- 2023-02-16 18:38:27 CET -->
<ds>
<name> battery </name>
<type> GAUGE </type>
<minimal_heartbeat>20</minimal_heartbeat>
<min>0.0000000000e+00</min>
<max>1.0000000000e+02</max>
<!-- PDP Status -->
<last_ds>U</last_ds> <value>NaN</value> <unknown_sec> 7 </unknown_sec>
</ds>
<!-- Round Robin Archives -->
<rra>
<cf>AVERAGE</cf>
<pdp_per_row>1</pdp_per_row> <!-- 10 seconds -->
<params> <xff>5.0000000000e-01</xff> </params>
<cdp_prep>
<ds>
<primary_value>0.0000000000e+00</primary_value>
<secondary_value>0.0000000000e+00</secondary_value>
<value>NaN</value>
<unknown_datapoints>0</unknown_datapoints>
</ds>
</cdp_prep>
<database>
<!-- 2023-02-16 18:37:40 CET / 1676569060 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:37:50 CET / 1676569070 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:38:00 CET / 1676569080 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:38:10 CET / 1676569090 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:38:20 CET / 1676569100 --> <row><v>NaN</v></row>
</database>
</rra>
</rrd>
The most important thing to understand here is that we have a "ds" (data series) named battery, of type GAUGE, with no last value (I never updated it), but also an "RRA" (Round Robin Archive) for our average value, which contains a timestamp for each slot but no associated values yet. You can see that internally, our 5 slots already exist, each with a null value. If I update the file, the first null value will disappear, and a new record will be added at the end with the actual value. # Monitoring a value In this guide, I would like to share my experience using rrdtool to monitor my solar panel power output over the last few hours, which can be easily displayed on my local dashboard. The data is also collected and sent to a Grafana server, but it's not local, and querying it just to display the last values wastes resources and bandwidth. First, you need `rrdtool` to be installed; you don't need anything else to work with RRD files. ## Create the RRD file Creating the RRD file is the trickiest part, because you can't change it afterward. I want to collect a value every 5 minutes (300 seconds); it's an absolute value between 0 and 4000, so we will define a step of 300 seconds to tell RRDtool the file must receive a value every 300 seconds. The type of the value will be GAUGE, because it's just a value that doesn't depend on the previous one. If we were monitoring power change over time, we would use DERIVE instead, because it computes the delta between each value. Furthermore, we need to configure the file to give up on a value slot if it's not updated within 600 seconds. Finally, we want to be able to graph each measurement; this can be done by adding an AVERAGE calculated value in the file, but with a resolution of 1 value, with 240 measurements stored.
What this means is that each time we add a value to the RRD file, the AVERAGE field will be calculated with only the last value as input, and we will keep 240 of them, allowing us to graph up to 240 * 5 minutes of data back in time.
rrdtool create solar-power.rrd --step 300 DS:value:GAUGE:600:0:4000 RRA:AVERAGE:0.5:1:240

* DS:value:GAUGE:600:0:4000 declares a data source: a variable named "value", of measurement type GAUGE, considered unknown if not updated within 600 seconds, with a minimum of 0 and a maximum of 4000
* RRA:AVERAGE:0.5:1:240 declares an archive: the function to apply (can be AVERAGE, MAX, MIN, LAST, or mathematical operations), the xfiles factor (what percentage of unknown values we accept when computing a value), how many previous values are used by the function (1 means a single value, so averaging itself), and the number of values to keep (240)
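As a quick sanity check on the sizing: each archive row covers step × pdp_per_row seconds, so this archive keeps 240 × 300 seconds of history. A tiny POSIX shell sketch of that arithmetic:

```shell
# sanity check: how much history does RRA:AVERAGE:0.5:1:240 keep with --step 300?
step=300        # seconds between two primary data points
pdp_per_row=1   # primary data points consolidated into one archive row
rows=240        # number of rows kept in the archive

retention=$((step * pdp_per_row * rows))
echo "${retention} seconds = $((retention / 3600)) hours of history"
```

This prints "72000 seconds = 20 hours of history", which matches the 240 * 5 minutes mentioned above.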
And then, you have your `solar-power.rrd` file created. You can inspect it with `rrdtool info solar-power.rrd` or dump its content with `rrdtool dump solar-power.rrd`. => https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html RRDtool create documentation ## Add values to the RRD file Now that we have prepared the file to receive data, we need to populate it with something useful. This can be done using the command `rrdtool update`.
CURRENT_POWER=$(some-command-returning-a-value)
rrdtool update solar-power.rrd "N:${CURRENT_POWER}"
* N is when the value has been measured; N means NOW (you can also pass a UNIX timestamp)
* after the colon comes the value for the first field of the RRD file (we created a single field)
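In practice, this runs from cron; here is a minimal collector sketch, where `read-solar-power` is a hypothetical command printing the current watts as an integer (replace it with however you obtain the measurement):

```shell
#!/bin/sh
# minimal collector sketch, meant to run from cron every 5 minutes
# read-solar-power is a hypothetical command printing watts as an integer

valid_reading() {
    # accept only non-empty, purely numeric values
    case "$1" in
        ''|*[!0-9]*) return 1 ;;
        *) return 0 ;;
    esac
}

CURRENT_POWER=$(read-solar-power 2>/dev/null)

if valid_reading "$CURRENT_POWER"; then
    rrdtool update solar-power.rrd "N:${CURRENT_POWER}"
else
    echo "invalid reading: '${CURRENT_POWER}'" >&2
fi
```

A matching crontab entry could be `*/5 * * * * /usr/local/bin/solar-collector.sh` (path is an assumption). Validating the reading first matters because a garbage value would be stored in the RRD forever.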
=> https://oss.oetiker.ch/rrdtool/doc/rrdupdate.en.html RRDtool update documentation ## Graph the content of the RRD file The trickiest part, but the least problematic, is generating a usable graph from the data. The operation is not destructive as it doesn't modify the file, so we can experiment a lot without affecting the content. We will generate something simple like the picture below. Of course, you can add a lot more information, colors, axes, legends, etc., but I need my dashboard to stay simple and clean. => ./static/solar-power.svg A diagram displaying solar power over time (on a cloudy day)
rrdtool graph --end now -l 0 --start end-14000s --width 600 --height 300 \
/var/www/htdocs/dashboard/solar.svg -a SVG \
DEF:ds0=/var/lib/rrdtool/solar-power.rrd:value:AVERAGE \
"LINE1:ds0#0000FF:power" \
"GPRINT:ds0:LAST:current value %2.1lf"
I think most flags are explicit; if not, you can look at the documentation. What interests us here are the last three lines. The `DEF` line associates the RRA AVERAGE of the variable `value` in the file `/var/lib/rrdtool/solar-power.rrd` with the name `ds0`, which is used later in the command line. The `LINE1` line associates a legend and a color with the rendering of this variable. The `GPRINT` line adds text to the legend; here we take the last value of `ds0` and format it with the printf-style string `current value %2.1lf`. => https://oss.oetiker.ch/rrdtool/doc/rrdgraph.en.html RRDtool graph documentation => https://oss.oetiker.ch/rrdtool/doc/rrdgraph_examples.en.html RRDtool graph examples # Conclusion RRDtool is very nice; it's a storage engine for monitoring software such as collectd or munin, but we can also use it on the spot with simple scripts. However, it has drawbacks: it doesn't scale well when you start to create many files, and it generates a lot of I/O and consumes CPU if you need to render hundreds of pictures. That's why a daemon named `rrdcached` has been created, which helps mitigate the load by handling updates to many RRD files in a more sequential way. # Going further I encourage you to look at the official project website; all the other commands can be very useful, and rrdtool also exports data as XML or JSON if needed, which is perfect for plugging into other software. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/rrdtool-light-monitoring.gmi</guid> <link>gemini://perso.pw/blog//articles/rrdtool-light-monitoring.gmi</link> <pubDate>Thu, 16 Feb 2023 00:00:00 GMT</pubDate> </item> <item> <title>Introduction to nftables on Linux</title> <description> <![CDATA[ <pre># Introduction The Linux kernel has an integrated firewall named netfilter, but you manipulate it through command line tools such as the good old iptables, or nftables, which will eventually supersede iptables.
Today, I'll share my experience using nftables to manage my Linux home router and my workstation. I won't explain much in this blog post, because I just want to introduce nftables, show what it looks like, and how to get started. I added comments to my configuration files; I hope it's enough to get a grasp of it and make you curious to learn about nftables if you use Linux. # Configurations nftables rulesets are kept in a script run through `nft -f` (typically via the shebang), which allows atomic replacement of the ruleset if it's valid. Depending on your system, you may need to run the script at boot, but for instance on Gentoo, a systemd service is provided to save rules upon shutdown and restore them at boot. ## Router
flush ruleset

table inet filter {
    # defines a list of networks for further reference
    set safe_local {
        type ipv4_addr
        flags interval
        elements = { 10.42.42.0/24 }
    }

    chain input {
        # drop by default
        type filter hook input priority 0; policy drop;
        ct state invalid drop comment "early drop of invalid packets"

        # allow connections to work when initiated from this system
        ct state {established, related} accept comment "accept all connections related to connections made by us"

        # allow loopback
        iif lo accept comment "accept loopback"

        # remove weird packets
        iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
        iif != lo ip6 daddr ::1/128 drop comment "drop connections to loopback not coming from loopback"

        # make ICMP work
        ip protocol icmp accept comment "accept all ICMP types"
        ip6 nexthdr icmpv6 accept comment "accept all ICMP types"

        # only for known local networks
        ip saddr @safe_local tcp dport {22, 53, 80, 2222, 19999, 12344, 12345, 12346} accept
        ip saddr @safe_local udp dport {53} accept

        # allow on WAN
        iif eth0 tcp dport {80} accept
        iif eth0 udp dport {7495} accept
    }

    # allow NAT to get outside
    chain lan_masquerade {
        type nat hook postrouting priority srcnat;
        meta nfproto ipv4 oifname "eth0" masquerade
    }

    # port forwarding
    chain lan_nat {
        type nat hook prerouting priority dstnat;
        iif eth0 tcp dport 80 dnat ip to 10.42.42.102:8080
    }
}
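As mentioned in the introduction, such a ruleset is usually stored as an executable script whose shebang runs `nft -f`, so that applying it replaces the whole ruleset atomically. A minimal skeleton (the path to `nft` may differ on your system):

```nft
#!/sbin/nft -f
# atomically replace the whole ruleset with the rules in this file
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state {established, related} accept
        iif lo accept
    }
}
```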
## Workstation
flush ruleset

table inet filter {
    set safe_local {
        type ipv4_addr
        flags interval
        elements = { 10.42.42.0/24, 10.43.43.1/32 }
    }

    chain input {
        # drop by default
        type filter hook input priority 0; policy drop;
        ct state invalid drop comment "early drop of invalid packets"

        # allow connections to work when initiated from this system
        ct state {established, related} accept comment "accept all connections related to connections made by us"

        # allow loopback
        iif lo accept comment "accept loopback"

        # remove weird packets
        iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
        iif != lo ip6 daddr ::1/128 drop comment "drop connections to loopback not coming from loopback"

        # make ICMP work
        ip protocol icmp accept comment "accept all ICMP types"
        ip6 nexthdr icmpv6 accept comment "accept all ICMP types"

        # only for known local networks
        ip saddr @safe_local tcp dport 22 accept comment "accept SSH"
        ip saddr @safe_local tcp dport {7905, 7906} accept comment "accept musikcube"
        ip saddr @safe_local tcp dport 8080 accept comment "accept nginx"
        ip saddr @safe_local tcp dport 1714-1764 accept comment "accept kdeconnect TCP"
        ip saddr @safe_local udp dport 1714-1764 accept comment "accept kdeconnect UDP"
        ip saddr @safe_local tcp dport 22000 accept comment "accept syncthing"
        ip saddr @safe_local udp dport 22000 accept comment "accept syncthing"
        ip saddr @safe_local tcp dport {139, 775, 445} accept comment "accept samba"
        ip saddr @safe_local tcp dport {111, 775, 2049} accept comment "accept NFS TCP"
        ip saddr @safe_local udp dport 111 accept comment "accept NFS UDP"

        # for my public IP over VPN
        ip daddr 78.224.46.36 udp dport 57500-57600 accept comment "accept mosh"
        ip6 daddr 2a00:5854:2151::1 udp dport 57500-57600 accept comment "accept mosh"
    }

    # drop anything that looks forwarded
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
# Some commands If you need to operate a firewall using nftables, you may use `nft` to add/remove rules on the go instead of using the script with the ruleset. However, let me share a small cheatsheet of useful commands: ## List rules If you need to display the current rules in use:
nft list ruleset
## Flush rules If you want to delete all the rules, just use:
nft flush ruleset
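## Check a ruleset without applying it

Two more operations can be handy. First, you can check a ruleset file for syntax errors without applying it, using the `-c` (check) flag; the file path here is just an example:

```nft
nft -c -f /etc/nftables.conf
```

## Add a single rule on the fly

You can also append one rule to a live chain, assuming the `inet filter` table and `input` chain from the examples above:

```nft
nft add rule inet filter input tcp dport 443 accept
```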
# Going further If you want to learn more about nftables, there is the excellent man page of the `nft` command. I used some resources from Arch Linux and Gentoo that you may also enjoy: => https://wiki.gentoo.org/wiki/Nftables Gentoo Wiki: Nftables => https://wiki.gentoo.org/wiki/Nftables/Examples Gentoo Wiki: Nftables examples => https://wiki.archlinux.org/title/Nftables Arch Linux Wiki: Nftables </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/nftables.gmi</guid> <link>gemini://perso.pw/blog//articles/nftables.gmi</link> <pubDate>Mon, 06 Feb 2023 00:00:00 GMT</pubDate> </item> <item> <title>[Cheatsheet] Fossil version control software</title> <description> <![CDATA[ <pre># Introduction Fossil is a DVCS (decentralized version control software), an alternative to programs such as darcs, Mercurial, or Git. It's developed by the same people who make SQLite, and it relies on SQLite internally. => https://www2.fossil-scm.org/ Fossil official website # Why? Why not? I like diversity in software, and I'm unhappy to see Git dominating the field. Fossil is a viable alternative, with a simplified workflow that works very well for my use case. One feature I really like is autosync: when a remote is configured, fossil automatically pushes changes to the remote, so it behaves like centralized version control software such as SVN, which for my usage is really practical. Of course, you can disable autosync if you don't want this feature. I suppose this could be reproduced in Git using a post-commit hook that runs `git push`. Fossil is opinionated, so you may not like it if it doesn't match your workflow, but when it does, it's a very practical piece of software that won't get in your way. # A Fossil repository is a file Something surprising at first is that a Fossil repository is a single file. In order to check out the content of the repository, you need to run `fossil open /path/to/repo.fossil` in the directory where you want to extract the files.
Fossil supports multiple checkouts of different branches in different directories, like Git worktrees. # Cheatsheet Because I'm used to other versioning software, I need a simple cheatsheet for the most common operations; they are easy to learn, but I prefer to note them down somewhere. ## View extra files You can easily find non-versioned files using the following command: `fossil extras` ## View changes You can get a list of tracked files that changed: `fossil changes` Note that it only displays a list of files, not the diff, which you can obtain using `fossil diff`. ## Commit By default, fossil commits all changes in tracked files; if you want to commit a change in only one file, you must pass it as a parameter. `fossil commit` ## Change author name `fossil user new solene@t470` and `fossil user default solene@t470` => https://www2.fossil-scm.org/home/doc/trunk/www/env-opts.md More possibilities are explained in Fossil documentation ## Add a remote Copy the .fossil file to a remote server (I'm using ssh), and in your fossil checkout, type `fossil remote add my-remote ssh://hostname//home/solene/my-file.fossil`, and then `fossil remote my-remote`. Note that the remote server must have the fossil binary available in `$PATH`. ## Display the web interface `fossil ui` opens your web browser and logs you in as the admin user; you can view the timeline, bug tracker, wiki, forum, etc. Of course, you can enable/disable everything you want. ## Get changes from a remote This is a two-step operation: you must first get the changes from the remote fossil, and then update your local checkout:
fossil pull
fossil update
## Commit partial changes in a file Fossil doesn't allow staging and committing partial changes in a file like with `git add -p`, the official way is to stash your changes, generate a diff of the stash, edit the diff, apply it and commit. It's recommended to use a program named patchouli to select hunks in the diff file to ease the process. => https://www2.fossil-scm.org/home/doc/trunk/www/gitusers.md Fossil documentation: Git to Fossil translation The process looks like this:
fossil stash -m "tidying for making atomic commits"
fossil stash diff > diff
$EDITOR diff
patch -p0 < diff
fossil commit
Note that if you added new files, the "add" information is stashed and contained in the diff.</pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/fossil-scm.gmi</guid> <link>gemini://perso.pw/blog//articles/fossil-scm.gmi</link> <pubDate>Sun, 29 Jan 2023 00:00:00 GMT</pubDate> </item> <item> <title>Configure syncthing to sync a single file</title> <description> <![CDATA[ <pre># Introduction Quick blog entry to remember something that wasn't as trivial as I thought. I needed syncthing to keep a single file in sync (a KeePassXC database) without synchronizing the whole directory. You have to use the ignore patterns feature to make this possible. Put simply, you need the share to ignore every file except the one you want to sync. This configuration happens in the `.stignore` file in the synchronized directory, but it can also be managed from the web interface. => https://docs.syncthing.net/users/ignoring.html Syncthing documentation about ignoring files # Example If I want to only sync KeePassXC files (they have the `.kdbx` extension), I have this in my `.stignore` file:
!*.kdbx
*
And that's all! Note that this must be set on all nodes using this share, otherwise you may have surprises. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/syncthing-single-file.gmi</guid> <link>gemini://perso.pw/blog//articles/syncthing-single-file.gmi</link> <pubDate>Sat, 28 Jan 2023 00:00:00 GMT</pubDate> </item> <item> <title>How to boot on a BTRFS snapshot</title> <description> <![CDATA[ <pre># Introduction I always wanted to have a simple rollback method on Linux systems, NixOS gave me a full featured one, but it wasn't easy to find a solution for other distributions. Fortunately, with BTRFS, it's really simple thanks to snapshots being mountable volumes. # Setup You need a Linux system with a BTRFS filesystem, in my examples, the root subvolume (where `/` is) is named `gentoo`. I use `btrbk` to make snapshots of `/` directly in `/.snapshots`, using the following configuration file:
snapshot_preserve_min 30d

volume /
  snapshot_dir .snapshots
  subvolume .
With a systemd service, it's running once a day, so I'll have 30 days of snapshots to restore my system from if needed. This creates snapshots named like the following:
$ ls /.snapshots/
ROOT.20230102
ROOT.20230103
ROOT.20230104
A snapshot address from BTRFS's point of view looks like `gentoo/.snapshots/ROOT.20230102`. I like btrbk because it's easy to use and configure, and it creates easy-to-remember snapshot names. # Booting on a snapshot When you are in the bootloader (GRUB, systemd-boot, LILO, etc.), edit the command line and add the following option (replacing it if it already exists); the example uses the snapshot `ROOT.20230103`:
rootflags=subvol=gentoo/.snapshots/ROOT.20230103
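For instance, with GRUB you can press `e` on the boot entry and append the option to the `linux` line; a sketch of what the edited line could look like (the kernel path and root device are assumptions for illustration):

```
linux /vmlinuz-5.15.85-gentoo-dist root=/dev/sda2 rootflags=subvol=gentoo/.snapshots/ROOT.20230103 ro
```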
Boot with the new command line, and you should be on your snapshot as the root filesystem. # Be careful When you are on a snapshot, this means any change will be specific to this volume. If you use a separate partition for `/boot`, an older snapshot may not have the kernel (or its modules) you are trying to boot. # Conclusion This is a very simple but effective mechanism, more than enough to recover from a bad upgrade, especially when you need the computer right now. # Going further There is a project named grub-btrfs which can help you add BTRFS snapshots as boot choices in GRUB menus. => https://github.com/Antynea/grub-btrfs grub-btrfs GitHub project page </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/boot-on-btrfs-snapshot.gmi</guid> <link>gemini://perso.pw/blog//articles/boot-on-btrfs-snapshot.gmi</link> <pubDate>Wed, 04 Jan 2023 00:00:00 GMT</pubDate> </item> <item> <title>Booting Gentoo on a BTRFS from multiple LUKS devices</title> <description> <![CDATA[ <pre># Introduction This is mostly a reminder for myself. I installed Gentoo on a machine, but I reused the same BTRFS filesystem where NixOS is already installed; the trick is that the BTRFS filesystem is composed of two partitions (a bit like RAID 0), which come from two different LUKS partitions. It wasn't straightforward to unlock that thing at boot. # Fix GRUB error GRUB was trying to autodetect the root partition to add `root=/dev/something`, but as my root filesystem requires `/dev/mapper/ssd1` and `/dev/mapper/ssd2`, it was simply adding `root=/dev/mapper/ssd1 /dev/mapper/ssd2`, which is wrong. This required a change in the file `/etc/grub.d/10_linux`, where I entirely deleted the `root=` parameter. # Compile systemd with cryptsetup A mistake I made was trying to boot without systemd compiled with cryptsetup support; this failed because, in the initramfs, some systemd services are used to unlock the partitions, and without proper cryptsetup support it didn't work.
# Linux command line parameters In `/etc/default/grub`, I added the line below; it contains the UUIDs of both needed LUKS partitions, `root=/dev/dm-0` which is, unexpectedly, the first unlocked device path, and `rd.luks=1` to enable LUKS support.
GRUB_CMDLINE_LINUX="rd.luks.uuid=24682f88-9115-4a8d-81fb-a03ec61d870b rd.luks.uuid=1815e7a4-532f-4a6d-a5c6-370797ef2450 rootfstype=btrfs root=/dev/dm-0 rd.luks=1"
# Run dracut and GRUB After the changes, I ran `dracut --force --kver 5.15.85-gentoo-dist` and `grub-mkconfig -o /boot/grub/grub.cfg`. # Conclusion It's working fine now. I thought it would require writing a custom initrd script, but dracut provides all I needed; still, there were many quirks along the way, with no really helpful messages to understand what was failing. Now I can enjoy my dual-boot Gentoo / NixOS (they are quite antagonistic :D); they share the same filesystem, and I really enjoy this weird setup. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/multi-luks-btrfs-boot-gentoo.gmi</guid> <link>gemini://perso.pw/blog//articles/multi-luks-btrfs-boot-gentoo.gmi</link> <pubDate>Mon, 02 Jan 2023 00:00:00 GMT</pubDate> </item> </channel> </rss>