<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>gemini://perso.pw/blog/</link>
    <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>The State forces Google (or Apple) on me</title>
  <description>
    <![CDATA[
<pre># Introduction

It's rare for me, but this is a rant post.

Needing a training course, and in order to complete the online procedures on a CPF account (Compte Formation Professionnelle, the French professional training account), I have to get an "identité numérique +" (digital identity +).

In principle, that's fine: it's a way to create an account while validating the person's identity with an ID document. So far, this is normal and rather well thought out.

# The problem

The big issue is that once the formalities are done, you have to install the Android / iOS application on your phone, and that's where the trouble starts.

=> https://play.google.com/store/apps/details?id=fr.laposte.idn&hl=fr&pli=1 Google Play: L'Identité Numérique La Poste

Having freed my Android phone from Google thanks to LineageOS, I chose not to install Google Play in order to be 100% Google-free, and I install my applications from the F-Droid repository, which covers all my needs.

=> https://f-droid.org/en/ F-Droid project website
=> https://lineageos.org/ LineageOS project website

In my situation, there is a way to install the (fortunately very rare) applications required by some services: using "Aurora Store" on my phone to download an APK from Google Play (the application installation file) and install it.  No problem there, I was able to install La Poste's program.

The problem is that when I launch it, I get this wonderful message: "Erreur, vous devez installer l'application depuis Google Play" (Error, you must install the application from Google Play), and at that point there is absolutely nothing I can do except quit the application.

=> static/identite-numerique.png Error message of the La Poste application on LineageOS without Google services

And here I am, stuck: the State forces me to use Google in order to use its own services 🙄. My options are the following:



# Message to La Poste

Please find a solution so that we can use your service WITHOUT having to rely on Google.

# Extras

It seems it is possible to avoid using the France Connect + application thanks to the following form (thanks Linuxmario)

=> https://www.moncompteformation.gouv.fr/espace-public/je-ne-remplis-pas-les-conditions-pour-utiliser-franceconnect-0 I don't meet the conditions to use FranceConnect+
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/france-google.gmi</guid>
  <link>gemini://perso.pw/blog//articles/france-google.gmi</link>
  <pubDate>Fri, 17 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Launching on Patreon</title>
  <description>
    <![CDATA[
<pre># Introduction

Let me share some breaking news: if you enjoy this blog and my open source work, you can now sponsor me through the service Patreon.

=> https://patreon.com/user?u=23274526 Patreon page to sponsor me

Why would you do that in the first place?  Well, this would allow me to take time off my job and spend it either writing on the blog, or contributing to open source projects, mainly OpenBSD and a bit of nixpkgs.

I've been publishing on this blog for almost 7 years now; in the most recent years I've been writing a lot here, and I still enjoy doing so!  However, I have less free time now, and I'd prefer to keep writing here instead of working at my job full time.  I've occasionally been receiving donations for my blog work, but one-shot gifts (which I appreciate! :-) ) don't help as much as a regular monthly income that I can count on and use to organize myself around my job.

# What's the benefit for Patrons?

I chose Patreon because the platform is reliable and makes it easy to manage some extras for the people sponsoring me.

Let's be clear about the advantages:



# What won't change

This may sound scary to some, I suppose, so let me answer some questions in advance:



# Just a note

It's hard for me to frame exactly what I'll be working on.  I include the OpenBSD Webzine as an extension of the blog, and sometimes ports work too: I write about a program, go down the rabbit hole of updating it, and then there is a whole story to tell.

To conclude, let me thank you if you plan to support me financially: every bit helps, even small sponsorships.  I'm really motivated by this.  I want to promote community-driven open source projects such as OpenBSD, but I also want to cover a topic that matters a lot to me: old hardware reuse.  I highlighted this with the Old Computer Challenge, but it is also the core of all my self-hosting articles and what drives me when using computers.

# Asked Questions

I'll collect asked questions here (not yet frequently asked, though), along with my answers:


</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/going-on-patreon.gmi</guid>
  <link>gemini://perso.pw/blog//articles/going-on-patreon.gmi</link>
  <pubDate>Mon, 13 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Linux $HOME encryption with ecryptfs</title>
  <description>
    <![CDATA[
<pre># Introduction

In this article, I'd like to tell you about ecryptfs, a Linux-specific feature which allows users to have encrypted directories.

While disk encryption done with cryptsetup/LUKS is very performant and secure, there are some edge cases in which you may want to use ecryptfs, whether the disk is LUKS encrypted or not.

I've been able to identify a few use cases making ecryptfs relevant:



=> https://www.ecryptfs.org/ ecryptfs official website

# Full $HOME Encryption

In this configuration, you want all the files in the $HOME directory of your user to be encrypted.  This works well, especially as it integrates with PAM (the "login manager" on Linux), so the files are unlocked upon login.

I tried the following setup on Gentoo Linux, but it is quite standard for any Linux distribution packaging ecryptfs-utils.

## Setup

As I don't want to duplicate documentation effort, let me share two links explaining how to set up the home encryption for a user.

=> https://wiki.gentoo.org/wiki/Encrypt_a_home_directory_with_ECryptfs Gentoo Wiki: Encrypt a home directory with ECryptfs
=> https://wiki.archlinux.org/title/ECryptfs ArchWiki: eCryptfs

Both guides are good, they will explain thoroughly how to set up ecryptfs for a user.

However, here is a TLDR version:

1. install ecryptfs-utils and make sure ecryptfs module is loaded at boot
2. modify `/etc/pam.d/system-auth` to add ecryptfs unlocking at login (3 lines are needed, at specific places)
3. run `ecryptfs-migrate-home -u $YOUR_USER` as root to convert the user home directory into an encrypted version
4. delete the old unencrypted home, which should be named like `/home/YOUR_USER.xxxxx` where xxxxx are random characters (make sure you have backups first)

After those steps, you should be able to log in with your user, and the `mount` output should show a dedicated entry for the home directory.
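
If you want to double-check, the home entry shows up in the mount table; the line below is only illustrative (user name and mount options will differ on your system):

mount | grep ecryptfs
/home/.ecryptfs/YOUR_USER/.Private on /home/YOUR_USER type ecryptfs (rw,nosuid,nodev,ecryptfs_cipher=aes,...)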

# Private directory encryption

In this configuration, you will have ecryptfs encrypting a single directory named `Private` in the home directory.

That can be useful if you already have an encrypted disk, but you have very sensitive files that must stay encrypted when you don't need them; this protects against file leaks on a compromised running system, unless you unlock the directory while the system is compromised.

This can also be used on a disposable system (like my netbook) that isn't encrypted, on which I may still want to keep a few private files.

## Setup

That part is really easy:

1. install a package named `ecryptfs-utils` (may depend on your distribution)
2. run `ecryptfs-setup-private --noautomount`
3. Type your login password
4. Press enter to use an auto-generated mount passphrase (you don't use this one to unlock the directory)
5. Done!

The mount passphrase is used in addition to the login passphrase to encrypt the files; you may need it to unlock backed-up encrypted files, so better save it in your password manager if you make backups of the encrypted files.

You can unlock access to the `~/Private` directory by typing `ecryptfs-mount-private` and entering your login password.  Congratulations, you now have a local safe for your files!
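
A typical session with the private directory looks like this (all commands come from the ecryptfs-utils package):

ecryptfs-mount-private        # asks for your login password and unlocks ~/Private
cp secret-notes.txt ~/Private/
ecryptfs-umount-private       # locks the directory again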

# Performance

Ecryptfs was available in older Ubuntu installer releases as an option to encrypt a user's home directory without encrypting the full disk; it seems it has been abandoned for performance reasons.

I didn't make extensive benchmarks here, but I compared the write speed of random characters into a file on an unencrypted ext4 partition and in the ecryptfs private directory on the same disk.  The unencrypted directory was writing at 535 MB/s while the ecryptfs one was only writing at 358 MB/s, which is almost 33% slower.  However, it's still fast enough for a daily workstation.  I didn't measure the time to read or browse many files, but it must be slower too.  A LUKS-encrypted disk should only have a performance penalty of a few percent, so ecryptfs is really not efficient in comparison, but it remains fast enough as long as you don't run database workloads on it.
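
If you want to reproduce a similar rough comparison, here is a minimal sketch (a crude test, not a rigorous benchmark); generating the random data beforehand avoids measuring /dev/urandom itself:

head -c 1G /dev/urandom > /tmp/random.bin
dd if=/tmp/random.bin of=~/testfile bs=1M conv=fdatasync
dd if=/tmp/random.bin of=~/Private/testfile bs=1M conv=fdatasync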

# Security shortcoming

There are extra security shortcomings with ecryptfs: while your encrypted files are unlocked and in use, their content may be copied into swap, temporary directories, or caches.

If you use the encrypted Private directory, for instance, keep in mind that most image viewers will create a thumbnail in your HOME directory, so pictures in Private may have a local copy available outside the encrypted directory.  Some text editors may also keep a backup file in another directory.

If your system runs a bit low on memory, data may be written to the swap file; if it's not encrypted, someone may be able to recover files that were opened during that time.  There is a command `ecryptfs-setup-swap` in the ecryptfs package which checks whether the swap devices are encrypted, and if not, proposes to encrypt them using LUKS.

One major source of leakage is the `/tmp/` directory, which may be used by programs to make a temporary copy of an opened file.  It may be wise to simply use a `tmpfs` filesystem for it.
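
A minimal fstab entry for that would look like this (adjust the size to your needs):

tmpfs  /tmp  tmpfs  defaults,nosuid,nodev,mode=1777,size=2G  0 0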

Finally, if you only have a Private directory encrypted, don't forget that if you use a file browser to delete a file, it may end up in a trash directory on the unencrypted filesystem.

# Troubleshooting

## setreuid: Operation not permitted

If you get the error `setreuid: Operation not permitted` when running ecryptfs commands, this means the ecryptfs binaries don't have the setuid bit.  On Gentoo, you have to compile `ecryptfs-utils` with the `suid` USE flag.

# Conclusion

Ecryptfs can be useful in some real-life scenarios, and it doesn't have many alternatives.  It's especially user-friendly when used to encrypt the whole home directory, because users don't even have to know about it.

Of course, for a private encrypted directory, the most tech-savvy can just create a big raw file, format it with LUKS, and mount it on demand, but this means you have to manage the disk file as a separate partition with its own size, plus scripts to mount/umount the volume, while ecryptfs offers an easy and secure alternative with a performance drawback.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/encrypt-with-ecryptfs.gmi</guid>
  <link>gemini://perso.pw/blog//articles/encrypt-with-ecryptfs.gmi</link>
  <pubDate>Sun, 12 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Using GitHub Actions to maintain Gentoo packages repository</title>
  <description>
    <![CDATA[
<pre># Introduction

In this blog post, I'd like to share how I had fun using GitHub Actions to keep a repository of generic x86-64 Gentoo binary packages up to date.

Built packages are available at https://interbus.perso.pw/ and can be used in your `binrepos.conf` as a generic x86-64 packages provider; it's not building many packages at the moment, but I'm open to adding more if you want to use the repository.
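
For reference, a minimal `/etc/portage/binrepos.conf` entry pointing at it could look like this (the repository name is arbitrary and the priority is just an example):

[interbus]
priority = 10
sync-uri = https://interbus.perso.pw/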

=> https://github.com/rapenne-s/build-gentoo-packages-for-me GitHub Project page: Build Gentoo Packages For Me

# Why

I don't really like GitHub, but if we can use their CPU for free for something useful, why not?  The whole implementation and setup looked fun enough that I had to give it a try.

I was already using a similar setup locally to build packages for my Gentoo netbook on a more powerful computer, so I knew it was achievable.  I don't have much use for it myself, but maybe a reader will enjoy the setup and do something similar (maybe not even for Gentoo).

My personal infrastructure is quite light, with only an APU router plus a small box with an Atom CPU as a NAS; I was looking for a cheap way to keep their Gentoo systems up to date without having to compile locally.

# Challenges

Building a generic Gentoo packages repository isn't straightforward for a few reasons:



Fortunately, there are Gentoo container images that can be used to start from a fresh Gentoo, and from there, build packages on a clean system every time.  Previously built packages have to be added into the container before each run, otherwise the `Packages` file generated as the repository index won't reference all the files.

Using a `-march=x86-64` compiler flag allows targeting all the amd64 systems, at the cost of less optimized binaries.
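
In the builder, this typically translates to something like the following in `/etc/portage/make.conf` (shown as an illustration, not the exact configuration used by the project):

COMMON_FLAGS="-O2 -pipe -march=x86-64"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"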

As for USE flags, a big part of Gentoo, I chose to select a default profile and simply stick with it.  People using the repository can still change their USE flags; binary packages from the repo will only be picked when they match expectations.

# Setup

We will use GitHub Actions (free plan) to build packages for a given Gentoo profile, and then upload them to a remote server that shares the packages over HTTPS.

The plan is to use a Docker image of a Gentoo stage3 provided by the gentoo-docker-images project, pull previously built packages from my server, build new packages or update existing ones, and push the changes back to my server.  Meanwhile, my server serves the packages over HTTPS.

GitHub Actions is a GitHub feature that makes Continuous Integration easy by providing "actions" (reusable components made by others) that you organize into steps.

For the job, I used the following steps on an Ubuntu system:

1. Deploy SSH keys (used to pull/push packages to my server) stored as secrets in the GitHub project
2. Checkout the sources of the project
3. Make a local copy of the packages repository
4. Create a container image based on the Gentoo stage3 + instructions to run
5. Run the image that will use emerge to build the packages
6. Copy the new repository to the remote server (using rsync to copy only the diff)

=> https://github.com/gentoo/gentoo-docker-images GitHub project page: Gentoo Docker Images
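
To give an idea of the shape of such a workflow, here is a heavily simplified sketch; the secret name, remote host, and paths are hypothetical, and the real workflow in the project linked above differs:

name: build-gentoo-packages
on:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
      - name: Checkout the sources
        uses: actions/checkout@v3
      - name: Pull the current packages
        run: rsync -a builder@example.com:packages/ packages/
      - name: Build packages in a Gentoo container
        run: |
          docker build -t gentoo-builder .
          docker run --rm -v "$PWD/packages:/var/cache/binpkgs" gentoo-builder
      - name: Push the packages back
        run: rsync -a --delete packages/ builder@example.com:packages/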

# Problems encountered

While the idea is simple, I faced a lot of build failures; here is a list of the problems I remember.

## Go is failing to build (problem is Docker specific)

For some reason, Go was failing to build with a weird error; this is due to some sandboxing done by emerge that isn't allowed in the Docker environment.

The solution is to loosen the sandboxing with `FEATURES="-ipc-sandbox -pid-sandbox -sandbox -usersandbox"` in `/etc/portage/make.conf`.  That's not great.

## Raw stage3 is missing pieces

The starter image is a Gentoo stage3, which is quite bare; one critical package needed to build many others, but never pulled as a dependency, is the kernel sources.

You need to install `sys-kernel/gentoo-sources` if you want builds to succeed for many packages.

## No merged-usr profile

The gentoo-docker-images repository doesn't provide merged-usr profiles (yet?), so I had to install merged-usr and run it in order to have an environment matching the selected profile.

## Compilation is too long

Job time is limited to 6 hours on the free plan, so I added a timeout on the emerge doing the build to stop a bit earlier and leave some time to push the packages to the remote server; this saves time for the next run.  Of course, this only works as long as no single package requires more than the timeout to build (which is quite unlikely given the CI is fast enough).
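
As an illustration, the timeout can simply wrap the emerge call (duration and package set are placeholders); the `|| true` keeps the job going when emerge is killed by the timeout, so the packages built so far still get pushed:

timeout 5h emerge --verbose --update --deep --newuse @world || true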

# Security

One has to trust GitHub Actions: GitHub employees may have access to jobs running there, and could potentially compromise the built packages using a rogue container image.  While it's unlikely, it's a possibility.

Also, please note that the current setup doesn't sign the packages.  This is something that could be added later; you can find documentation on the Gentoo Wiki for this part.

=> https://wiki.gentoo.org/wiki/Binary_package_guide#Binary_package_OpenGPG_signing Gentoo Wiki: Binary package guide

Another interesting security area was the rsync access the GitHub Actions job uses to synchronize the packages with the builder.  It's possible to restrict an SSH key to a single command, like one exact rsync invocation with no room to change a single parameter.  Unfortunately, the setup requires rsync in two different cases, downloading and pushing files, so I had to write a wrapper looking at the `SSH_ORIGINAL_COMMAND` variable and allowing either the "pull" rsync or the "push" rsync.

=> http://positon.org/rsync-command-restriction-over-ssh Restrict rsync command over SSH
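
A minimal sketch of such a wrapper, used as the forced command in `authorized_keys` (`command="/usr/local/bin/rsync-wrapper"`); the path is hypothetical, and the matched rsync server-side invocations must be adapted to what your client actually sends:

#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
  "rsync --server --sender "*)   # remote end of the "pull packages" rsync
    exec $SSH_ORIGINAL_COMMAND ;;
  "rsync --server "*)            # remote end of the "push packages" rsync
    exec $SSH_ORIGINAL_COMMAND ;;
  *)
    echo "command rejected" >&2
    exit 1 ;;
esac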

# Conclusion

The GitHub free plan allows you to run a builder 24/7 (with no parallel execution), and it's really fast enough to keep a non-desktop @world up to date.  If you have a pro account, the local GitHub cache may not be as limited, and you may be able to keep the built packages there, removing the "pull packages" step.

If you really want to use this, I'd recommend using a schedule in the GitHub Action to run it every day.  It's as simple as adding this to the GitHub workflow:

on:
  schedule:
    - cron: '0 2 * * *' # every day at 02h00


# Credits

I would like to thank Jonathan Tremesaygues, who wrote most of the GitHub Actions pieces after I shared my idea with him and how I would implement it.

=> https://jtremesay.org/ Jonathan Tremesaygues's website

# Going further

Here is a simple script I'm using to turn a local Linux machine into a Gentoo builder for the box you run it from.  It's using a Gentoo stage3 Docker image, populated with the packages from the local system and its `/etc/portage/` directory.

Note that you have to use `app-misc/resolve-march-native` to generate the compiler command-line parameters replacing `-march=native`, because you want the remote host to build with the correct flags and not its own `-march=native`; you should also make sure those flags work on the remote system.  From my experience, any remote builder newer than your machine should be compatible.

=> https://tildegit.org/solene/gentoo-remote-builder Tildegit: Example of scripts to build packages on a remote machine for the local machine
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</guid>
  <link>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</link>
  <pubDate>Sat, 04 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Lightweight data monitoring using RRDtool</title>
  <description>
    <![CDATA[
<pre># Introduction

I like my servers to run as little code as possible, and as few services as possible in general; this eases maintenance and leaves room for other things to run.  I recently wrote about monitoring software to gather metrics and render them, but they are all overkill if you just want to keep track of a single value over time and graph it.

Fortunately, we have an old and robust tool that does the job fine; it's perfectly documented and called RRDtool.

=> https://oss.oetiker.ch/rrdtool/ RRDtool official website

RRDtool stands for "Round Robin Database tool"; it's a set of programs and a specific file format to gather metrics.  The trick with RRD files is that they have a fixed size: when you create one, you need to define how many values you want to store in it, at which frequency, and for how long.  This can't be changed after the file is created.

In addition, RRD files allow you to create derived time series that keep track of computed values over a longer timespan, but with a lower resolution.  Think of the following use case: you want to monitor your home temperature every 10 minutes for the past 48 hours, but also keep some information for the past year; you can tell RRD to store the average temperature per hour for a week, the average per four hours for a month, and the average per day for a year.  All of this will have a fixed size.
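
As a sketch, the temperature example above could be created like this: 10-minute samples kept for 48 hours, hourly averages for a week, 4-hour averages for a month, and daily averages for a year (the min/max bounds of -50 and 60 are arbitrary):

rrdtool create temperature.rrd --step 600 \
  DS:temp:GAUGE:1200:-50:60 \
  RRA:AVERAGE:0.5:1:288 \
  RRA:AVERAGE:0.5:6:168 \
  RRA:AVERAGE:0.5:24:180 \
  RRA:AVERAGE:0.5:144:365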

# Anatomy of a RRD file

RRD files can be dumped as XML; taking a look at the output may ease the understanding of this special file format.

Let's create a file to monitor the battery level of your computer with a 10-second step, keeping the last 5 values; don't focus on understanding the whole command line now:

rrdtool create test.rrd --step 10 DS:battery:GAUGE:20:0:100 RRA:AVERAGE:0.5:1:5


If we dump the created file using the corresponding command (`rrdtool dump test.rrd`), we get this result (stripped a bit to make it fit better):

<!-- Round Robin Database Dump -->
<rrd>
  <version>0003</version>
  <step>10</step> <!-- Seconds -->
  <lastupdate>1676569107</lastupdate> <!-- 2023-02-16 18:38:27 CET -->
  <ds>
    <name> battery </name>
    <type> GAUGE </type>
    <minimal_heartbeat>20</minimal_heartbeat>
    <min>0.0000000000e+00</min>
    <max>1.0000000000e+02</max>
    <!-- PDP Status -->
    <last_ds>U</last_ds> <value>NaN</value> <unknown_sec> 7 </unknown_sec>
  </ds>
  <!-- Round Robin Archives -->
  <rra>
    <cf>AVERAGE</cf>
    <pdp_per_row>1</pdp_per_row> <!-- 10 seconds -->
    <params> <xff>5.0000000000e-01</xff> </params>
    <cdp_prep>
      <ds>
        <primary_value>0.0000000000e+00</primary_value>
        <secondary_value>0.0000000000e+00</secondary_value>
        <value>NaN</value>
        <unknown_datapoints>0</unknown_datapoints>
      </ds>
    </cdp_prep>
    <database>
      <!-- 2023-02-16 18:37:40 CET / 1676569060 --> <row><v>NaN</v></row>
      <!-- 2023-02-16 18:37:50 CET / 1676569070 --> <row><v>NaN</v></row>
      <!-- 2023-02-16 18:38:00 CET / 1676569080 --> <row><v>NaN</v></row>
      <!-- 2023-02-16 18:38:10 CET / 1676569090 --> <row><v>NaN</v></row>
      <!-- 2023-02-16 18:38:20 CET / 1676569100 --> <row><v>NaN</v></row>
    </database>
  </rra>
</rrd>


The most important thing to understand here is that we have a "ds" (data series) named battery of type GAUGE with no last value (I never updated it), but also an "RRA" (Round Robin Archive) for our average value that contains a timestamp with no value associated to each slot.  You can see that, internally, the 5 slots already exist with a null value.  If I update the file, the first null value will disappear, and a new record will be added at the end with the actual value.

# Monitoring a value

In this guide, I would like to share my experience using rrdtool to monitor my solar panel power output over the last few hours, which can then easily be displayed on my local dashboard.  The data is also collected and sent to a Grafana server, but it's not local, and querying it just to see the last values wastes resources and bandwidth.

First, you need `rrdtool` to be installed; you don't need anything else to work with RRD files.

## Create the RRD file

Creating the RRD file is the trickiest part, because you can't change it afterward.

I want to collect a value every 5 minutes (300 seconds); it's an absolute value between 0 and 4000, so we will define a step of 300 seconds to declare that the file must receive a value every 300 seconds.  The type of the value will be GAUGE, because it's just a value that doesn't depend on the previous one.  If we were monitoring power change over time, we would want DERIVE instead, because it computes the delta between values.

Furthermore, we need to configure the file to give up on a value slot if it's not updated within 600 seconds.

Finally, we want to be able to graph each measurement; this can be done by adding an AVERAGE computed value in the file, but with a resolution of 1 value, storing 240 measurements.  What this means is that each time we add a value to the RRD file, the AVERAGE field will be calculated with only the last value as input, and we will keep 240 of them, allowing us to graph up to 240 * 5 minutes of data back in time.

rrdtool create solar-power.rrd --step 300 ds:value:gauge:600:0:4000 rra:average:0.5:1:240

The fields of `ds:value:gauge:600:0:4000` are:
* value: the variable name
* gauge: the measurement type
* 600: time without update before the slot is considered unknown (null)
* 0: minimum value
* 4000: maximum value

The fields of `rra:average:0.5:1:240` are:
* average: the function to apply, can be AVERAGE, MAX, MIN, LAST, or mathematical operations
* 0.5: the xfiles factor, how much percent of unknown values we accept when calculating a value
* 1: how many previous values are used by the function; 1 means a single value, so it averages itself
* 240: the number of values to keep


And then, you have your `solar-power.rrd` file created.  You can inspect it with `rrdtool info solar-power.rrd` or dump its content with `rrdtool dump solar-power.rrd`.

=> https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html RRDtool create documentation

## Add values to the RRD file

Now that we have prepared the file to receive data, we need to populate it with something useful.  This can be done using the command `rrdtool update`.

CURRENT_POWER=$(some-command-returning-a-value)
rrdtool update solar-power.rrd "N:${CURRENT_POWER}"

In the update string, `N` is the time at which the value was measured (N means "now"), and the part after the colon is the value for the first (and only) field of the RRD file.


=> https://oss.oetiker.ch/rrdtool/doc/rrdupdate.en.html RRDtool update documentation

## Graph the content of the RRD file

The trickiest part, but the least problematic one, is generating a usable graph from the data.  The operation is not destructive as it doesn't modify the file, so we can experiment a lot without affecting the content.

We will generate something simple like the picture below.  Of course, you can add a lot more information, colors, axes, legends, etc., but I need my dashboard to stay simple and clean.

=> ./static/solar-power.svg A diagram displaying solar power over time (on a cloudy day)

rrdtool graph --end now -l 0 --start end-14000s --width 600 --height 300 \
  /var/www/htdocs/dashboard/solar.svg -a SVG \
  DEF:ds0=/var/lib/rrdtool/solar-power.rrd:value:AVERAGE \
  "LINE1:ds0#0000FF:power" \
  "GPRINT:ds0:LAST:current value %2.1lf"


I think most flags are self-explanatory; if not, you can look at the documentation.  What interests us here are the last three lines.

The `DEF` line associates the RRA AVERAGE of the variable `value` in the file `/var/lib/rrdtool/solar-power.rrd` to the name `ds0` that will be used later in the command line.

The `LINE1` line associates a legend and a color with the rendering of this variable.

The `GPRINT` line adds text to the legend; here we use the last value of `ds0` and format it with the printf-style string `current value %2.1lf`.

=> https://oss.oetiker.ch/rrdtool/doc/rrdgraph.en.html RRDtool graph documentation
=> https://oss.oetiker.ch/rrdtool/doc/rrdgraph_examples.en.html RRDtool graph examples

# Conclusion

RRDtool is very nice; it's the storage engine of monitoring software such as collectd or Munin, but we can also use it on the spot with simple scripts.  However, it has drawbacks: it doesn't scale well when you start to create many files, and it generates a lot of I/O and consumes CPU if you need to render hundreds of pictures.  That's why a daemon named `rrdcached` was created to help mitigate the load by handling updates to many RRD files in a more sequential way.

# Going further

I encourage you to look at the official project website; all the other commands can be very useful, and rrdtool can also export data as XML or JSON if needed, which is perfect for plugging it into other software.
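
For instance, the data used in this article could be exported as JSON with something along these lines (reusing the file and data source defined earlier):

rrdtool xport --json --start end-14000s --end now \
  DEF:ds0=/var/lib/rrdtool/solar-power.rrd:value:AVERAGE \
  XPORT:ds0:"power"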
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/rrdtool-light-monitoring.gmi</guid>
  <link>gemini://perso.pw/blog//articles/rrdtool-light-monitoring.gmi</link>
  <pubDate>Thu, 16 Feb 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Introduction to nftables on Linux</title>
  <description>
    <![CDATA[
<pre># Introduction

The Linux kernel has an integrated firewall named netfilter, but you manipulate it through command-line tools such as the good old iptables, or nftables, which will eventually supersede iptables.

Today, I'll share my experience in using nftables to manage my Linux home router, and my workstation.

I won't explain much in this blog post because I just want to introduce nftables and show what it looks like, and how to get started.

I added comments to my configuration files; I hope it's enough to give you a grasp of it and make you curious to learn about nftables if you use Linux.

# Configurations

nftables works by writing the ruleset in a file with `#!/sbin/nft -f` as the shebang; running it allows atomic replacement of the ruleset if it's valid.
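
For example, with the ruleset saved in a file such as `/etc/nftables.rules` (the path is just an example), either of the following applies it atomically:

chmod +x /etc/nftables.rules && /etc/nftables.rules
nft -f /etc/nftables.rules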

Depending on your system, you may need to run the script at boot; on Gentoo for instance, a systemd service is provided to save the rules upon shutdown and restore them at boot.

## Router

#!/sbin/nft -f
flush ruleset

table inet filter {
  # defines a list of networks for further reference
  set safe_local {
    type ipv4_addr
    flags interval
    elements = { 10.42.42.0/24 }
  }

  chain input {
    # drop by default
    type filter hook input priority 0; policy drop;
    ct state invalid drop comment "early drop of invalid packets"

    # allow connections to work when initiated from this system
    ct state {established, related} accept comment "accept all connections related to connections made by us"

    # allow loopback
    iif lo accept comment "accept loopback"

    # remove weird packets
    iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
    iif != lo ip6 daddr ::1/128 drop comment "drop connections to loopback not coming from loopback"

    # make ICMP work
    ip protocol icmp accept comment "accept all ICMP types"
    ip6 nexthdr icmpv6 accept comment "accept all ICMP types"

    # only for known local networks
    ip saddr @safe_local tcp dport {22, 53, 80, 2222, 19999, 12344, 12345, 12346} accept
    ip saddr @safe_local udp dport {53} accept

    # allow on WAN
    iif eth0 tcp dport {80} accept
    iif eth0 udp dport {7495} accept
  }

  # allow NAT to get outside
  chain lan_masquerade {
    type nat hook postrouting priority srcnat;
    meta nfproto ipv4 oifname "eth0" masquerade
  }

  # port forwarding
  chain lan_nat {
    type nat hook prerouting priority dstnat;
    iif eth0 tcp dport 80 dnat ip to 10.42.42.102:8080
  }
}


## Workstation

#!/sbin/nft -f
flush ruleset

table inet filter {
  set safe_local {
    type ipv4_addr
    flags interval
    elements = { 10.42.42.0/24, 10.43.43.1/32 }
  }

  chain input {
    # drop by default
    type filter hook input priority 0; policy drop;
    ct state invalid drop comment "early drop of invalid packets"

    # allow connections to work when initiated from this system
    ct state {established, related} accept comment "accept all connections related to connections made by us"

    # allow loopback
    iif lo accept comment "accept loopback"

    # remove weird packets
    iif != lo ip daddr 127.0.0.1/8 drop comment "drop connections to loopback not coming from loopback"
    iif != lo ip6 daddr ::1/128 drop comment "drop connections to loopback not coming from loopback"

    # make ICMP work
    ip protocol icmp accept comment "accept all ICMP types"
    ip6 nexthdr icmpv6 accept comment "accept all ICMP types"

    # only for known local networks
    ip saddr @safe_local tcp dport 22 accept comment "accept SSH"
    ip saddr @safe_local tcp dport {7905, 7906} accept comment "accept musikcube"
    ip saddr @safe_local tcp dport 8080 accept comment "accept nginx"
    ip saddr @safe_local tcp dport 1714-1764 accept comment "accept kdeconnect TCP"
    ip saddr @safe_local udp dport 1714-1764 accept comment "accept kdeconnect UDP"
    ip saddr @safe_local tcp dport 22000 accept comment "accept syncthing"
    ip saddr @safe_local udp dport 22000 accept comment "accept syncthing"
    ip saddr @safe_local tcp dport {139, 775, 445} accept comment "accept samba"
    ip saddr @safe_local tcp dport {111, 775, 2049} accept comment "accept NFS TCP"
    ip saddr @safe_local udp dport 111 accept comment "accept NFS UDP"

    # for my public IP over VPN
    ip daddr 78.224.46.36 udp dport 57500-57600 accept comment "accept mosh"
    ip6 daddr 2a00:5854:2151::1 udp dport 57500-57600 accept comment "accept mosh"
  }

  # drop anything that looks forwarded
  chain forward {
    type filter hook forward priority 0; policy drop;
  }
}


# Some commands

If you need to operate a firewall using nftables, you can use `nft` to add/remove rules on the fly instead of using the script containing the ruleset.

However, let me share a small cheatsheet of useful commands:

## List rules

If you need to display the current rules in use:

nft list ruleset


## Flush rules

If you want to delete all the rules, just use:

nft flush ruleset
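
## Check a ruleset file

If you want to validate a ruleset file without applying it, `nft` can do a dry run (the path is just an example):

nft -c -f /etc/nftables.rules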


# Going further

If you want to learn more about nftables, there is the excellent man page of the command `nft`.

I used some resources from Arch Linux and Gentoo that you may also enjoy:

=> https://wiki.gentoo.org/wiki/Nftables Gentoo Wiki: Nftables
=> https://wiki.gentoo.org/wiki/Nftables/Examples Gentoo Wiki: Nftables examples
=> https://wiki.archlinux.org/title/Nftables Arch Linux Wiki: Nftables
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/nftables.gmi</guid>
  <link>gemini://perso.pw/blog//articles/nftables.gmi</link>
  <pubDate>Mon, 06 Feb 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>[Cheatsheet] Fossil version control software</title>
  <description>
    <![CDATA[
<pre># Introduction

Fossil is a DVCS (decentralized version control software), an alternative to programs such as darcs, Mercurial, or Git.  It's developed by the same people behind SQLite, and it relies on SQLite internally.

=> https://www2.fossil-scm.org/ Fossil official website

# Why?

Why not?  I like diversity in software, and I'm unhappy to see Git dominating the field.  Fossil is a viable alternative, with a simplified workflow that works very well for my use case.

One feature I really like is autosync: when a remote is configured, Fossil will automatically push changes to the remote, which makes it behave a bit like a centralized version control system such as SVN, but for my usage it's really practical.  Of course, you can disable autosync if you don't want this feature.  I suppose this could be reproduced in Git using a post-commit hook that runs `git push`.

Fossil is opinionated, so you may not like it if it doesn't match your workflow, but when it does, it's a very practical piece of software that won't get in your way.

# Fossil repository is a file

A major, and at first maybe disappointing, fact is that a Fossil repository is a single file.  In order to check out the content of the repository, you need to run `fossil open /path/to/repo.fossil` in the directory where you want the files extracted.

Fossil supports multiple checkouts of different branches in different directories, like Git worktrees.
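
A typical first checkout therefore looks like this (URL and file names are placeholders):

fossil clone https://example.com/myproject myproject.fossil
mkdir myproject && cd myproject
fossil open ../myproject.fossil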

# Cheatsheet

Because I'm used to other version control software, I need a simple cheatsheet for the most common operations; they are easy to learn, but I prefer to note them down somewhere.

## View extra files

You can easily find non-versioned files using the following command:

`fossil extras`

## View changes

You can get a list of tracked files that changed:

`fossil changes`

Note that it only displays a list of files, not the diff, which you can obtain using `fossil diff`.

## Commit

By default, fossil will commit all changes to tracked files; if you only want to commit the changes in one file, you must pass it as a parameter.

`fossil commit`
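
For example, to commit only one file with a message (the file name is illustrative):

fossil commit -m "fix typo in the README" README.md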

## Change author name

`fossil user new solene@t470` and `fossil user default solene@t470`

=> https://www2.fossil-scm.org/home/doc/trunk/www/env-opts.md More possibilities are explained in Fossil documentation

## Add a remote

Copy the .fossil file to a remote server (I'm using ssh), and in your fossil checkout, type `fossil remote add my-remote ssh://hostname//home/solene/my-file.fossil`, and then `fossil remote my-remote`.

Note that the remote server must have the fossil binary available in `$PATH`.

## Display the Web Interface

`fossil ui` will open your web browser logged in as the admin user; you can view the timeline, bug tracker, wiki, forum, etc.  Of course, you can enable/disable everything you want.

## Get changes from a remote

This is a two-step operation: you must first get the changes from the remote Fossil, and then update your local checkout:

fossil pull

fossil update


## Commit partial changes in a file

Fossil doesn't allow staging and committing partial changes in a file like `git add -p` does; the official way is to stash your changes, generate a diff of the stash, edit the diff, apply it, and commit.  It's recommended to use a program named patchouli to select hunks in the diff file to ease the process.

=> https://www2.fossil-scm.org/home/doc/trunk/www/gitusers.md Fossil documentation: Git to Fossil translation

The process looks like this:

fossil stash -m "tidying for making atomic commits"

fossil stash diff > diff

$EDITOR diff

patch -p0 < diff

fossil commit


Note that if you added new files, the "add" information is stashed and contained in the diff.</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/fossil-scm.gmi</guid>
  <link>gemini://perso.pw/blog//articles/fossil-scm.gmi</link>
  <pubDate>Sun, 29 Jan 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Configure syncthing to sync a single file</title>
  <description>
    <![CDATA[
<pre># Introduction

A quick blog entry to remember something that wasn't as trivial as I thought: I needed Syncthing to keep a single file in sync (a KeePassXC database) without synchronizing the whole directory.

You have to use the ignore patterns feature to make this possible.  Put simply, you need the share to ignore every file except the one you want to sync.

This configuration happens in the `.stignore` file in the synchronized directory, but can also be managed from the Web interface.

=> https://docs.syncthing.net/users/ignoring.html Syncthing documentation about ignoring files

# Example

If I want to only sync KeePassXC files (they have the `.kdbx` extension), I have this in my `.stignore` file; the final `*` line ignores everything that doesn't match the negated pattern above it:

!*.kdbx
*


And that's all!

Note that this must be set on all nodes using this share, otherwise you may have surprises.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/syncthing-single-file.gmi</guid>
  <link>gemini://perso.pw/blog//articles/syncthing-single-file.gmi</link>
  <pubDate>Sat, 28 Jan 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How to boot on a BTRFS snapshot</title>
  <description>
    <![CDATA[
<pre># Introduction

I always wanted a simple rollback method on Linux systems; NixOS gave me a full-featured one, but it wasn't easy to find a solution for other distributions.

Fortunately, with BTRFS, it's really simple thanks to snapshots being mountable volumes.

# Setup

You need a Linux system with a BTRFS filesystem; in my examples, the root subvolume (where `/` is) is named `gentoo`.

I use `btrbk` to make snapshots of `/` directly in `/.snapshots`, using the following configuration file:

snapshot_preserve_min 30d

volume /
  snapshot_dir .snapshots
  subvolume .


With a systemd service, it runs once a day, so I'll have 30 days of snapshots to restore my system from if needed.
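
For reference, a minimal service/timer pair could look like the units below (the unit names and the btrbk path are assumptions; your distribution may already ship a btrbk timer you can enable instead):

# /etc/systemd/system/btrbk.service
[Unit]
Description=btrbk snapshot run

[Service]
Type=oneshot
ExecStart=/usr/bin/btrbk run

# /etc/systemd/system/btrbk.timer
[Unit]
Description=Run btrbk once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Then enable it with `systemctl enable --now btrbk.timer`.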

This creates snapshots named like the following:

$ ls /.snapshots/

ROOT.20230102

ROOT.20230103

ROOT.20230104


A snapshot address from BTRFS point of view looks like `gentoo/.snapshots/ROOT.20230102`.

I like btrbk because it's easy to use and configure, and it creates easy-to-remember snapshot names.

# Booting on a snapshot

When you are in the bootloader (GRUB, systemd-boot, LILO, etc.), edit the kernel command line and add the following option (replacing it if it already exists); the example uses the snapshot `ROOT.20230103`:

rootflags=subvol=gentoo/.snapshots/ROOT.20230103


Boot with the new command line, and you should be on your snapshot as the root filesystem.

# Be careful

When you are on a snapshot, this means any change will be specific to this volume.

If you use a separate partition for `/boot`, an older snapshot may not have the kernel (or its module) you are trying to boot.

# Conclusion

This is a very simple but effective mechanism, more than enough to recover from a bad upgrade, especially when you need the computer right now.

# Going further

There is a project named grub-btrfs which can help you add BTRFS snapshots as boot choices in GRUB menus.

=> https://github.com/Antynea/grub-btrfs grub-btrfs GitHub project page
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/boot-on-btrfs-snapshot.gmi</guid>
  <link>gemini://perso.pw/blog//articles/boot-on-btrfs-snapshot.gmi</link>
  <pubDate>Wed, 04 Jan 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Booting Gentoo on a BTRFS from multiple LUKS devices</title>
  <description>
    <![CDATA[
<pre># Introduction

This is mostly a reminder for myself.  I installed Gentoo on a machine, but I reused the same BTRFS filesystem where NixOS is already installed; the trick is that the BTRFS filesystem is composed of two devices (a bit like RAID 0), but they come from two different LUKS partitions.

It wasn't straightforward to unlock that thing at boot.

# Fix grub error

GRUB was trying to autodetect the root partition to add `root=/dev/something`, but as my root filesystem requires both `/dev/mapper/ssd1` and `/dev/mapper/ssd2`, it was simply adding `root=/dev/mapper/ssd1 /dev/mapper/ssd2`, which is wrong.

This required a change in the file `/etc/grub.d/10_linux` where I entirely deleted the `root=` parameter.

# Compile systemd with cryptsetup

A mistake I made was trying to boot with a systemd not compiled with cryptsetup support; this was just failing because, in the initramfs, some systemd services are used to unlock the partitions, and without proper cryptsetup support it didn't work.

# Linux command line parameters

In `/etc/default/grub`, I added the line below; it contains the UUIDs of both needed LUKS partitions, a `root=/dev/dm-0` which is (somewhat unexpectedly) the path of the first unlocked device, and `rd.luks=1` to enable LUKS support.

GRUB_CMDLINE_LINUX="rd.luks.uuid=24682f88-9115-4a8d-81fb-a03ec61d870b rd.luks.uuid=1815e7a4-532f-4a6d-a5c6-370797ef2450 rootfs=btrfs root=/dev/dm-0 rd.luks=1"


# Run Dracut and grub

After the changes, I ran `dracut --force --kver 5.15.85-gentoo-dist` and `grub-mkconfig -o /boot/grub/grub.cfg`.

# Conclusion

It's working fine now.  I thought it would require me to write a custom initrd script, but dracut provides all I needed; still, there were many quirks along the way, with no really helpful messages to understand what was failing.

Now, I can enjoy my Gentoo / NixOS dual boot (they are quite antagonistic :D); they share the same filesystem, and I really enjoy this weird setup.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/multi-luks-btrfs-boot-gentoo.gmi</guid>
  <link>gemini://perso.pw/blog//articles/multi-luks-btrfs-boot-gentoo.gmi</link>
  <pubDate>Mon, 02 Jan 2023 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>