<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Solene'%</title>
<description></description>
<link>gemini://perso.pw/blog/</link>
<atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
<item>
<title>Trying some Linux distributions to free my Steam Deck</title>
<description>
<![CDATA[
<pre># Introduction
As the owner of a Steam Deck (a handheld PC gaming device), I wanted to explore alternatives to the pre-installed SteamOS. Fortunately, this machine is a plain PC with UEFI firmware, allowing you to boot whatever you want.
# What's the deck?
It's like a Nintendo Switch, but much bigger. The "deck" name fits because that's really what it looks like, with two touchpads and four extra buttons behind the deck. By default, it runs SteamOS, an ArchLinux-based system working in two modes:
- Steam GamepadUI mode, with a program named gamescope as the Wayland compositor; everything is well integrated, as you would expect from a gaming device. Special buttons trigger menus, and the integrated monitoring tool shows FPS, power consumption, TDP limits, screen refresh rate, and so on.
- Desktop mode, using KDE Plasma, and it acts like a regular computer
Unfortunately for me, I don't like ArchLinux, and I wanted to understand how the different modes work, because on the Deck you just have a menu button to switch from Gaming to Desktop, and a desktop icon to switch from Desktop to Gaming.
=> https://www.steamdeck.com/ Steam Deck official website (with specs)
Here is a picture I took comparing a Nintendo Switch and a Steam Deck: the Deck is really beefy and huge, but although it weighs more than the Switch, I prefer how it holds and the button placement.
=> static/deck-switch.jpg Steam Deck side by side with a Nintendo Switch
# Alternatives
After starting my quest to free my Deck, I found there were already serious alternatives. Let's explore them.
## HoloISO
This project's purpose is to reimplement SteamOS as best it can, but using only open source components. It also targets alternative devices if you want a Steam Deck experience on other hardware.
=> https://github.com/HoloISO/holoiso Project page
My experience with it wasn't great: once the installation was done, I had to log in to Steam, and at every reboot it asked me to log in again. As the project mostly provides the same ArchLinux-based experience, I wasn't really interested in looking into it further.
## ChimeraOS
This project's purpose is to give Steam Deck users (or owners of similar devices) an OS that fits the device. It currently offers a similar experience, but I've read about plans to offer alternative UIs. On top of that, they integrated a web server to manage emulation ROMs, or Epic Games and GOG installers, instead of having to fiddle with Lutris, Minigalaxy, or Heroic Games Launcher to install games from these stores.
The project also has many side-projects such as gamescope-session, chimera or forks with custom patches.
=> https://chimeraos.org/ Project official website
My experience was very good: the web server handling GOG/Epic is a very cool idea and worked well, and the Steam GamepadUI was working too.
## Jovian-NixOS
This project is truly amazing; it's currently what I'm running on my own devices. Let's use NixOS with some extra patches to run your Deck, and it just works fine!
Jovian-NixOS (in reference to Neptune, the Deck codename) is a set of configurations to use with NixOS to adapt it to the Steam Deck, or any similar handheld device. The installation isn't as smooth as the two others above, because you have to install NixOS from the console and write a bit of configuration, but the result is great. It's not for everyone though.
=> https://github.com/Jovian-Experiments/Jovian-NixOS Project page
Obviously, my experience is very good. I'm in full control of the system thanks to NixOS's declarative approach, with no extra services running unless I want them; it even makes a great Nix remote builder...
## Plain Linux installed like a regular computer
The first attempt was to install openSUSE on the Deck like I would do on any computer. The experience was correct: installation went well, and I got into GNOME without issues.
However, some things you must know about the Deck:
- patches are required on the Linux kernel to get proper fan control; the fans work out of the box now, but the fan curve isn't ideal, e.g. the fan never stops even at low temperature
- in Desktop mode, the controller is seen as a poor mouse with triggers to click; the touchscreen works, but Linux isn't really ready to be used like a tablet, so you need Steam in Big Picture mode to make the controller useful
- many patches here and there (Mesa, mangohud, gamescope) are useful to improve the experience
In order to switch between Desktop and Gaming mode, I found a weird setup that worked for me:
- Gaming mode is started by automatically logging my user in on tty1, with the user's .bashrc checking whether it's running on tty1 and then running Steam over gamescope (see the sketch after this list)
- Desktop mode is started by setting automatic login in GDM
- a script started from a .desktop file toggles between Gaming and Desktop mode, either by killing gamescope and starting GDM, or by stopping GDM and starting tty1. The .desktop file was added to Steam, so from either Steam or GNOME, I was able to switch to the other mode. It worked surprisingly well.
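Here is a minimal sketch of the tty1 check in the user's .bashrc; the gamescope flags are an assumption and vary between versions, so adjust to your setup:

if [ "$(tty)" = "/dev/tty1" ]
then
    # replace the shell with the Steam GamepadUI running under gamescope
    exec gamescope -e -- steam -gamepadui
fi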
It turned out that the Steam GamepadUI "Switch to desktop mode" button under gamescope uses a dbus signal to switch to desktop; the distributions above handle it correctly.
Although it was mostly working, my main issues were:
- no fan curve control, because it's not easy to find the kernel patches and then run the utility controlling the fans; my Deck was constantly making fan noise, and it was irritating
- I had no idea how to enable firmware updates (the OSes above support that)
- integration with mangohud was bad, and performance control in Gaming mode wasn't working
- Sometimes, XWayland would crash or stay stuck when starting a game from Gaming mode
Despite these issues, performance was perfectly fine, as was battery life. But usability should be the priority for such a device, and it didn't work very well here.
# Conclusion
If you already enjoy your Steam Deck the way it is, I recommend sticking with SteamOS. It does the job fine, allows you to install programs from Flatpak, and you can also root it if you really need to install system packages.
If you want to do more on your Deck (use it as a server maybe? Who knows), you may find it interesting to get everything under your control.
# Pro tip
I'm using Syncthing on my Steam Deck and other devices to synchronize GOG/Epic save games. Steam Cloud is neat, but with one minute of Syncthing configuration per game, you get something similar.
Nintendo Switch emulation works fine on Steam Deck, more about that soon :)
=> static/deck-arceus.jpg Steam Deck displaying the Switch game Pokémon Legends: Arceus
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/free-the-steam-deck.gmi</guid>
<link>gemini://perso.pw/blog//articles/free-the-steam-deck.gmi</link>
<pubDate>Sun, 16 Apr 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>A few haikus for early 2023</title>
<description>
<![CDATA[
<pre>A small selection of haikus that were published on Mastodon; that said, they are not always well crafted, but they are my first ones, so let's hope experience helps me do better later on.

A blackbird hunting
A blue sky tinted with white
The thyme in bloom

Snow-covered plateaus
Warm and sheltered inside -
A violent storm

Antarctica -
Cyclopean monuments
A shadowy winter

Little grey pond -
Carpeted with leaves
All in silence

Beach in the sun
A bird on a leash in the sky -
Its string, a kite

Ideas and thoughts -
Like a summer thunderstorm
They fall from the sky

Sleeping in late
Sunday, the clocks changing -
The song of the birds

Sickness and pain
Mild weather, buds in bloom -
Time, the healer

Wind in the leaves -
The trickling of water
A forest awakening

Silent streets
Dawn struggling to rise -
A frosted garden

A full-moon night
Barbecue with friends -
Summer holidays

Some potatoes
A charcuterie platter -
A hearty raclette

Spring-blue sky
Flowers, bees, everything wakes -
A walk in the forest
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/haiku-2023Q1.gmi</guid>
<link>gemini://perso.pw/blog//articles/haiku-2023Q1.gmi</link>
<pubDate>Sun, 09 Apr 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>How to setup a local network cache for Flatpak</title>
<description>
<![CDATA[
<pre># Introduction
As you may have understood by now, I like efficiency on my systems, especially when it comes to network usage due to my poor slow ADSL internet connection.
Flatpak is nice, I like it for many reasons, and what's cool is that it can download only updated files instead of the whole package again.
Unfortunately, when you start using more and more packages that are updated daily, and which require runtimes like the NVIDIA drivers, Mesa, etc., this adds up to quite a lot of daily downloads; multiply that by a few computers and you get a lot of network traffic.
But don't worry, you can cache it on your LAN to download updates only once.
# Setup
As usual for this kind of job, we will use Nginx on a local server on the network, and configure it to act as a reverse proxy to the flatpak repositories.
This requires modifying the URL of each flatpak repository on the client machines, but it's a one-time operation.
Here is the configuration you need on your Nginx to proxy Flathub:
map $status $cache_header {
    200       "public";
    302       "public";
    default   "no-cache";
}

server {
    listen 0.0.0.0:8080; # you may want to listen on port 80, or add TLS
    server_name my-cache.local; # replace this with your hostname, or system IP

    # flathub cache
    set $flathub_cache https://dl.flathub.org;
    location /flathub/ {
        rewrite ^/flathub/(.*) /$1 break;
        proxy_cache flathub;
        proxy_cache_key "$request_filename";
        add_header Cache-Control $cache_header always;
        proxy_cache_valid 200 302 300d;
        expires max;
        proxy_pass $flathub_cache;
    }
}

proxy_cache_path /var/cache/nginx/flathub/cache levels=1:2
    keys_zone=flathub:5m
    max_size=20g
    inactive=60d
    use_temp_path=off;
This will cause nginx to proxy requests to the flathub server, but keep files in a 20 GB cache.
You will certainly need to create the `/var/cache/nginx/flathub` directory, and make sure it has the correct ownership for your system configuration.
If you want to support another flatpak repository (like Fedora's), you need to create a new location and a new cache in your nginx configuration, as sketched below.
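Here is what a second cache could look like; `https://example.org` is a placeholder for the other remote's real URL (you can check it with `flatpak remotes -d`), and this sketch assumes an ostree-based repository like Flathub's:

# goes inside the server block, next to the flathub location
set $other_cache https://example.org;
location /other/ {
    rewrite ^/other/(.*) /$1 break;
    proxy_cache other;
    proxy_cache_key "$request_filename";
    add_header Cache-Control $cache_header always;
    proxy_cache_valid 200 302 300d;
    expires max;
    proxy_pass $other_cache;
}

# goes outside the server block, next to the flathub cache path
proxy_cache_path /var/cache/nginx/other/cache levels=1:2 keys_zone=other:5m max_size=20g inactive=60d use_temp_path=off;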
# Client configuration
On each client, you need to change the URL to reach flathub, in the example above, the URL is `http://my-cache.local:8080/flathub/repo/`.
You can change the URL with the following command:
flatpak remote-modify flathub --url=http://my-cache.local:8080/flathub/repo/
Please note that when you first add the flathub repo, you must use the official URL to get the correct configuration, and only then change its URL with the above command.
# Conclusion
Our dear nginx is still super useful as a local caching server; it's super fun to see updates arriving at 100 MB/s from my NAS now.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/lan-cache-flatpak.gmi</guid>
<link>gemini://perso.pw/blog//articles/lan-cache-flatpak.gmi</link>
<pubDate>Wed, 05 Apr 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Detect left over users and groups on OpenBSD</title>
<description>
<![CDATA[
<pre># Introduction
If you use OpenBSD and administer machines, you may be aware that packages can install new dedicated users and groups, and that if you remove such a package, the users/groups won't be deleted; instead, `pkg_delete` displays instructions for deleting them.
In order to keep my OpenBSD systems clean, I wrote a script that looks for users and groups installed by packages (they start with the character `_`), checks whether the related package is still installed, and if not, outputs instructions that can be run in a shell to clean up your system.
# The code
#!/bin/sh

SYS_USERS=$(mktemp /tmp/system_users.txt.XXXXXXXXXXXXXXX)
PKG_USERS=$(mktemp /tmp/packages_users.txt.XXXXXXXXXXXXXXX)

# users added by packages start with _ and have a UID > 500
awk -F ':' '/^_/ && $3 > 500 { print $1 }' /etc/passwd | sort > "$SYS_USERS"

# users declared by the currently installed packages
find /var/db/pkg/ -name '+CONTENTS' -exec grep -h ^@newuser {} + | sed 's/^@newuser //' | awk -F ':' '{ print $1 }' | sort > "$PKG_USERS"

# users required by a package but missing from /etc/passwd
BOGUS=$(comm -1 -3 "$SYS_USERS" "$PKG_USERS")
if [ -n "$BOGUS" ]
then
    echo "Bogus users/groups (missing in /etc/passwd, but a package needs them)" >&2
    echo "$BOGUS" >&2
fi

# users present in /etc/passwd but no longer needed by any package
EXTRA=$(comm -2 -3 "$SYS_USERS" "$PKG_USERS")
if [ -n "$EXTRA" ]
then
    echo "Extra users" >&2
    for user in $EXTRA
    do
        echo "userdel $user"
        echo "groupdel $user"
    done
fi

rm "$SYS_USERS" "$PKG_USERS"
## How to run
Write the content of the script above into a file, mark it executable, and run it from the shell; it should display a list of `userdel` and `groupdel` commands for all the extra users and groups.
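For illustration, with a hypothetical leftover `_mopidy` user whose package was removed, a run would print something like:

Extra users
userdel _mopidy
groupdel _mopidy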
# Conclusion
With this script and the package `sysclean`, it's quite easy to keep your OpenBSD system clean, as if it were freshly installed.
# Limitations
It's not perfect in its current state: if you already deleted a user, its corresponding leftover group won't be reported.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/openbsd-delete-old-users.gmi</guid>
<link>gemini://perso.pw/blog//articles/openbsd-delete-old-users.gmi</link>
<pubDate>Mon, 03 Apr 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Monitor your remote host network quality using smokeping on OpenBSD</title>
<description>
<![CDATA[
<pre># Introduction
If you need to monitor the network quality of a link, or the network availability of a remote host, I'd recommend taking a look at Smokeping.
=> https://oss.oetiker.ch/smokeping/ Smokeping official Website
Smokeping is a Perl daemon that will regularly run a command (fping, some dns check, etc…) multiple times to check the availability of the remote host, but also the quality of the link, including the standard deviation of the response time.
It becomes very easy to know if a remote host is flaky, or whether the link where Smokeping runs isn't stable any more, which you can tell when all the remote hosts show connectivity issues at once.
Let me explain how to install and configure it on OpenBSD 7.2 and 7.3.
# Installation
Smokeping comes in two parts shipped in the same package: the daemon component that runs 24/7 to gather metrics, and the FCGI component used to render the website for visualizing the data.
First step is to install the `smokeping` package.
pkg_add smokeping
The package also installs the file `/usr/local/share/doc/pkg-readmes/smokeping` explaining the setup. It contains a lot of instructions, from basic setup to advanced configuration, but without much background if you are new to smokeping.
## The daemon
Once the package is installed, the first step is to configure smokeping by editing the file `/etc/smokeping/config` as root.
Under the `*** General ***` section, you can change the variables `owner` and `contact`; this information is displayed on Smokeping's HTML interface, so if you are in a company and colleagues look at the graphs, they can find out whom to reach if there is an issue with smokeping or with the links. This is not very useful if you use it only for yourself.
Under the `*** Alerts ***` section, you can configure email notifications by setting `to` to your email address, and `from` to a custom origin address for the emails smokeping sends.
Then, under `*** Targets ***` section, you can configure each host to monitor. The syntax is unusual though.
- lines starting with `+ SomeSingleWord` create a category with attributes and subcategories. The attribute `title` gives the category a name when it is displayed, and `menu` is the name shown in the sidebar of the website.
- lines starting with `++ SomeSingleWord` create a subcategory for a host. The attributes `title` and `menu` work the same as at the first level, and `host` defines the remote host to monitor; it can be a hostname or an IP address.
That's it for the simplest configuration. It's possible to add other probes such as "SSH ping", DNS, Telnet, or LDAP...
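For reference, probes live in a `*** Probes ***` section, which the `probe = FPing` line at the top of the targets below refers to. A minimal declaration looks roughly like this (the binary path is an assumption, adjust it to your system):

*** Probes ***
+ FPing
binary = /usr/local/sbin/fping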
Let me show a simple example of targets configuration I'm using:
probe = FPing
menu = Top
title = Network Latency Grapher
remark = Welcome to the SmokePing
+ Remote
menu = Remote
title = Remote hosts
++ Persopw
menu = perso.pw
title = My server perso.pw
host = perso.pw
++ openportspl
menu = openports.pl
title = openports.pl VM at openbsd.amsterdam
host = openports.pl
++ grifonfr
menu = grifon.fr
title = grifon.fr VPN endpoint
host = 89.234.186.37
+ LAN
menu = Lan
title = Lan network at home
++ solaredge
menu = solaredge
title = solaredge
host = 10.42.42.246
++ modem
menu = ispmodem
title = ispmodem
host = 192.168.1.254
Now that smokeping is configured, you need to enable and start the service.
rcctl enable smokeping
rcctl start smokeping
If everything is alright, `rcctl check smokeping` shouldn't fail; if it does, you can read `/var/log/messages` to find out why. Usually, the culprit is a `+` line that isn't valid because of a forbidden character or a space.
I recommend always adding a public host from a big platform known to work reliably all the time, to have a comparison point against all your other hosts.
## The Web Interface
Now that the daemon is running, you certainly want to view the graphs produced by Smokeping. Reusing the example from the pkg-readme file, you can configure the httpd web server with this:
server "smokeping.example.org" {
listen on * port 80
location "/smokeping/smokeping.cgi*" {
fastcgi socket "/run/smokeping.sock"
root "/"
}
}
Your service will be available at the address `http://smokeping.example.org/smokeping/smokeping.cgi`.
For this to work, we need to run a separate FCGI server, fortunately packaged as an OpenBSD service.
rcctl enable smokeping_fcgi
rcctl start smokeping_fcgi
Note that there is a way to pre-render the whole HTML interface with a cron job, but I don't recommend it, as it will drain a lot of CPU for nothing, unless you have many users viewing the interface who don't need interactive zoom on the graphs.
# Conclusion
Smokeping is very effective because of the way it renders data: you can easily spot issues in your network that a simple ping or response-time check wouldn't catch.
Please note it's better to have two smokeping setups at different places, monitoring each other's link quality. Otherwise, if a remote host appears flaky, you can't be entirely sure whether the Internet access of the smokeping host is flaky, whether it's the remote host, or whether it's a peering issue.
Here is the 10-day graph for a device on my LAN that is connected to the network using power-line networking.
=> static/smokeping.png Monitoring graph of a device connected on LAN using power line network
Don't forget to read `/usr/local/share/doc/pkg-readmes/smokeping` and the official documentation if you want a more complicated setup.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/smokeping.gmi</guid>
<link>gemini://perso.pw/blog//articles/smokeping.gmi</link>
<pubDate>Sun, 26 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>The State imposes Google (or Apple) on me</title>
<description>
<![CDATA[
<pre># Introduction
This is rare, but this post is a rant.
Needing a training course, and to complete the online procedures on a CPF account (Compte Formation Professionnelle, the French professional training account), I need to have an "identité numérique +" (enhanced digital identity).
In principle, that's cool: it's a way to create an account while validating the person's identity with an ID document. So far, this is normal and rather well thought out.
# The problem
The big issue is that once the formalities are done, you have to install the Android / iOS application on your phone, and that's where trouble begins.
=> https://play.google.com/store/apps/details?id=fr.laposte.idn&hl=fr&pli=1 Google Play: L'Identité Numérique La Poste
Having freed my Android phone from Google thanks to LineageOS, I chose not to install Google Play so as to be 100% Google-free, and I install my applications from the F-droid repository, which covers all my needs.
=> https://f-droid.org/en/ F-droid project website
=> https://lineageos.org/ LineageOS project website
In my situation, there is a solution to install the (fortunately very rare) applications required by some services: using "Aurora Store" on my phone to download an APK from Google Play (the application installation file) and install it. No problem there, I was able to install La Poste's program.
The problem is that when I launch it, I get this wonderful message: "Erreur, vous devez installer l'application depuis Google Play" (Error, you must install the application from Google Play), and from there, there is absolutely nothing I can do except quit the application.
=> static/identite-numerique.png Error message of the La Poste application on LineageOS without Google services
So here I am, stuck: the State forces me to use Google to access its own services 🙄. My options are the following:
- install Google services on my phone, which would really break my heart as it goes against my values
- install the application in an Android emulator with Google services; absolutely not practical, but it solves the problem
- give up the money in my training account (€500 / year)
- raise the problem publicly, hoping it changes something, at least so that the application can be installed without Google services
# Message to La Poste
Please find a solution so that we can use your service WITHOUT resorting to Google.
# Extras
It seems that one can avoid using the France Connect + application via the following form (thanks Linuxmario)
=> https://www.moncompteformation.gouv.fr/espace-public/je-ne-remplis-pas-les-conditions-pour-utiliser-franceconnect-0 I do not meet the conditions to use FranceConnect +
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/france-google.gmi</guid>
<link>gemini://perso.pw/blog//articles/france-google.gmi</link>
<pubDate>Fri, 17 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Launching on Patreon</title>
<description>
<![CDATA[
<pre># Introduction
Let me share some news: if you enjoy this blog and my open source work, you can now sponsor me through Patreon.
=> https://patreon.com/user?u=23274526 Patreon page to sponsor me
Why would you do that in the first place? Well, this would allow me to take time off my job, and spend it either writing for the blog, or contributing to open source projects, mainly OpenBSD and a bit of nixpkgs.
I've been publishing on this blog for almost 7 years now; over the most recent years I've been writing a lot here, and I still enjoy doing so! However, I have less free time now, and I'd prefer to continue writing here instead of working at my job full time. I've occasionally received donations for my blog work (which I appreciate! :-) ), but one-shot gifts won't help me as much as a regular monthly income I can count on, which would help me organize around my job.
# What's the benefit for Patrons?
I chose Patreon because the platform is reliable and makes it easy to manage some extras for the people supporting me.
Let me be clear about the advantages:
- you will occasionally be offered to choose the topic of the blog post I'm writing; I often can't decide what to write about when I look at my pipeline of ideas.
- you will have access to the new blog posts a few days in advance.
- you give me an incentive to write better content, so your money is well spent.
# What won't change
This may sound scary to some I suppose, so let's answer some questions in advance:
- the blog will stay free for everyone.
- the blog will stay JS-free, and no design changes are to be expected.
- the blog won't include ads, sponsored ads or any "influencer" style things.
- publishing over the alternate protocols Gopher and Gemini will continue
- content will be distributed under a CC-BY-4.0 licence (free to use/reuse).
# Just a note
It's hard for me to frame exactly what I'll be working on. I consider the OpenBSD Webzine an extension of the blog, and sometimes ports work counts too: I write about a program, go down the rabbit hole of updating it, and then there is a whole story to tell.
To conclude, let me thank you if you plan to support me financially, every bit will help, even small sponsors. I'm really motivated by this, I want to promote community driven open source projects such as OpenBSD, but I also want to cover a topic that matters a lot to me which is old hardware reuse. I highlighted this with the old computer challenge, but this is also the core of all my self-hosting articles and what drives me when using computers.
# Asked Questions
I'll collect here asked questions (not yet frequently asked though), and my answers:
- Do you accept crypto currency? The answer is no.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/going-on-patreon.gmi</guid>
<link>gemini://perso.pw/blog//articles/going-on-patreon.gmi</link>
<pubDate>Mon, 13 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Linux $HOME encryption with ecryptfs</title>
<description>
<![CDATA[
<pre># Introduction
In this article, I'd like to tell you about the Linux-specific feature ecryptfs, which allows users to have encrypted directories.
While disk encryption done with cryptsetup/LUKS is very performant and secure, there are some edge cases in which you may want to use ecryptfs, whether the disk is LUKS encrypted or not.
I've been able to identify a few use cases making ecryptfs relevant:
- a multi-user system, people want their files to be private (and full disk encryption wouldn't help here)
- an encrypted disk on which you want an encrypted directory that is only available when needed (preventing a compromised running computer from leaking important files)
- a non-encrypted disk on which you want to have an encrypted directory/$HOME instead of reinstalling with full disk encryption
=> https://www.ecryptfs.org/ ecryptfs official website
# Full $HOME Encryption
In this configuration, you want all the files in your user's $HOME directory to be encrypted. This works well, especially as it integrates with PAM (the "login manager" in Linux), so the files are unlocked upon login.
I tried the following setup on Gentoo Linux; the setup is quite standard for any Linux distribution packaging ecryptfs-utils.
## Setup
As I don't want to duplicate documentation effort, let me share two links explaining how to set up the home encryption for a user.
=> https://wiki.gentoo.org/wiki/Encrypt_a_home_directory_with_ECryptfs Gentoo Wiki: Encrypt a home directory with ECryptfs
=> https://wiki.archlinux.org/title/ECryptfs ArchWiki: eCryptfs
Both guides are good, they will explain thoroughly how to set up ecryptfs for a user.
However, here is a TLDR version:
1. install ecryptfs-utils and make sure ecryptfs module is loaded at boot
2. modify `/etc/pam.d/system-auth` to add ecryptfs unlocking at login (3 lines are needed, at specific places)
3. run `ecryptfs-migrate-home -u $YOUR_USER` as root to convert the user home directory into an encrypted version
4. delete the old unencrypted home, which should be named like `/home/YOUR_USER.xxxxx` where xxxxx are random characters (make sure you have backups)
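As an illustration of step 2, the PAM lines look roughly like the following (a sketch based on the wiki guides above; the exact file name and line placement depend on your distribution, so follow the guides):

auth     required pam_ecryptfs.so unwrap
password optional pam_ecryptfs.so
session  optional pam_ecryptfs.so unwrap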
After those steps, you should be able to log in with your user, and the output of `mount` should show a dedicated entry for the home directory.
# Private directory encryption
In this configuration, you will have ecryptfs encrypting a single directory named `Private` in the home directory.
That can be useful if you already have an encrypted disk but keep very secret files that must stay encrypted when you don't need them; this protects against file leaks on a compromised running system, unless you unlock the directory while the system is compromised.
This can also be used on a trashable system (like my netbook) that isn't encrypted, but on which I may want to save a few private files.
## Setup
That part is really easy:
1. install a package named `ecryptfs-utils` (may depend on your distribution)
2. run `ecryptfs-setup-private --noautomount`
3. Type your login password
4. Press enter to use an auto generated mount passphrase (you don't use this one to unlock the directory)
5. Done!
The mount passphrase is used in addition to the login passphrase to encrypt the files; you may need it to unlock backed-up encrypted files, so better save it in your password manager if you back up the encrypted files.
You can unlock access to the directory `~/Private` by typing `ecryptfs-mount-private` and entering your login password. Congratulations, you now have a local safe for your files!
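A typical session looks like this (`tax-2023.pdf` is just an example file); `ecryptfs-umount-private` locks the directory again when you are done:

$ ecryptfs-mount-private
$ mv ~/tax-2023.pdf ~/Private/
$ ecryptfs-umount-private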
# Performance
Ecryptfs was available in older Ubuntu installer releases as an option to encrypt a user's home directory without the full disk; it seems to have been abandoned for performance reasons.
I didn't run extensive benchmarks, but I compared the speed of writing random characters to a file on an unencrypted ext4 partition and in the ecryptfs private directory on the same disk. The unencrypted directory was writing at 535 MB/s, while ecryptfs was only writing at 358 MB/s, almost 33% slower. However, it's still fast enough for a daily workstation. I didn't measure the time to read or browse many files, but it must be slower too. A LUKS-encrypted disk should only have a performance penalty of a few percent, so ecryptfs is really not efficient in comparison, but it's still fast enough if you don't run database workloads on it.
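For reference, such a comparison can be reproduced with something along these lines (a rough sketch, not a rigorous benchmark; note that /dev/urandom itself can be a bottleneck on older CPUs):

dd if=/dev/urandom of=~/testfile bs=1M count=1024 conv=fdatasync
dd if=/dev/urandom of=~/Private/testfile bs=1M count=1024 conv=fdatasync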
# Security shortcoming
There are extra security shortcomings that come with ecryptfs: while your encrypted files are unlocked, their content may be copied into swap, temporary directories, or caches.
If you use the Private encrypted directory, for instance, keep in mind that most image viewers will create a thumbnail in your HOME directory, so pictures in Private may have a local copy available outside the encrypted directory. Some text editors may also keep a backup file in another directory.
If your system runs a bit out of memory, data may be written to the swap file; if it isn't encrypted, one may be able to recover files that were opened during that time. The ecryptfs package provides the command `ecryptfs-setup-swap`, which checks whether the swap areas are encrypted and, if not, proposes to encrypt them using LUKS.
One major source of leakage is the `/tmp/` directory, which may be used by programs to make a temporary copy of an opened file. It may be safer to just use a `tmpfs` filesystem for it, as shown below.
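For instance, an `/etc/fstab` entry like this one keeps `/tmp` in memory:

tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0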
Finally, if you only have a Private directory encrypted, don't forget that if you use a file browser to delete a file, it may end up in a trash directory on the unencrypted filesystem.
# Troubleshooting
## setreuid: Operation not permitted
If you get the error `setreuid: Operation not permitted` when running ecryptfs commands, this means the ecryptfs binaries don't have the setuid bit. On Gentoo, you have to build `ecryptfs-utils` with the `suid` USE flag.
# Conclusion
Ecryptfs can be useful in some real-life scenarios, and doesn't have many alternatives. It's especially user-friendly when used to encrypt the whole home directory, because users don't even have to know about it.
Of course, for a private encrypted directory, the most tech-savvy can just create a big raw file, format it with LUKS, and mount it on demand, but this means managing the disk file as a separate partition with its own fixed size, plus scripts to mount/umount the volume, while ecryptfs offers an easy and secure alternative with a performance drawback.
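For completeness, here is a sketch of that LUKS-file alternative (run as root; the path, size, and volume name are examples):

dd if=/dev/zero of=/home/user/vault.img bs=1M count=2048
cryptsetup luksFormat /home/user/vault.img
cryptsetup open /home/user/vault.img vault
mkfs.ext4 /dev/mapper/vault
mount /dev/mapper/vault /mnt
# when done, lock it again
umount /mnt
cryptsetup close vault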
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/encrypt-with-ecryptfs.gmi</guid>
<link>gemini://perso.pw/blog//articles/encrypt-with-ecryptfs.gmi</link>
<pubDate>Sun, 12 Mar 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Using GitHub Actions to maintain Gentoo packages repository</title>
<description>
<![CDATA[
<pre># Introduction
In this blog post, I'd like to share how I had fun using GitHub Actions to keep a repository of generic x86-64 Gentoo packages up to date.
Built packages are available at https://interbus.perso.pw/ and can be used in your `binrepos.conf` as a generic x86-64 package provider; it's not building many packages at the moment, but I'm open to adding more if you want to use the repository.
=> https://github.com/rapenne-s/build-gentoo-packages-for-me GitHub Project page: Build Gentoo Packages For Me
# Why
I don't really like GitHub, but if we can use their CPUs for free for something useful, why not? The whole implementation and setup looked fun enough that I had to give it a try.
I was already using a similar setup locally, building packages for my Gentoo netbook on a more powerful computer, so I knew it was achievable. I don't have much use for it myself, but maybe a reader will enjoy the setup and do something similar (maybe not for Gentoo).
My personal infrastructure is quite light, with only an APU router plus a small Atom-CPU box as a NAS, so I was looking for a cheap way to keep their Gentoo systems up to date without having to compile locally.
# Challenges
Building a generic Gentoo package repository isn't straightforward for a few reasons:
- compilation flags must match all the consumers' architecture
- default USE flags must be useful for many
- no support for remote builders
- the whole repository must be generated on a single machine with all the files (can't be incremental)
Fortunately, there are Gentoo container images that can be used to start from a fresh Gentoo, and from there, build packages on a clean system every time. Previously built packages have to be copied into the container before each run, otherwise the `Packages` file generated as the repository index won't contain all the files.
Using a `-march=x86-64` compiler flag allows targeting all the amd64 systems, at the cost of less optimized binaries.
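In practice, that means a make.conf along these lines (a sketch; the flags other than -march are my assumption):

# /etc/portage/make.conf
COMMON_FLAGS="-O2 -pipe -march=x86-64"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"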
For the USE flags, a big part of Gentoo, I chose to select a default profile and simply stick with it. People using the repository can still change their USE flags, and only pick binary packages from the repo when they still match expectations.
# Setup
We will use GitHub Actions (free plan) to build packages for a given Gentoo profile, and then upload them to a remote server that shares the packages over HTTPS.
The plan is to use a Docker image of a Gentoo stage3 provided by the gentoo-docker-images project, pull previously built packages from my server, build new packages or update existing ones, and push the changes to my server. Meanwhile, my server serves the packages over HTTPS.
GitHub Actions is a GitHub feature making Continuous Integration easy by providing "actions" (reusable components made by others) that you organize in steps.
For the job, I used the following steps on an Ubuntu system:
1. Deploy SSH keys (used to pull/push packages to my server) stored as secrets in the GitHub project
2. Checkout the sources of the project
3. Make a local copy of the packages repository
4. Create a container image based on the Gentoo stage3 + instructions to run
5. Run the image that will use emerge to build the packages
6. Copy the new repository on the remote server (using rsync to copy the diff)
=> https://github.com/gentoo/gentoo-docker-images GitHub project page: Gentoo Docker Images
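To give an idea of the shape of such a job, here is a workflow skeleton; this is not the project's actual workflow, and the secret name, remote host, and paths are placeholders (see the repository linked above for the real implementation):

name: build Gentoo packages
on:
  schedule:
    - cron: '0 2 * * *'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: deploy SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan example.org >> ~/.ssh/known_hosts
      - name: pull the current packages
        run: rsync -a builder@example.org:packages/ packages/
      - name: build packages in a Gentoo container
        run: |
          docker build -t gentoo-builder .
          docker run -v "$PWD/packages:/var/cache/binpkgs" gentoo-builder
      - name: push the updated packages
        run: rsync -a --delete packages/ builder@example.org:packages/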
# Problems encountered
While the idea is simple, I faced a lot of build failures, here is a list of problems I remember.
## Go is failing to build (problem is Docker specific)
For some reason, Go was failing to build with a weird error; this is due to some sandboxing done by emerge that isn't allowed by the Docker environment.
The solution is to loosen the sandboxing with `FEATURES="-ipc-sandbox -pid-sandbox -sandbox -usersandbox"` in `/etc/portage/make.conf`. That's not great.
## Raw stage3 is missing pieces
The starter image is a Gentoo stage3, which is quite bare; one package critical for building others, but never pulled in as a dependency, is the kernel sources.
You need to install `sys-kernel/gentoo-sources` if you want builds to succeed for many packages.
## No merged-usr profile
The gentoo-docker-images project doesn't provide merged-usr profiles (yet?), so I had to install merged-usr and run it to get a correct environment matching the selected profile.
## Compilation is too long
Job time is limited to 6 hours on the free plan, so I added a timeout on the emerge doing the build to stop a bit earlier, leaving some time to push the packages to the remote server; this saves time for the next run. Of course, this only works as long as no single package requires more than the timeout to build (which is quite unlikely given how fast the CI is).
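The timeout can be as simple as wrapping emerge with the timeout command; the 5-hour value here is an illustration, leaving some headroom under the 6-hour limit:

timeout 5h emerge --update --deep --newuse @world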
# Security
One has to trust GitHub Actions: GitHub employees may have access to the jobs running there, and could potentially compromise built packages using a rogue container image. While it's unlikely, this is a possibility.
Also, please note that the current setup doesn't sign the packages. This is something that could be added later, you can find documentation on the Gentoo Wiki for this part.
=> https://wiki.gentoo.org/wiki/Binary_package_guide#Binary_package_OpenGPG_signing Gentoo Wiki: Binary package guide
Another interesting security area was the rsync access the GitHub action uses to synchronize the packages with the builder. It's possible to restrict an SSH key to a single command, like a single rsync invocation with no room to change a single parameter. Unfortunately, the setup requires using rsync in two different cases, downloading and pushing files, so I had to write a wrapper looking at the variable `SSH_ORIGINAL_COMMAND` and allowing either the "pull" rsync or the "push" rsync.
=> http://positon.org/rsync-command-restriction-over-ssh Restrict rsync command over SSH
# Conclusion
The GitHub free plan allows you to run a builder 24/7 (with no parallel execution), which is really fast enough to keep a non-desktop @world up to date. If you have a pro account, the GitHub cache may not be as limited, and you may be able to keep the built packages there, removing the "pull packages" step.
If you really want to use this, I'd recommend using a schedule in the GitHub action to run it every day. It's as simple as adding this to the GitHub workflow:
on:
schedule:
- cron: '0 2 * * *' # every day at 02h00
# Credits
I would like to thank Jonathan Tremesaygues who wrote most of the GitHub actions pieces after I shared with him about my idea and how I would implement it.
=> https://jtremesay.org/ Jonathan Tremesaygues's website
# Going further
Here is a simple script I'm using to turn a remote Linux machine into a Gentoo builder for the box you run it from. It uses a Gentoo stage3 Docker image, populated with the packages from the local system and its `/etc/portage/` directory.
Note that you have to use `app-misc/resolve-march-native` to generate the compiler command line parameters replacing `-march=native`, because you want the remote host to build with the correct flags and not with its own `-march=native`; you should also make sure those flags work on the remote system. From my experience, any remote builder newer than your machine should be compatible.
=> https://tildegit.org/solene/gentoo-remote-builder Tildegit: Example of scripts to build packages on a remote machine for the local machine
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</guid>
<link>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</link>
<pubDate>Sat, 04 Mar 2023 00:00:00 GMT</pubDate>
</item>
</channel>
</rss>