<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>gemini://perso.pw/blog/</link>
    <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>Firefox hardening with Arkenfox</title>
  <description>
    <![CDATA[
<pre># Introduction

Dear Firefox users, what if I told you it's possible to harden Firefox by changing a lot of settings?  That would be really boring to explain and hard to reproduce on every computer.  Fortunately, someone did the job of automating all of that under the name Arkenfox.

Arkenfox's design is simple: it's a Firefox configuration file (more precisely a `user.js` file) that you drop in your profile directory to override many Firefox defaults with a lot of curated settings that harden privacy and security.  Cherry on the cake, it features an updater and a way to override some of its values with a user-defined file.

This makes Arkenfox easy to use on any system (including Windows), but also easy to tweak or distribute across multiple computers.

=> https://github.com/arkenfox/user.js Arkenfox user.js GitHub project page
=> https://github.com/arkenfox/user.js/wiki Arkenfox user.js Documentation

# Setup

The official documentation contains more information, but basically the steps are the following:

1. find your Firefox profile directory: open `about:support` and search for an entry named "Profile Directory"
2. download the latest Arkenfox user.js release archive
3. if the profile is not new, there is an extra step to clean it using `scratchpad-scripts/arkenfox-cleanup.js`, which contains instructions at the top of the file
4. save the file `user.js` in the profile directory
5. add `updater.sh` to the profile directory, so you can update `user.js` easily later
6. create `user-overrides.js` in the profile directory if you want to override some settings and keep them across updates; the updater is required for the overrides to be applied (a sketch of these steps follows)
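
Here is a minimal sketch of these steps on Linux; the profile path, the download location, and the file names are assumptions, adapt them to what you actually downloaded:

# assumption: the release archive was extracted into ~/Downloads/arkenfox,
# and about:support reported the profile directory below
PROFILE="$HOME/.mozilla/firefox/xxxxxxxx.default-release"
cp ~/Downloads/arkenfox/user.js "$PROFILE/"
cp ~/Downloads/arkenfox/updater.sh "$PROFILE/"
touch "$PROFILE/user-overrides.js"   # your personal overrides go here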

# Configuration

Basically, Arkenfox disables a lot of persistence such as cache storage, cookies, and history.  But it also enforces a fixed-size canvas to render content, resets the preferred languages to English only (which defines the language used to display a multilingual website), and makes many more changes.

You may want to override some settings because you don't like them.  In the project's Wiki, you can find all the Arkenfox overrides, with an explanation of each new value, and which value you may want to use in your own override.

=> https://github.com/arkenfox/user.js/wiki/3.2-Overrides-%5BCommon%5D Arkenfox user.js Wiki about common overrides

For instance, if you want to re-enable the cache storage, add the following code to the file `user-overrides.js`.

user_pref("browser.cache.disk.enable", true);

user_pref("privacy.clearOnShutdown.cache", false);


Now, run the updater script: it will make sure the Arkenfox user.js file is the latest version, and will append your overrides to it.

# Tips

By default, cookies aren't saved, so if you don't want to log in every time you restart Firefox, you have to specifically allow cookies for each website.

The easiest method I found is to press `Ctrl+I`, visit the Permissions tab, and uncheck the "Default permissions" relative to cookies.  You could also do it by visiting Firefox settings, and search for an exception button in which you can enter a list of domains where cookies shouldn't be cleared on shutdown.

By default, entering text in the address bar won't trigger a search anymore, so instead of using Ctrl+L to type in the bar, you can use Ctrl+K to type a search.

# Extensions

The Arkenfox wiki recommends using only the uBlock Origin and Skip Redirect extensions, with some details.  I agree they both work well and do the job.

It's possible to harden uBlock Origin by disabling 3rd party scripts / frames by default, while giving you the opportunity to allow some sources per domain or globally; this is called the blocking mode.  I found it to be way more usable than NoScript.

=> https://github.com/gorhill/uBlock/wiki/Blocking-mode:-medium-mode uBlock Origin blocking mode documentation

# Conclusion

I found Arkenfox a bit hard to use at first because I didn't fully understand the scope of its changes, but it didn't break any website for me, even though it disables a lot of Firefox features that aren't really needed.

This reduces Firefox attack surface, and it's always a welcome improvement.

# Going further

Arkenfox user.js isn't the only set of Firefox settings around, there is also Betterfox (thanks prx!) which provides different profiles, even one aimed at performance.  I haven't tried any of these profiles yet; Arkenfox and Betterfox are parallel projects and not forks, so it's actually complicated to compare which one would be better.

=> https://github.com/yokoffing/Betterfox Betterfox Github project page
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/harden-firefox-with-arkenfox.gmi</guid>
  <link>gemini://perso.pw/blog//articles/harden-firefox-with-arkenfox.gmi</link>
  <pubDate>Wed, 27 Sep 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Flatpak integration in Qubes OS templates</title>
  <description>
    <![CDATA[
<pre># Introduction

I recently wanted to improve Qubes OS accessibility to new users a bit, and yesterday I found out why GNOME Software wasn't working in the offline templates.

Today, I'll explain how to install programs from Flatpak in a template to provide them to other qubes.  I really like Flatpak as it provides extra security features and a lot of software choice, and all the data created by Flatpak-packaged software is compartmentalized into its own tree in `~/.var/app/program.some.fqdn/`.

=> https://qubes-os.org Qubes OS official project website
=> https://www.flatpak.org/ Flatpak official project website
=> https://flathub.org/ Flathub: main flatpak repository

# Setup

All the commands in this guide are meant to be run in a Fedora or Debian template as root.

In order to add the Flathub repository, you need to define the variable `https_proxy` so flatpak can figure out how to reach the repository through the proxy:

export https_proxy=http://127.0.0.1:8082/
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo


Make the environment variable persistent for the user `user`; this will allow GNOME Software to work with flatpak, and all flatpak command line invocations will automatically pick up the proxy.

mkdir -p /home/user/.config/environment.d/
cat <<EOF >/home/user/.config/environment.d/proxy.conf
https_proxy=http://127.0.0.1:8082/
EOF


In order to circumvent a GNOME Software bug, if you want to use it to install packages (Flatpak or not), you need to add the following line to `/rw/config/rc.local`:

ip route add default via 127.0.0.2


=> https://gitlab.gnome.org/GNOME/gnome-software/-/issues/2336 GNOME Software gitlab issue #2336 saying a default route is required to make it work

Restart the template: GNOME Software is now able to install flatpak programs!

# Qubes OS integration

If you install or remove flatpak programs, either from the command line or with the Software application, you certainly want them to be easily available to add in the qubes menus.

Here is a script to automatically keep the applications list in sync every time a change is made to the flatpak applications.

## Inotify-tool

For the setup to work, you will have to install the package `inotify-tools` in the template; it will be used to monitor changes in a flatpak directory.
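
Here is what that looks like, depending on the template (the package name is the same in the Fedora and Debian repositories):

# Fedora template
dnf install -y inotify-tools
# Debian template
apt-get install -y inotify-tools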

## Syncing app menu script

Create `/usr/local/sbin/sync-app.sh`:

#!/bin/sh
# when a desktop file is created/removed
# - link flatpak .desktop files in /usr/share/applications
# - remove outdated entries of programs that were removed
# - sync the menu with dom0
inotifywait -m -r \
    -e create,delete,close_write \
    /var/lib/flatpak/exports/share/applications/ |
while IFS=':' read event
do
    find /var/lib/flatpak/exports/share/applications/ -type l -name "*.desktop" | while read line
    do
        ln -s "$line" /usr/share/applications/
    done
    find /usr/share/applications/ -xtype l -delete
    /etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh
done


You have to mark this file as executable with `chmod +x /usr/local/sbin/sync-app.sh`.

## Start the file monitoring script at boot

Finally, you need to activate the script created above when the template boots, which can be done by adding this snippet to `/rw/config/rc.local`:

# start monitoring flatpak changes to reload icons
/usr/local/sbin/sync-app.sh &


## Updating

This solution will look for flatpak program updates each time the template starts (which should happen regularly to update the template packages), and update them unconditionally.

Add this snippet to `/rw/config/rc.local`:

# check for updates
export https_proxy=http://127.0.0.1:8082/
flatpak upgrade -y --noninteractive


This could be enhanced by asking the user if they want to update or skip for later, but I still have to figure out how to run `notify-send` from the root user; I opened a Qubes OS issue about this.

# Conclusion

With this setup, you can finally install programs from Flatpak in a template and provide them to other qubes, with bells and whistles so you don't have to worry about creating desktop files or keeping them up to date.

Please note that while well-made Flatpak programs like Firefox will add extra security, the Flathub repository allows anyone to publish programs.  You can browse Flathub to see who is publishing which software; it may be the official project team (like Mozilla for Firefox) or some random people.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/flatpak-on-qubesos.gmi</guid>
  <link>gemini://perso.pw/blog//articles/flatpak-on-qubesos.gmi</link>
  <pubDate>Mon, 18 Sep 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How to add pledge to a program in OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

This article is meant to be a simple guide explaining how to make use of the OpenBSD-specific feature pledge in order to restrict a program's capabilities for more security.

While pledge falls under sandboxing features, it's different from the traditional sandboxing we are used to seeing because it happens within the source code itself, and can be really tightened.  Many programs require lots of privileges when initializing, like reading files, doing DNS lookups, etc., and those privileges can be dropped afterwards; this is possible with pledge but not with traditional sandboxing wrappers.

In OpenBSD, most of the base userland has support for pledge, and more and more packaged software (including Chromium and Firefox) has received some code to add pledge.  If a program tries to use a system call that isn't in its pledge promises list, it dies and the violation is reported in the system logs.

What makes pledge pretty cool is how easy it is to implement in your software: it has a simple mechanism of system call families, so you don't have to worry about listing every system call, only their categories (named promises), like reading a file, writing a file, executing binaries, etc.

=> https://man.openbsd.org/pledge.2 OpenBSD manual page for pledge(2)

# Let's pledge a program

I found a small utility that I will use to illustrate how to add pledge to a program.  The program is qprint, a quoted-printable encoder/decoder written in C.  This kind of converter is quite easy to pledge because most of the time they only take an input, do some computation, and produce an output; they don't run forever and don't use the network.

=> https://www.fourmilab.ch/webtools/qprint/ qprint official project page

## Digging in the sources

When extracting the sources, we find a bunch of files; we will focus on reading the `*.c` files, and the first thing we want to find is the function `main()`.

It turns out the main function is in the file `qprint.c`.  It's important to call pledge as soon as possible in the program, most of the time right after variable initialization.

## Modifying the code

Adding pledge to a program requires understanding how it works, because features that aren't often used may be broken by pledge, and programs featuring live reloading or able to change behavior at runtime are complicated to pledge.

Within the function `main`, below the variable declarations, we will add a call to pledge for `stdio` because the program can display the result on the output, `rpath` because it can read files, and `wpath` as it can also write files.

#include <unistd.h>
[...]
pledge("stdio rpath wpath", NULL);


It's OK, we included the header providing pledge, and called it from the code.  But what if the pledge call fails for some reason?  We need to ensure it worked, or abort the program.  Let's add some checks.

#include <unistd.h>
#include <err.h>
[...]
if (pledge("stdio rpath wpath", NULL) == -1) {
    err(1, "pledge call didn't work");
}


This is a lot better now: if the pledge call fails, the program will stop and we will be warned about it.  I don't know exactly under which circumstances it could fail, but maybe if a promise name changes or doesn't exist anymore; it would be bad if pledge silently failed.

## Testing

Now that we made some changes to the program, we need to verify it's still working as expected.

Fortunately, qprint comes with a test suite which can be run with `make wringer`; if the test suite passes and the tests have good coverage, it means we probably haven't broken anything.  If the test suite fails, we should find an error in the output of `dmesg` telling us why it failed.

And, it failed!

qprint[98802]: pledge "cpath", syscall 5


This error (which killed the PID instantly) indicates that the pledge list is missing `cpath`; this makes sense because qprint has to create new files if you specify an output file.

After adding `cpath` to the list and running the test suite again, all tests pass!  Now we know that the software can't do anything except use the system calls we whitelisted.

We could tighten pledge further by dropping `rpath` if the file is read from stdin, and `cpath wpath` if the output is sent to stdout.  I leave this exercise to the reader :-)

## The diff

Here is my diff to add pledge support to qprint.

Index: qprint.c
--- qprint.c.orig
+++ qprint.c
@@ -2,6 +2,8 @@
 #line 70 "./qprint.w"
 #include "config.h"
+#include <unistd.h>
+#include <err.h>
 #define REVDATE "16th December 2014" \
@@ -747,6 +749,9 @@ char*cp;
+if (pledge("stdio cpath rpath wpath", NULL) == -1) {
+ err(1, "pledge error");
+}
 fi= stdin;
 fo= stdout;


# Using pledge in non-C programs

It's actually possible to call pledge() from other programming languages; Perl has a library provided in the OpenBSD base system that works out of the box.  For some others, such a library may already be packaged (for Python and Go at least).  If you use something less common, you can define an interface to call the libc function.

=> https://man.openbsd.org/man3p/OpenBSD::Pledge.3p OpenBSD manual page for the Perl pledge library

Here is an example in Common Lisp creating a new function `c-kiosk-pledge`.

#+ecl
(progn
  (ffi:clines "
    #include <unistd.h>
    #ifdef __OpenBSD__
    void kioskPledge() {
       pledge(\"dns inet stdio tty rpath\",NULL);
    }
    #endif")
  #+openbsd
  (ffi:def-function
     ("kioskPledge" c-kiosk-pledge)
     () :returning :void))


# Extra

It's possible to find which running programs are currently using pledge() with `ps auxww | awk '$8 ~ "p" { print }'`: any process whose state contains `p` is pledged.

If you want to add pledge to a packaged program on OpenBSD, make sure it still fully works.

Adding pledge to a program that requires most promises won't achieve much...

# Exercise for the reader

Now, if you want to practice, you can tighten the pledge call so qprint only gets the `stdio` promise when it's used in a pipe for both input and output, like this: `./qprint < input.txt > output.txt`.

Ideally, it should add the promises `cpath wpath` only when it writes into a file, and `rpath` only when it has to read a file, so when stdin and stdout are used, only `stdio` would be requested at the beginning.

Good luck, have fun!  Thanks to Brynet@ for the suggestion!

# Conclusion

The system call pledge() is a wonderful security feature that is reliable, and as it must be done in the source code, the program isn't run from within a sandboxed environment that may be possible to escape.  I can't say pledge can't be escaped, but I think it's a lot less likely than with any other sandbox mechanism (especially since the program immediately dies if it tries to bypass it).

Next time, I'll present its companion system call unveil, which is used to restrict access to the filesystem except for some developer-defined paths.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-how-to-pledge-a-program.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-how-to-pledge-a-program.gmi</link>
  <pubDate>Mon, 11 Sep 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>My top 20 video games</title>
  <description>
    <![CDATA[
<pre># Introduction

I wanted to share my list of favorite games of all time.  Making the list wasn't easy though, so I set some rules to help me decide.

Here are the criteria:



Trivia: I'm not a huge gamer.  I still play many games nowadays, but I only play each of them for a couple of hours to see what they have to offer in terms of gameplay and mechanics, and to see if they are innovative in some way.  If a game is able to surprise me or give me something new, I may spend a bit more time on it.

# My top 20

Here is the list of the top 20 games I enjoyed, and which I'd happily play again anytime.

I tried to single out some games as a bit better than the others, so there is my top 3, my top 10, and the top 20.  I haven't been able to rank them from 1 to 20, so I just made tiers.

## Top 20 to 11

### Heroes of Might and Magic III

=> https://www.gog.com/fr/game/heroes_of_might_and_magic_3_complete_edition Product page on GOG

I spent so many hours playing with my brother or friends, sharing the mouse each turn so everyone could play with a single computer.

And the social factor wasn't the only nice thing: the game itself is cool, there are many different factions to play, and there is real strategy at play to win.  A must have.

### Saturn Bomberman

=> https://retrospiritgames.blogspot.com/2013/11/retro-review-saturn-bomberman-saturn.html Game review

The Sega Saturn wasn't very popular, but it had some good games, and one of them is Saturn Bomberman.  Of all the games in the Bomberman franchise, this one really looks like the best: it featured dinosaurs with unique abilities that could grow up, some weird items, and many maps.

And it had an excellent campaign that was long to play, and could be played in coop!  The campaign was really really top notch for this kind of game, with unique items you couldn't find in multiplayer.

### Tony Hawk's Pro Skater 1 and 2

=> https://store.epicgames.com/fr/p/tony-hawks-pro-skater-1-and-2 Product page on Epic Game Store

I guess this is a classic.  I played the Nintendo 64 version a lot, and now we have both games in one, with a high refresh rate, HD textures, and still the same good music.

A chill game that is always fun to play.

### Risk of rain 2

=> https://store.steampowered.com/app/632360/Risk_of_Rain_2/ Product page on Steam

A pure rogue-like that shines in multiplayer, lot of classes, lot of weapons, lot of items, lot of enemies, lot of fun.

While it's not the kind of game I'd play all day, I'm always up for a run or two.

### Warhammer 40K: Dawn of War

=> https://store.steampowered.com/app/4580/Warhammer_40000_Dawn_of_War__Dark_Crusade/ Product page on Steam (Dark Crusade)

This may sound like heresy, but I never played the campaign of this game.  I just played skirmish or in multiplayer with friends, and with the huge factions choice with different gameplay, it's always cool even if the graphics aged a bit.

Being able to send a dreadnought from space directly into the ork base, or send legions of necrons at that Tau player, is always a source of joy.

### Street Fighter 2 Special Champion Edition

=> https://www.youtube.com/watch?v=sPTb1nvRg4s Video review on YouTube

A classic on the Megadrive/Genesis: it's smooth, the music is good, and there are so many characters and stages, with incredible soundtracks.  The combos were easy to remember, just enough to give each character their own identity and let players onboard quickly.

Maybe the Super NES version is superior, but I always played it on the Megadrive.

### Slay the Spire

=> https://www.gog.com/fr/game/slay_the_spire Product page on GOG

Maybe the game which demonstrated we can do great deck based video games.

You play a character with a set of skills as cards, gathering items while climbing a tower.  It can get a bit repetitive over time, but the game itself is good, and doing a run occasionally is always tempting.

The community made a lot of mods, even adding new characters with very specific mechanics, I highly recommend it for anyone looking for a card based game.

### Monster Hunter 4 Ultimate

=> https://www.ign.com/articles/2015/02/10/monster-hunter-4-ultimate-review Game review on IGN

My first Monster Hunter game, on 3DS.  I absolutely loved it, insane fights against beloved monsters (we need to study them carefully, so we need to hunt a lot of them :P).

While Monster Hunter World showed better graphics and smoother gameplay, I still prefer the more rigid MH games like MH4U or MH Generations Ultimate.

The 3D effect on the console was working quite well too!

### Peggle Nights

=> https://store.steampowered.com/app/3540/Peggle_Nights/ Product page on Steam

A simple arcade game with some extra powers depending on the character you picked.  It's really addictive despite the gameplay being simplistic.

### Monster Train

=> https://www.gog.com/fr/game/monster_train Product page on GOG

A very good card game with multiple factions, but not like Slay the Spire.

There are lots of combos to create since cards are persistent within the train, and runs don't depend that much on RNG (random number generation), which makes it a great game.

## Top 10 to 4

Not ranked, let's enter the top 10 up to just before the top 3.

### Call of Cthulhu: Prisoner of Ice

=> https://www.gog.com/fr/game/call_of_cthulhu_prisoner_of_ice Product page on GOG

One of the first PC games I played, when I was 6.  I'm not usually into point & click, but this one features Lovecraftian horrors, so it's good :)

### The Elder Scrolls IV: Oblivion

=> https://www.gog.com/fr/game/elder_scrolls_iv_oblivion_game_of_the_year_edition_deluxe_the Product page on GOG

A classic among RPGs.  I wanted to put an Elder Scrolls game in the list and I went with Oblivion.  In my opinion, it was the coolest one compared to Morrowind or Skyrim.  I have to say I hesitated with Morrowind, but because of all of Morrowind's flaws and issues, Oblivion is the better game.  Skyrim was just bad for me, really boring and not interesting.

Oblivion gave you the opportunity to discover many cities, with a day/night cycle and NPCs that had homes and went to work during the day.  The game was incredible when it was released, and I think it's still really good.

Trivia: I never finished the story of Morrowind or Oblivion, and yet I spent a lot of time playing them!

### Shining the Holy Ark

=> https://www.youtube.com/watch?v=MF2q28fWRzA Video review on YouTube

Another Sega Saturn game, almost unknown to the public I guess.  While not a Shining Force game, it's part of the franchise.

It's an RPG / dungeon crawler in first-person view, in which you move from tile to tile and sometimes fight monsters with your team.

### Into the Breach

=> https://www.gog.com/fr/game/into_the_breach Product page on GOG

The greatest puzzle game I ever played.  It's like chess, but actually fun.  You move mechas on a small tiled board, and on your turn you must think about everything that will happen and in which order.

The number of mechas and pieces of equipment you find in the game makes it really replayable, and game sessions can be short, so it's always tempting to start yet another run.

### Like a Dragon

=> https://www.gog.com/fr/game/yakuza_like_a_dragon Product page on GOG

My first Yakuza / Like a dragon game, I didn't really know what to expect, and I was happy to discover it!

A Japanese turn-based RPG featuring the most stupid skills and quests I've ever seen.  The story was really engaging, and unlocking new jobs / characters leads to even more silliness.

### Secret of Mana

=> https://www.ign.com/articles/2008/10/14/secret-of-mana-review Game review on IGN

A Super NES classic, and it was possible to play in coop with a friend!

The game had so much content: lots of weapons, magic, and monsters, and the soundtrack is just incredible all along.  Even better, at some point in the game you get the opportunity to travel by riding a dragon in a 3D view over the planet!

I start and finish this game every few years!

### Baldur's Gate 3

=> https://www.gog.com/fr/game/baldurs_gate_iii Product page on GOG

At the moment, it's the best RPG I've played, and it's turn based, just how I like them.

I'd have added Neverwinter Nights, but BG3 does better than it in every way, so I kept BG3 instead.

Every new game can be played very differently from the previous one, there are so many possibilities out there; it's quite the next level of RPG compared to what we had before.

## Top 3

And finally, not ranked but my top 3 of my favorite games!

### Factorio

=> https://www.gog.com/fr/game/factorio Product page on GOG

After hesitating between Factorio and Dyson Sphere Program in the list, I chose to retain Factorio, because DSP is really good, but I can't see myself starting it again and again like Factorio.  DSP has a very very slow beginning, while Factorio provides fun much faster.

Factorio invented a new genre of game: automation.  I get crazy with automation, optimization.  It's like doing computer stuff in a game, everything is clean, can be calculated, I could stare at conveyor belts transporting stuff like I could stare at Gentoo compilation logs for hours.  The game is so deep, you can do crazy things, even more when you get into the logic circuits.

While I finished the game, I'm always up for a new world with some goals, and modding community added a lot of high quality content.

The only issue with this game is that it's hard to stop playing.

### Streets of Rage 4

=> https://www.gog.com/fr/game/streets_of_rage_4 Product page on GOG

While I played Streets of Rage 2 a lot more than the 4th, I think this modern version is just better.

You can play with a friend almost immediately, the fun is there, and brawling bad guys is pretty cool.  The music is good, the character roster is complete, and it's just 100% fun to play again and again.

### Outer Wilds

=> https://store.steampowered.com/app/753640/Outer_Wilds/ Product page on Steam

That's one game I wish I could forget, just to play it again...

It gave me a truly unique experience as a gamer.

It's an adventure game featuring a 15-minute time loop; the only thing you acquire in the game is knowledge, in your own mind.  With that knowledge, you can complete the game in different ways, but first you need to find clues leading to other clues, leading to some pieces of the whole puzzle.

# Games that I couldn't put in the list

There are some games I really enjoyed, but for some reason I haven't been able to put them in the list; it could be replayability issues, or maybe the nostalgia factor was too high?


</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/my-top20-favorite-video-games.gmi</guid>
  <link>gemini://perso.pw/blog//articles/my-top20-favorite-video-games.gmi</link>
  <pubDate>Thu, 31 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>OpenBSD vmm and qcow2 derived disks</title>
  <description>
    <![CDATA[
<pre># Introduction

Let me show you a very practical feature of the qcow2 virtual disk format, available in OpenBSD vmm, that allows you to easily create derived disks from an original image (also called delta disks).

A derived disk image is a new storage file that inherits all the data from the original file, without ever modifying the original.  It's like stacking a new fresh disk on top of the previous one, with all changes now written to the new one.

This allows interesting use cases, such as using a golden image to provide a base template (like a fresh OpenBSD install), or creating a temporary disk to try changes without harming the original file (and without having to back up a potentially huge file).

This is NOT OpenBSD specific; it's a feature of the qcow2 format, so while this guide uses OpenBSD as an example, it will work wherever qcow2 can be used.

=> https://man.openbsd.org/vmctl#b OpenBSD vmctl man page: -b flag

# Setup

First, you need to have a qcow2 file with something installed in it, let's say you already have a virtual machine with its storage file `/var/lib/vmm/alpine.qcow2`.

We will create a derived file `/var/lib/vmm/derived.qcow2` using the `vmctl` command:

vmctl create -b /var/lib/vmm/alpine.qcow2 /var/lib/vmm/derived.qcow2


That's it!  Now you have a new disk that inherits all of the base file's data without ever modifying it.
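
For example, you could boot a throwaway VM from the derived disk; this is just a sketch, the VM name and memory size are arbitrary, check vmctl(8) for the flags matching your setup:

vmctl start -m 1G -d /var/lib/vmm/derived.qcow2 testvm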

# Limitations

The derived disk will stop working if the original file is modified, so once you make derived disks from a base image, you shouldn't modify the base image.

However, it's possible to merge changes from a derived disk to the base image using the `qemu-img` command:

=> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-re_basing_a_backing_file_of_an_image Red Hat documentation: Rebasing a Backing File of an Image
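
For instance, assuming `qemu-img` is installed (from the qemu package or similar), merging the derived disk back would look like this sketch:

# write the changes stored in derived.qcow2 into its backing file
qemu-img commit /var/lib/vmm/derived.qcow2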

# Conclusion

Derived images can be useful in several scenarios: if you have an image and want to experiment without making a full backup, just use a derived disk.  If you want to provide a golden image as a starting point, like an installed OS, this works too.

One use case I had was with OpenKuBSD: I had a single OpenBSD install as a base image, and each VM had a derived disk as its root, removed and recreated at every boot, plus a dedicated disk for /home.  This allows me to keep all the VMs clean while having only a single system to manage.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-vmm-templates.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-vmm-templates.gmi</link>
  <pubDate>Thu, 31 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Manipulate PDF files easily with pdftk</title>
  <description>
    <![CDATA[
<pre># Introduction

I often need to work with PDFs: sometimes I need to extract a single page or add a page, and too often I need to rotate pages.

Fortunately, there is a pretty awesome tool to do all of these tasks, it's called PDFtk.

=> https://gitlab.com/pdftk-java/pdftk pdftk official project website

# Operations

Pdftk's command line isn't the most obvious out there, but it's not that hard.

## Extracting a page

Extracting a page requires the `cat` subcommand, and we need to give it a page number or a range of pages.

For instance, extracting page 11 and pages 16 to 18 from the file my_pdf.pdf into a new file export.pdf can be done with the following command:

pdftk my_pdf.pdf cat 11 16-18 output export.pdf


## Merging PDF into a single PDF

Merging multiple PDFs into a single PDF also uses the subcommand `cat`.  In the following example, you concatenate the PDFs first.pdf and second.pdf into a merged.pdf result:

pdftk first.pdf second.pdf cat output merged.pdf


Note that they are concatenated in the order they appear on the command line.
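
Since the inputs are simply taken in order, a shell glob works too; for instance, merging every PDF of the current directory (in the order the shell sorts them) into a single document, with the output placed elsewhere so it doesn't match the glob:

pdftk *.pdf cat output /tmp/all.pdf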

## Rotating PDF

Pdftk comes with a very powerful way to rotate PDF pages.  You can specify pages or ranges of pages to rotate, the whole document, or only odd/even pages, etc.

If you want to rotate all the pages of a PDF clockwise (east), you need to specify the range `1-end`, which means first to last page:

pdftk input.pdf rotate 1-endeast output rotated.pdf


If you want to select even or odd pages, you can add the keyword `even` or `odd` between the range and the rotation direction: `1-10oddwest` or `2-8eveneast` are valid rotations.

## Reversing the page ordering

If you want to reverse the page order of your PDF, you can use the special range `end-1`, which goes through pages from the last to the first; with the subcommand `cat`, this simply creates a new PDF with the pages reversed:

pdftk input.pdf cat end-1 output reversed.pdf


# Conclusion

Pdftk has some other commands; most people will only need to extract / merge / rotate pages, but take a look at the documentation to learn about all of pdftk's features.

PDFs are usually a pain to work with, but pdftk makes it very fast and easy to apply transformations to them.  What a great tool :-)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/pdftk-guide.gmi</guid>
  <link>gemini://perso.pw/blog//articles/pdftk-guide.gmi</link>
  <pubDate>Tue, 22 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Migrating prosody internal storage to SQLite on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

As some may know, I'm an XMPP user; XMPP is an instant messaging protocol which used to be known as Jabber.  My server runs the Prosody XMPP server on OpenBSD.  Recently, I got more users on my server, and I wanted to improve performance a bit by switching from the internal storage to SQLite.

Prosody actually comes with a tool to switch from one storage backend to another, but I found the documentation lacking, and on OpenBSD the migration tool isn't packaged (yet?).

The switch to SQLite drastically reduced Prosody's CPU usage on my small server, and went pain-free.

=> https://prosody.im/doc/migrator Prosody documentation: Prosody migrator

# Setup

For the migration to be done, you will need a few prerequisites:



On OpenBSD, the migration tool can be retrieved by downloading the sources of prosody.  If you have the ports tree available, just run `make extract` in `net/prosody` and cd into the newly extracted directory.  The directory path can be retrieved using `make show=WRKSRC`.

The migration tool can be found in the subdirectory `tools/migration` of the sources; the program `gmake` is required to build it (it only replaces a few variables in the script, so no need to worry about a complex setup).

In the migration directory, run `gmake`; you will obtain the migration tool `prosody-migrator.install`, which is the program you will run for the migration to happen.
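
Put together, the steps above look like this (assuming the ports tree lives in /usr/ports):

cd /usr/ports/net/prosody
make extract
cd "$(make show=WRKSRC)/tools/migration"
gmake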

# Prepare the configuration file

In the migration directory, you will find a file `migrator.cfg.lua.install`; this is a configuration file describing your current Prosody deployment and what you want from the migration.  It defaults to a conversion from "internal" to "sqlite", which is what most users will want in my opinion.

Make sure the variable `data_path` in the file refers to `/var/prosody` which is the default directory on OpenBSD, and check the hosts in the "input" part which describe the current storage.  By default, the new storage will be in `/var/prosody/prosody.sqlite`.


# Run the tool

Once you have the migrator and its configuration file, it's super easy to proceed:



storage = "sql"

sql = {

driver = "SQLite3";

database = "prosody.sqlite";

}




If you got an error at the migration step, check the logs carefully to see if you missed something, a bad path maybe.

# Conclusion

Prosody comes with a migration tool to switch from one storage backend to another; that's very handy when you didn't think about scaling the system correctly at first.

The migrator can also be used to migrate from the ejabberd server to Prosody.

Thanks prx for your report about some missing steps!
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/prosody-migrating-to-sqlite.gmi</guid>
  <link>gemini://perso.pw/blog//articles/prosody-migrating-to-sqlite.gmi</link>
  <pubDate>Mon, 21 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Some explanations about OpenBSD memory usage</title>
  <description>
    <![CDATA[
<pre># Introduction

I regularly see people reporting high memory usage on OpenBSD when looking at some monitoring program output.

Those programs may not be reporting what you think.  Memory usage can be accounted for in different ways.

Most of the time, the file system cache stored in memory is added to the memory usage, which leads people to think memory consumption is high.

# How to figure out the real memory usage?

Here are a few methods to gather the used memory.

## Using ps

You can actually use `ps` and sum the RSS column and display it as megabytes:

ps auwxx | awk '{ sum+=$6 } END { print sum/1024 }'


You could use the 5th column if you want to sum the virtual memory, which can be way higher than your system memory (hence why it's called virtual).
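
Following the same idea, summing the 5th column gives the total virtual memory in megabytes:

ps auwxx | awk '{ sum+=$5 } END { print sum/1024 }'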

## Using top

When running `top` in interactive mode, you can find a memory line at the top of the output, like this:

Memory: Real: 244M/733M act/tot Free: 234M Cache: 193M Swap: 158M/752M


This means there are 244 MB of memory currently in use, and 158 MB in the swap file.

The cache column displays how much file system data is cached in memory.  This is extremely useful: every time you open a program, if it's already in the memory cache, this avoids seeking it on the storage media, which is way faster.  This memory is freed when needed if there is not enough free memory available.

The "free" column only tell you that this ram is completely unused.

The number 733M indicates the total real memory, which includes memory in use that could be freed if required; however, if someone finds a clearer explanation, I'd be happy to read it.

## Using systat

The command `systat` is OpenBSD specific, often overlooked but very powerful.  It has many displays you can switch between using the left/right arrows; each aspect of the system has its own display.

The default display has a "memory totals in (KB)" area about your real, free or virtual memory.

# Going further

Inside the kernel, the memory naming is different, and there are extra categories.  You can find them in the kernel file `sys/uvm/uvmexp.h`:

=> https://github.com/openbsd/src/blob/master/sys/uvm/uvmexp.h#L56-L62 GitHub page for sys/uvm/uvmexp.h lines 56 to 62

# Conclusion

When one looks at OpenBSD memory usage, it's better to understand the various fields before reporting a wrong amount, or claiming that OpenBSD uses too much memory.  But we have to admit the documentation explaining each field is quite lacking.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-understand-memory-usage.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-understand-memory-usage.gmi</link>
  <pubDate>Tue, 15 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Authenticate the SSH servers you are connecting to</title>
  <description>
    <![CDATA[
<pre># Introduction

It's common knowledge that SSH connections are secure; however, they always had a flaw: when you connect to a remote host for the first time, how can you be sure it's the right one and not a tampered system?

SSH uses what we call TOFU (Trust On First Use), when you connect to a remote server for the first time, you have a key fingerprint displayed, and you are asked if you want to trust it or not.  Without any other information, you can either blindly trust it or deny it and not connect.  If you trust it, the key's fingerprint is stored locally in the file `known_hosts`, and if the remote server offers you a different key later, you will be warned and the connection will be forbidden because the server may have been replaced by a malicious one.

Let's try an analogy.  It's a bit like if you only had a post-it with, supposedly, your bank phone number on it, but you had no way to verify if it was really your bank on that number.  This would be pretty bad.  However, using an up-to-date trustable public reverse lookup directory, you could check that the phone number is genuine before calling.

What we can do to improve the TOFU situation is to publish the server's SSH fingerprint over DNS, so when you connect, SSH will try to fetch the fingerprint if it exists and compare it with what the server is offering.  This only works if the DNS server uses DNSSEC, which guarantees the DNS answer hasn't been tampered with in the process.  It's unlikely that someone would be able to simultaneously hijack your SSH connection to a different server and also craft valid DNSSEC replies.

# Setup

The setup is really simple: we need to gather the fingerprints of each key (there is one per key algorithm) on a server, securely, and publish them as SSHFP DNS entries.

If the server has new keys, you need to update its SSHFP entries.

We will use the tool `ssh-keygen` which contains a feature to automatically generate the DNS records for the server on which the command is running.

For example, on my server `interbus.perso.pw`, I will run `ssh-keygen -r interbus.perso.pw.` to get the records:

$ ssh-keygen -r interbus.perso.pw.
interbus.perso.pw. IN SSHFP 1 1 d93504fdcb5a67f09d263d6cbf1fcf59b55c5a03
interbus.perso.pw. IN SSHFP 1 2 1d677b3094170511297579836f5ef8d750dae8c481f464a0d2fb0943ad9f0430
interbus.perso.pw. IN SSHFP 3 1 98350f8a3c4a6d94c8974df82144913fd478efd8
interbus.perso.pw. IN SSHFP 3 2 ec67c81dd11f24f51da9560c53d7e3f21bf37b5436c3fd396ee7611cedf263c0
interbus.perso.pw. IN SSHFP 4 1 cb5039e2d4ece538ebb7517cc4a9bba3c253ef3b
interbus.perso.pw. IN SSHFP 4 2 adbcdfea2aee40345d1f28bc851158ed5a4b009f165ee6aa31cf6b6f62255612


You certainly noticed I used an extra dot: this is because these lines will be used as DNS records, so either:



If you use `interbus.perso.pw` without the trailing dot, the records would be for the domain `interbus.perso.pw.perso.pw`, because it would be treated as a subdomain of the zone.

Note that the `-r` argument is only used as raw text in the output; it doesn't make `ssh-keygen` fetch the keys from a remote host.

Now, just add each of the generated entries to your DNS.
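
To check that the records are published correctly, you can query them; a quick sketch assuming the `dig` tool is available:

dig +short SSHFP interbus.perso.pw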

# How to use SSHFP on your OpenSSH client

By default, if you connect to my server, you should see this output:

ssh interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?


It's telling you the server isn't known in `known_hosts` yet, and you have to trust it (or not, but you wouldn't connect).

However, with the option `VerifyHostKeyDNS` set to yes, the fingerprint will automatically be accepted if the one offered is found in an SSHFP entry.

As I explained earlier, this only works if the DNS answer is valid with regard to DNSSEC; otherwise, the setting "VerifyHostKeyDNS" automatically falls back to "ask", asking you to manually check the SSHFP record found in DNS and decide whether to accept it or not.

For example, without a working DNSSEC, the output would look like this:

$ ssh -o VerifyHostKeyDNS=yes interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
Matching host key fingerprint found in DNS.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?


With a working DNSSEC, you should immediately connect without any TOFU prompt, and the host fingerprint won't be stored in `known_hosts`.
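
Instead of passing `-o` on every invocation, you can enable the option in your client configuration, for instance in `~/.ssh/config`:

# enable SSHFP verification for all hosts
Host *
    VerifyHostKeyDNS yes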

# Conclusion

SSHFP is a simple mechanism to build a chain of trust using an external service to authenticate the server you are connecting to.  Another method to authenticate a remote server would be to use an SSH certificate, but I'll keep that one for later.

# Going further

We saw that VerifyHostKeyDNS is reliable, but it doesn't save the fingerprint in the file `~/.ssh/known_hosts`, which can be an issue if you later need to connect to the same server without a working DNSSEC resolver: you would have to trust the server blindly.

However, you could generate the required output from the server and add it to known_hosts while you have DNSSEC working, so next time you won't rely on DNSSEC only.

Note that if the server is replaced by another one and its SSHFP records updated accordingly, this will ask you what to do if you have the old keys in known_hosts.

To gather the fingerprints, connect to the remote server, which will be `remote-server.local` in the example, and add the command output to your known_hosts file:

ssh-keyscan localhost 2>/dev/null | sed 's/^localhost/remote-server/'


We omit the `.local` in the `remote-server.local` hostname because it's a subdomain of the DNS zone. (thanks Francisco Gaitán for spotting it).

Basically, `ssh-keyscan` can gather keys remotely, but we want the local keys of the server, so we need to modify its output to replace localhost with the actual server name used to ssh into it.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/sshfp-dns-entries.gmi</guid>
  <link>gemini://perso.pw/blog//articles/sshfp-dns-entries.gmi</link>
  <pubDate>Wed, 09 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Turning a 15-year-old laptop into a childproof retrogaming station</title>
  <description>
    <![CDATA[
<pre># Introduction

This article explains a setup I made for our family vacation place: I wanted to turn an old laptop (a Dell Vostro 1500 from 2008) into a retrogaming station.  That's actually easy to do, but I also wanted to make it "childproof" so it will always work, even if children are left alone with the laptop for a moment; that part was way harder.

This is not a tutorial explaining everything from A to Z, but mostly what worked / didn't work from my experimentation.

# Choosing an OS

The first step is to pick an operating system.  I wanted to use Alpine with the persistent mode I described last week, which would allow having nothing persistent except the ROM files.  Unfortunately, the Retroarch packages on Alpine were missing the cores I wanted, so I dropped Alpine.  A Retroarch core is the library required to emulate a given platform/console.

Then, I wanted to give FreeBSD a try before switching to a more standard Linux system (Alpine uses the musl libc, which makes it "non-standard" for my use case).  The setup was complicated as FreeBSD barely does anything by itself at install time, but after I got a working desktop, Retroarch had an issue: I couldn't launch any game even though the cores were loaded.  I can't explain why this wasn't working, everything seemed fine.  On top of that issue, gamepad support was really hit or miss, so I gave up.

Finally, I installed Debian 12 using the netinstall ISO, and without installing any desktop and graphical server like X or Wayland, just a bare Debian.

# Retroarch on a TTY

To achieve a more childproof environment, I decided to run Retroarch directly from a TTY, without a graphical server.

This removes a lot of issues:



In addition to all the benefits listed above, this also reduces the emulation latency, and makes the system lighter by not having to render through X/Wayland.  I had to install the retroarch package and some GL / Vulkan / Mesa / SDL2 related packages to get it working.

One major pain point was figuring out a way to start Retroarch on tty1 at boot.  This is actually really hard, especially since it must start under a dbus session to have all features enabled.

My solution is a hack, but good enough for the use case.  I overrode the getty@tty1 service to automatically log in the user, and modified the user's `~/.bashrc` file to exec retroarch.  If Retroarch quits, tty1 is reset and Retroarch starts again, and you can't escape it.
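
Here is a rough sketch of one way to do that hack on a systemd-based Debian; the user name `player` is hypothetical, adapt it to your setup:

# /etc/systemd/system/getty@tty1.service.d/override.conf
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin player --noclear %I $TERM

# appended to /home/player/.bashrc: replace the shell with retroarch on tty1
if [ "$(tty)" = "/dev/tty1" ]; then
    exec retroarch
fi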

# Retroarch configuration

I can't describe all the tweaks I did in Retroarch, some were pure enhancement, some were for "hardening".  Here is a list of things I changed:



In addition to all of that, there is a lovely kiosk mode.  This basically allows you to password-protect all the settings in Retroarch: once you are done with the configuration, enable the kiosk mode and nothing can be changed (except adding a ROM to favorites).

# Extra settings

I configured a few more extra things to make the experience more childproof.

## Grub config

GRUB can be a major issue if a child boots up the laptop and presses a key at GRUB time.  Just set `GRUB_TIMEOUT=0` to disable the menu prompt; it will boot directly into Debian.
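
On Debian, that's a one-line change in `/etc/default/grub`, followed by regenerating the GRUB configuration:

# /etc/default/grub
GRUB_TIMEOUT=0

# then apply the change
update-grub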

## Disabled networking

The computer doesn't need to connect to any network, so I disabled all the network-related services; this reduced the boot time by a few seconds and prevents anything weird from happening.

## Bios lock

It may be wise to lock the BIOS, so that children who know how to boot something on a computer wouldn't even be able to do that.  This also prevents mistakes in the BIOS, so better be careful.  Don't lose that password.

## Plymouth splash screen

If you want your gaming console to have this extra touch that turns the boring and scary boot process text into something cool, you can use Plymouth.

I found a nice splash screen featuring Optimus Prime's head from Transformers displayed while the system is booting, and it looks pretty cool!  Surely, this gives the system some charm and persona compared to the systemd boot process.  It delays the boot by a few seconds though.

# Conclusion

Retroarch is a fantastic piece of software for emulation, and you can even run it from a TTY for lower latency.  Its controller mapping is really smart: you configure each controller against some kind of "reference" controller, and then each core has a map from the reference controller to the controller of the console you are emulating.  This means you don't have to map your controller for each console, just once.

Making a childproof kiosk computer wasn't easy.  I'm sure there is room for improvement, but I'm happy that I turned a 15-year-old laptop into something useful that will bring joy to kids and memories to adults, without the fear of the system being damaged by kids (except physical damage, but hey, I won't put the thing in a box).

Now, I have to do a paint job on the back of the screen so the laptop looks bright and shiny :)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/childproof-retrogaming-station.gmi</guid>
  <link>gemini://perso.pw/blog//articles/childproof-retrogaming-station.gmi</link>
  <pubDate>Fri, 28 Jul 2023 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>