
<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>gemini://perso.pw/blog/</link>
    <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>My top 20 video games</title>
  <description>
    <![CDATA[
<pre># Introduction

I wanted to share my list of favorite games of all time.  Making the list wasn't easy, so I set some rules to help me decide.

Here are the criteria:



Trivia: I'm not a huge gamer.  I still play many games nowadays, but I only play each of them for a couple of hours to see what they have to offer in terms of gameplay and mechanics, and whether they are innovative in some way.  If a game is able to surprise me or give me something new, I may spend a bit more time on it.

# My top 20

Here is the list of the top 20 games I enjoyed, and which I'd happily play again anytime.

I tried to single out some games as a bit better than the others, so there is my top 3, top 10, and top 20.  I haven't been able to rank them from 1 to 20, so I just made tiers.

## Top 20 to 11

### Heroes of Might and Magic III

=> https://www.gog.com/fr/game/heroes_of_might_and_magic_3_complete_edition Product page on GOG

I spent so many hours playing with my brother or friends, sharing the mouse each turn so everyone could play on a single computer.

And the social factor wasn't the only nice part: the game itself is cool, there are many different factions to play, and winning takes real strategy.  A must-have.

### Saturn Bomberman

=> https://retrospiritgames.blogspot.com/2013/11/retro-review-saturn-bomberman-saturn.html Game review

The Sega Saturn wasn't very popular, but it had some good games, and one of them is Saturn Bomberman.  Of all the games in the Bomberman franchise, this one really looks like the best: it featured dinosaurs with unique abilities that could grow up, weird items, and many maps.

And it had an excellent campaign that took long to finish, and could be played in coop!  The campaign was really top notch for this kind of game, with unique items you couldn't find in multiplayer.

### Tony Hawk's Pro Skater 1 and 2

=> https://store.epicgames.com/fr/p/tony-hawks-pro-skater-1-and-2 Product page on Epic Game Store

I guess this is a classic.  I played the Nintendo 64 version a lot, and now we have both games in one, with a high refresh rate, HD textures, and still the same good music.

A chill game that is always fun to play.

### Risk of Rain 2

=> https://store.steampowered.com/app/632360/Risk_of_Rain_2/ Product page on Steam

A pure rogue-like that shines in multiplayer: lots of classes, weapons, items, enemies, and lots of fun.

While it's not the kind of game I'd play all day, I'm always up for a run or two.

### Warhammer 40K: Dawn of War

=> https://store.steampowered.com/app/4580/Warhammer_40000_Dawn_of_War__Dark_Crusade/ Product page on Steam (Dark Crusade)

This may sound like heresy, but I never played the campaign of this game.  I just played skirmishes or multiplayer with friends, and with the huge choice of factions with different gameplay, it's always cool, even if the graphics have aged a bit.

Being able to send a dreadnought from space directly into the ork base, or send legions of necrons at that Tau player, is always a source of joy.

### Street Fighter 2 Special Champion Edition

=> https://www.youtube.com/watch?v=sPTb1nvRg4s Video review on YouTube

A classic on the Megadrive/Genesis: it's smooth, with so many characters and stages, and an incredible soundtrack.  The combos were easy to remember, just enough to give each character their own identity and let players onboard quickly.

Maybe the Super NES version is superior, but I always played it on the Megadrive.

### Slay the Spire

=> https://www.gog.com/fr/game/slay_the_spire Product page on GOG

Maybe the game that demonstrated deck-based video games can be great.

You play a character with a set of skills as cards, gathering items while climbing a tower.  It can get a bit repetitive over time, but the game itself is good, and doing a run occasionally is always tempting.

The community made a lot of mods, even adding new characters with very specific mechanics.  I highly recommend it to anyone looking for a card-based game.

### Monster Hunter 4 Ultimate

=> https://www.ign.com/articles/2015/02/10/monster-hunter-4-ultimate-review Game review on IGN

My first Monster Hunter game, on 3DS.  I absolutely loved it: insane fights against beloved monsters (we need to study them carefully, so we need to hunt a lot of them :P).

While Monster Hunter World brought better graphics and smoother gameplay, I still prefer the more rigid entries like MH4U or MH Generations Ultimate.

The 3D effect on the console worked quite well too!

### Peggle Nights

=> https://store.steampowered.com/app/3540/Peggle_Nights/ Product page on Steam

A simple arcade game with extra powers depending on the character you pick.  It's really addictive despite the gameplay being simplistic.

### Monster Train

=> https://www.gog.com/fr/game/monster_train Product page on GOG

A very good card game with multiple factions, but not like Slay the Spire.

There are lots of combos to create since cards persist within the train, and runs don't depend that much on RNG (random number generation), which makes it a great game.

## Top 10 to 4

Not ranked; let's enter the top 10, up to just before the top 3.

### Call of Cthulhu: Prisoner of Ice

=> https://www.gog.com/fr/game/call_of_cthulhu_prisoner_of_ice Product page on GOG

One of the first PC games I played, when I was 6.  I'm not usually into point & click games, but this one features Lovecraft horrors, so it's good :)

### The Elder Scrolls IV: Oblivion

=> https://www.gog.com/fr/game/elder_scrolls_iv_oblivion_game_of_the_year_edition_deluxe_the Product page on GOG

A classic among RPGs.  I wanted to put an Elder Scrolls game on the list, and I went with Oblivion.  In my opinion, it was the coolest one compared to Morrowind or Skyrim.  I have to say, I hesitated with Morrowind, but because of all of Morrowind's flaws and issues, Oblivion is the better game.  Skyrim was just bad for me: really boring and not interesting.

Oblivion gave you the opportunity to discover many cities, with a day/night cycle and NPCs that had homes and went to work during the day.  The game was incredible when it was released, and I think it's still really good.

Trivia: I never did the story of Morrowind or Oblivion, and yet I spent a lot of time playing them!

### Shining the Holy Ark

=> https://www.youtube.com/watch?v=MF2q28fWRzA Video review on YouTube

Another Sega Saturn game, almost unknown to the public I guess.  While not a Shining Force game, it's part of the franchise.

It's an RPG / dungeon crawler in first-person view, in which you move from tile to tile and sometimes fight monsters with your team.

### Into the Breach

=> https://www.gog.com/fr/game/into_the_breach Product page on GOG

The greatest puzzle game I ever played.  It's like chess, but actually fun: moving some mechas on a small tiled board when it's your turn, you must think about everything that will happen and in which order.

The number of mechas and pieces of equipment you find in the game makes it really replayable, and game sessions can be short, so it's always tempting to start yet another run.

### Like a Dragon

=> https://www.gog.com/fr/game/yakuza_like_a_dragon Product page on GOG

My first Yakuza / Like a Dragon game.  I didn't really know what to expect, and I was happy to discover it!

A Japanese turn-based RPG featuring the most stupid skills and quests I've ever seen.  The story was really engaging, and unlocking new jobs / characters leads to even more stupidity all around.

### Secret of Mana

=> https://www.ign.com/articles/2008/10/14/secret-of-mana-review Game review on IGN

A Super NES classic, and it was possible to play in coop with a friend!

The game had so much content: lots of weapons, magic, and monsters, and the soundtrack is just incredible all along.  Even better, at some point in the game you get the opportunity to travel by riding a dragon in a 3D view over the planet!

I start and finish this game every few years!

### Baldur's Gate 3

=> https://www.gog.com/fr/game/baldurs_gate_iii Product page on GOG

At the moment, it's the best RPG I've played, and it's turn-based, just how I like them.

I'd have added Neverwinter Nights, but BG3 does better than it in every way, so I kept BG3 instead.

Every new playthrough can go very differently from the previous one; there are so many possibilities out there, it's quite the next level of RPG compared to what we had before.

## Top 3

And finally, not ranked but my top 3 of my favorite games!

### Factorio

=> https://www.gog.com/fr/game/factorio Product page on GOG

After hesitating between Factorio and Dyson Sphere Program for the list, I chose to keep Factorio: DSP is really good, but I can't see myself starting it again and again like Factorio.  DSP has a very, very slow beginning, while Factorio delivers fun much faster.

Factorio invented a new genre of game: automation.  I get crazy with automation and optimization.  It's like doing computer stuff in a game: everything is clean and can be calculated, and I could stare at conveyor belts transporting stuff like I could stare at Gentoo compilation logs for hours.  The game is so deep, you can do crazy things, even more when you get into the logic circuits.

While I finished the game, I'm always up for a new world with some goals, and the modding community added a lot of high-quality content.

The only issue with this game is that it's hard to stop playing.

### Streets of Rage 4

=> https://www.gog.com/fr/game/streets_of_rage_4 Product page on GOG

While I played Streets of Rage 2 a lot more than the 4th, I think this modern version is just better.

You can play with a friend almost immediately, the fun is there, and brawling bad guys is pretty cool.  The music is good, the character roster is complete; it's just 100% fun to play again and again.

### Outer Wilds

=> https://store.steampowered.com/app/753640/Outer_Wilds/ Product page on Steam

That's one game I wish I could forget, just so I could play it again...

It gave me a truly unique experience as a gamer.

It's an adventure game featuring a 15-minute time loop; the only thing you acquire in the game is knowledge, in your own mind.  With that knowledge, you can complete the game in different ways, but first, you need to find clues leading to other clues, leading to pieces of the whole puzzle.

# Games that I couldn't put in the list

There are some games I really enjoyed, but for various reasons I haven't been able to put them on the list; it could be replayability issues, or maybe a nostalgia factor that was too high?


</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/my-top20-favorite-video-games.gmi</guid>
  <link>gemini://perso.pw/blog//articles/my-top20-favorite-video-games.gmi</link>
  <pubDate>Thu, 31 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>OpenBSD vmm and qcow2 derived disks</title>
  <description>
    <![CDATA[
<pre># Introduction

Let me show you a very practical feature of the qcow2 virtual disk format, available in OpenBSD vmm, that lets you easily create derived disks from an original image (also called delta disks).

A derived disk image is a new storage file that inherits all the data from the original file, without ever modifying the original.  It's like stacking a fresh new disk on top of the previous one, with all the changes now written to the new one.

This allows interesting use cases, such as using a golden image to provide a base template, like a fresh OpenBSD install, or creating temporary disks to try changes without harming the original file (and without having to back up a potentially huge file).

This is NOT OpenBSD specific: it's a feature of the qcow2 format, so while this guide uses OpenBSD as an example, it will work wherever qcow2 can be used.

=> https://man.openbsd.org/vmctl#b OpenBSD vmctl man page: -b flag

# Setup

First, you need a qcow2 file with something installed in it; let's say you already have a virtual machine with its storage file `/var/lib/vmm/alpine.qcow2`.

We will create a derived file `/var/lib/vmm/derived.qcow2` using the `vmctl` command:

vmctl create -b /var/lib/vmm/alpine.qcow2 /var/lib/vmm/derived.qcow2


That's it!  Now you have a new disk that already inherits all the original file's data, without ever modifying it.
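
You can then boot a virtual machine from the derived disk as usual; here is a minimal sketch, where the memory size and the VM name "testvm" are arbitrary (check vmctl(8) for the exact flags on your release):

vmctl start -c -m 1G -d /var/lib/vmm/derived.qcow2 testvm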

# Limitations

The derived disk will stop working if the original file is modified, so once you make derived disks from a base image, you shouldn't modify the base image anymore.

However, it's possible to merge changes from a derived disk back into the base image using the `qemu-img` command:

=> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-re_basing_a_backing_file_of_an_image Red Hat documentation: Rebasing a Backing File of an Image
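
For instance, `qemu-img commit` folds the derived disk's changes back into its backing file; a sketch, to run only while the VM is stopped, and keep in mind this modifies the base image, breaking any other disk derived from it:

qemu-img commit /var/lib/vmm/derived.qcow2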

# Conclusion

Derived images can be useful in some scenarios: if you have an image and want to experiment without making a full backup, just use a derived disk.  If you want to provide a golden image as a starting point, like an installed OS, this works too.

One use case I had was with OpenKuBSD: I had a single OpenBSD install as a base image, and each VM had a derived disk as its root, removed and recreated at every boot, plus a dedicated disk for /home.  This allows me to keep all the VMs clean, while having just a single system to manage.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-vmm-templates.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-vmm-templates.gmi</link>
  <pubDate>Thu, 31 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Manipulate PDF files easily with pdftk</title>
  <description>
    <![CDATA[
<pre># Introduction

I often need to work with PDFs: sometimes I need to extract a single page or add a page, and too often I need to rotate pages.

Fortunately, there is a pretty awesome tool to do all of these tasks, it's called PDFtk.

=> https://gitlab.com/pdftk-java/pdftk pdftk official project website

# Operations

The pdftk command line isn't the most obvious out there, but it's not that hard.

## Extracting a page

Extracting a page requires the `cat` sub command, and we need to give a page number or a range of pages.

For instance, extracting page 11 and pages 16 to 18 from the file my_pdf.pdf into a new file export.pdf can be done with the following command:

pdftk my_pdf.pdf cat 11 16-18 output export.pdf


## Merging PDF into a single PDF

Merging multiple PDFs into a single PDF also uses the sub command `cat`.  In the following example, you will concatenate the PDFs first.pdf and second.pdf into a merged.pdf result:

pdftk first.pdf second.pdf cat output merged.pdf


Note that they are concatenated in the order given on the command line.

## Rotating PDF

Pdftk comes with a very powerful way to rotate PDF pages.  You can specify pages or ranges of pages to rotate, the whole document, or only odd/even pages, etc.

If you want to rotate all the pages of a PDF clockwise (east), you need to specify the range `1-end`, which means first to last page:

pdftk input.pdf rotate 1-endeast output rotated.pdf


If you want to select even or odd pages, you can add the keyword `even` or `odd` between the range and the rotation direction: `1-10oddwest` or `2-8eveneast` are valid rotations.
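
For example, to rotate only the odd pages among the first ten counter-clockwise (west):

pdftk input.pdf rotate 1-10oddwest output odd_rotated.pdf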

## Reversing the page ordering

If you want to reverse the page order of your PDF, you can use the special range `end-1`, which goes through the pages from the last to the first one; with the sub command `cat`, this simply recreates a new PDF in reverse order:

pdftk input.pdf cat end-1 output reversed.pdf


# Conclusion

Pdftk has some other commands; most people will only need to extract / merge / rotate pages, but take a look at the documentation to learn about all pdftk features.

PDFs are usually a pain to work with, but pdftk makes it very fast and easy to apply transformations to them.  What a great tool :-)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/pdftk-guide.gmi</guid>
  <link>gemini://perso.pw/blog//articles/pdftk-guide.gmi</link>
  <pubDate>Tue, 22 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Migrating prosody internal storage to SQLite on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

As some may know, I'm an XMPP user; XMPP is an instant messaging protocol which used to be known as Jabber.  My server runs the Prosody XMPP server on OpenBSD.  Recently, I got more users on my server, and I wanted to improve performance a bit by switching from the internal storage to SQLite.

Prosody actually comes with a tool to switch from one storage backend to another, but I found the documentation lacking, and on OpenBSD the migration tool isn't packaged (yet?).

The switch to SQLite drastically reduced prosody's CPU usage on my small server, and went pain-free.

=> https://prosody.im/doc/migrator Prosody documentation: Prosody migrator

# Setup

For the migration to be done, you will need a few prerequisites:



On OpenBSD, the migration tool can be retrieved by downloading the prosody sources.  If you have the ports tree available, just run `make extract` in `net/prosody` and cd into the newly extracted directory.  The directory path can be retrieved using `make show=WRKSRC`.

The migration tool can be found in the subdirectory `tools/migration` of the sources; the program `gmake` is required to build it (the build only replaces a few variables in a script, so no worry about a complex setup).

In the migration directory, run `gmake`: you will obtain the migration tool `prosody-migrator.install`, which is the program you will run to do the migration.

# Prepare the configuration file

In the migration directory, you will find a file `migrator.cfg.lua.install`; this is a configuration file describing your current prosody deployment and what you want from the migration.  It defaults to a conversion from "internal" to "sqlite", which is what most users will want in my opinion.

Make sure the variable `data_path` in the file refers to `/var/prosody`, which is the default directory on OpenBSD, and check the hosts in the "input" part, which describes the current storage.  By default, the new storage will be in `/var/prosody/prosody.sqlite`.


# Run the tool

Once you have the migrator and its configuration file, it's super easy to proceed: stop prosody, run the migrator, then point prosody at the new storage.
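
Here is a sketch of the migration run; the store names "input" and "output" come from the sample configuration file, and the exact invocation may differ slightly on your system:

doas rcctl stop prosody
doas ./prosody-migrator.install --config=./migrator.cfg.lua.install input output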
storage = "sql"

sql = {

driver = "SQLite3";

database = "prosody.sqlite";

}




If you get an error at the migration step, check the logs carefully to see if you missed something, maybe a bad path.

# Conclusion

Prosody comes with a migration tool to switch from one storage backend to another; that's very handy when you didn't think about scaling the system correctly at first.

The migrator can also be used to migrate from the server ejabberd to prosody.

Thanks prx for your report about some missing steps!
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/prosody-migrating-to-sqlite.gmi</guid>
  <link>gemini://perso.pw/blog//articles/prosody-migrating-to-sqlite.gmi</link>
  <pubDate>Mon, 21 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Some explanations about OpenBSD memory usage</title>
  <description>
    <![CDATA[
<pre># Introduction

I regularly see people reporting high memory usage on OpenBSD when looking at some monitoring program output.

Those programs may not be reporting what you think.  Memory usage can be accounted in different ways.

Most of the time, the file system cache stored in memory is counted as memory usage, which leads people to think memory consumption is high.

# How to figure out the real memory usage?

Here are a few methods to gather the used memory.

## Using ps

You can actually use `ps`, sum the RSS column, and display the result in megabytes:

ps auwxx | awk '{ sum+=$6 } END { print sum/1024 }'


You could sum the 5th column instead if you want the virtual memory, which can be way higher than your system memory (hence why it's called virtual).
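
The same one-liner, summing the virtual memory column instead:

ps auwxx | awk '{ sum+=$5 } END { print sum/1024 }'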

## Using top

When running `top` in interactive mode, you can find a memory line at the top of the output, like this:

Memory: Real: 244M/733M act/tot Free: 234M Cache: 193M Swap: 158M/752M


This means there are 244 MB of memory currently in use, and 158 MB in the swap file.

The cache column displays how much file system data is cached in memory.  This is extremely useful: every time you open a program that is already in the cache, the system avoids seeking it on the storage media, which is way faster.  This memory is freed when needed if there is not enough free memory available.

The "free" column only tell you that this ram is completely unused.

The number 733M indicates the total real memory, which includes memory in use that could be freed if required; however, if someone finds a clearer explanation, I'd be happy to read it.

## Using systat

The command `systat` is OpenBSD specific, often overlooked but very powerful.  It has many displays you can switch between using the left/right arrows; each aspect of the system has its own display.

The default display has a "memory totals in (KB)" area about your real, free or virtual memory.

# Going further

Inside the kernel, the memory naming is different, and there are extra categories.  You can find them in the kernel file `sys/uvm/uvmexp.h`:

=> https://github.com/openbsd/src/blob/master/sys/uvm/uvmexp.h#L56-L62 GitHub page for sys/uvm/uvmexp.h lines 56 to 62

# Conclusion

When one looks at OpenBSD memory usage, it's better to understand the various fields before reporting a wrong amount, or claiming that OpenBSD uses too much memory.  But we have to admit the documentation explaining each field is quite lacking.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-understand-memory-usage.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-understand-memory-usage.gmi</link>
  <pubDate>Tue, 15 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Authenticate the SSH servers you are connecting to</title>
  <description>
    <![CDATA[
<pre># Introduction

It's common knowledge that SSH connections are secure; however, they always had a flaw: when you connect to a remote host for the first time, how can you be sure it's the right one and not a tampered system?

SSH uses what we call TOFU (Trust On First Use), when you connect to a remote server for the first time, you have a key fingerprint displayed, and you are asked if you want to trust it or not.  Without any other information, you can either blindly trust it or deny it and not connect.  If you trust it, the key's fingerprint is stored locally in the file `known_hosts`, and if the remote server offers you a different key later, you will be warned and the connection will be forbidden because the server may have been replaced by a malicious one.

Let's try an analogy.  It's a bit like if you only had a post-it with, supposedly, your bank phone number on it, but you had no way to verify if it was really your bank on that number.  This would be pretty bad.  However, using an up-to-date trustable public reverse lookup directory, you could check that the phone number is genuine before calling.

What we can do to improve the TOFU situation is to publish the server's SSH fingerprint over DNS, so when you connect, SSH will try to fetch the fingerprint if it exists and compare it with what the server is offering.  This only works if the DNS server uses DNSSEC, which guarantees the DNS answer hasn't been tampered with in the process.  It's unlikely that someone would be able to simultaneously hijack your SSH connection to a different server and also craft valid DNSSEC replies.

# Setup

The setup is really simple: we need to securely gather the fingerprints of each key on the server (there is one key per cryptographic algorithm), and publish them as SSHFP DNS entries.

If the server has new keys, you need to update its SSHFP entries.

We will use the tool `ssh-keygen` which contains a feature to automatically generate the DNS records for the server on which the command is running.

For example, on my server `interbus.perso.pw`, I will run `ssh-keygen -r interbus.perso.pw.` to get the records:

$ ssh-keygen -r interbus.perso.pw.
interbus.perso.pw. IN SSHFP 1 1 d93504fdcb5a67f09d263d6cbf1fcf59b55c5a03
interbus.perso.pw. IN SSHFP 1 2 1d677b3094170511297579836f5ef8d750dae8c481f464a0d2fb0943ad9f0430
interbus.perso.pw. IN SSHFP 3 1 98350f8a3c4a6d94c8974df82144913fd478efd8
interbus.perso.pw. IN SSHFP 3 2 ec67c81dd11f24f51da9560c53d7e3f21bf37b5436c3fd396ee7611cedf263c0
interbus.perso.pw. IN SSHFP 4 1 cb5039e2d4ece538ebb7517cc4a9bba3c253ef3b
interbus.perso.pw. IN SSHFP 4 2 adbcdfea2aee40345d1f28bc851158ed5a4b009f165ee6aa31cf6b6f62255612


You certainly noticed I used an extra dot: this is because these will be used as DNS records, so either:



If you use `interbus.perso.pw` without the dot, this would be for the domain `interbus.perso.pw.perso.pw` because it would be treated as a subdomain.

Note that the `-r` argument is used for nothing but the raw text in the output; it doesn't make `ssh-keygen` fetch the keys of a remote host.

Now, just add each of the generated entries to your DNS.
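
You can verify the records are reachable from your resolver using `dig`:

dig +short SSHFP interbus.perso.pw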

# How to use SSHFP on your OpenSSH client

By default, if you connect to my server, you should see this output:

ssh interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?


It's telling you the server isn't known in `known_hosts` yet, and you have to trust it (or not, but then you wouldn't connect).

However, with the option `VerifyHostKeyDNS` set to yes, the fingerprint will automatically be accepted if the one offered is found in an SSHFP entry.
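
The option can be enabled on the command line as shown below, or permanently in `~/.ssh/config`:

Host *
    VerifyHostKeyDNS yes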

As I explained earlier, this only works if the DNS answer is valid with regard to DNSSEC; otherwise, the setting "VerifyHostKeyDNS" automatically falls back to "ask", asking you to manually check the SSHFP record found in DNS and decide whether to accept it or not.

For example, without a working DNSSEC, the output would look like this:

$ ssh -o VerifyHostKeyDNS=yes interbus.perso.pw
The authenticity of host 'interbus.perso.pw (46.23.92.114)' can't be established.
ED25519 key fingerprint is SHA256:rbzf6iruQDRdHyi8hRFY7VpLAJ8WXuaqMc9rb2IlVhI.
Matching host key fingerprint found in DNS.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?


With a working DNSSEC, you should immediately connect without any TOFU prompt, and the host fingerprint won't be stored in `known_hosts`.

# Conclusion

SSHFP is a simple mechanism to build a chain of trust using an external service to authenticate the server you are connecting to.  Another method to authenticate a remote server would be to use an SSH certificate, but I'll keep that one for later.

# Going further

We saw that VerifyHostKeyDNS is reliable, but it doesn't save the fingerprint in the file `~/.ssh/known_hosts`, which can be an issue if you later need to connect to the same server without a working DNSSEC resolver: you would have to blindly trust the server.

However, you could generate the required known_hosts entries from the server while DNSSEC is working, so next time, you won't rely only on DNSSEC.

Note that if the server is replaced by another one and its SSHFP records updated accordingly, this will ask you what to do if you have the old keys in known_hosts.

To gather the fingerprints, connect to the remote server, which will be `remote-server.local` in the example, and add the command output to your known_hosts file:

ssh-keyscan localhost 2>/dev/null | sed 's/^localhost/remote-server/'


We omit the `.local` in the `remote-server.local` hostname because it's a subdomain of the DNS zone. (thanks Francisco Gaitán for spotting it).

Basically, `ssh-keyscan` can gather keys remotely, but we want the local keys of the server, so we need to modify its output to replace localhost with the actual server name used to ssh into it.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/sshfp-dns-entries.gmi</guid>
  <link>gemini://perso.pw/blog//articles/sshfp-dns-entries.gmi</link>
  <pubDate>Wed, 09 Aug 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Turning a 15 years old laptop into a children proof retrogaming station</title>
  <description>
    <![CDATA[
<pre># Introduction

This article explains a setup I made for our family vacation place: I wanted to turn an old laptop (a Dell Vostro 1500 from 2008) into a retrogaming station.  That's actually easy to do, but I wanted to make it "childproof", so it keeps working even if we leave children alone with the laptop for a moment; that part was way harder.

This is not a tutorial explaining everything from A to Z, but mostly what worked / didn't work from my experimentation.

# Choosing an OS

The first step is to pick an operating system.  I wanted to use Alpine, with the persistent mode I described last week; this would allow having nothing persistent except the ROM files.  Unfortunately, the Retroarch packages on Alpine were missing the cores I wanted, so I dropped Alpine.  A Retroarch core is the library required to emulate a given platform/console.

Then, I wanted to give FreeBSD a try before switching to a more standard Linux system (Alpine uses the musl libc, which makes it "non-standard" for my use case).  The setup was complicated, as FreeBSD barely does anything by itself at install time, but after I got a working desktop, Retroarch had an issue: I couldn't launch any game even though the cores were loaded.  I can't explain why this wasn't working, everything seemed fine.  On top of this issue, gamepad support was really random, so I gave up.

Finally, I installed Debian 12 using the netinstall ISO, and without installing any desktop and graphical server like X or Wayland, just a bare Debian.

# Retroarch on a TTY

To achieve a more childproof environment, I decided to run Retroarch directly from a TTY, without a graphical server.

This removes a lot of issues:



In addition to all the benefits listed above, this also reduces the emulation latency, and makes the system lighter by not having to render through X/Wayland.  I had to install the retroarch package and some GL / vulkan / mesa / sdl2 related packages to get it working.

One major painful issue was figuring out a way to start retroarch on tty1 at boot.  This is actually really hard, especially since it must start within a dbus session to have all features enabled.

My solution is a hack, but good enough for the use case.  I overrode the getty@tty1 service to automatically log in the user, and modified the user's `~/.bashrc` file to exec retroarch.  If retroarch quits, tty1 is reset and retroarch starts again: you can't escape it.
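
Here is a sketch of the two pieces; the username "player" is hypothetical, and the agetty path may differ on your system:

# /etc/systemd/system/getty@tty1.service.d/override.conf
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin player --noclear %I $TERM

# appended to ~player/.bashrc
if [ "$(tty)" = "/dev/tty1" ]; then
    exec dbus-run-session retroarch
fi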

# Retroarch configuration

I can't describe all the tweaks I did in retroarch, some were for pure enhancement, some for "hardening".  Here is a list of things I changed:



In addition to all of that, there is a lovely kiosk mode.  This basically allows you to password protect all the settings in Retroarch: once you are done with the configuration, enable the kiosk mode, and nothing can be changed (except adding a ROM to the favorites).
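
The kiosk mode ends up as two settings in `retroarch.cfg`; treat these option names as an assumption and prefer enabling it from the menu:

kiosk_mode_enable = "true"
kiosk_mode_password = "mypassword"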

# Extra settings

I configured a few more things to make the experience more childproof.

## Grub config

Grub can be a major issue if a child boots up the laptop and presses a key at the grub menu.  Just set `GRUB_TIMEOUT=0` to disable the menu prompt; it will boot straight into Debian.
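
On Debian, this is done in `/etc/default/grub`, followed by regenerating the grub configuration:

# in /etc/default/grub
GRUB_TIMEOUT=0

# then apply the change
update-grub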

## Disabled networking

The computer doesn't need to connect to any network, so I disabled all the network-related services; this reduced the boot time by a few seconds, and will prevent anything weird from happening.
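
The exact list of services depends on what the installer set up; on a bare Debian it would be something along these lines (an assumption, check `systemctl list-unit-files` on your system):

systemctl disable --now networking.service wpa_supplicant.service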

## Bios lock

It may be wise to lock the BIOS, so children who know how to boot something on a computer wouldn't even be able to do that.  This also prevents mistakes in the BIOS; better be careful.  Don't lose that password.

## Plymouth splash screen

If you want your gaming console to have this extra thing that will turn the boring and scary boot process text into something cool, you can use Plymouth. 

I found a nice splash screen featuring Optimus Prime's head from Transformers while the system is booting; this looks pretty cool!  And surely, it gives the system some charm and persona compared to the systemd boot process.  It delays the boot by a few seconds though.

# Conclusion

Retroarch is fantastic software for emulation, and you can even run it from a TTY for lower latency.  Its controller mapping is really smart: you configure each controller once against some kind of "reference" controller, and then each core maps the reference controller to the controller of the console being emulated.  This means you don't have to map your controller for each console, just once.

Making a childproof kiosk computer wasn't easy; I'm sure there is room for improvement, but I'm happy that I turned a 15-year-old laptop into something useful that will bring joy to kids, and memories to adults, without fearing that the system will be damaged by kids (except physical damage, but hey, I won't put the thing in a box).

Now, I have to do a paint job on the part behind the screen so the laptop looks bright and shiny :)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/childproof-retrogaming-station.gmi</guid>
  <link>gemini://perso.pw/blog//articles/childproof-retrogaming-station.gmi</link>
  <pubDate>Fri, 28 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Old Computer Challenge v3: postmortem</title>
  <description>
    <![CDATA[
<pre># Challenge report

Hi!  I haven't been very communicative about my week during the Old Computer Challenge v3; the reason is that I failed it.  Time for a postmortem (an analysis of what happened) to understand the failure!

For context, the last time I used restricted hardware was for the first edition of the challenge, two years ago.  Last year's challenge was about reducing Internet connectivity.

# Wasn't prepared

I have to admit, I didn't prepare anything.  I thought I could simply limit the resources on my laptop, either on OpenBSD or openSUSE, and enjoy the challenge.  It turned out to be more complicated than that.



I had to figure out a backup plan, which turned out to be Alpine Linux installed on a USB memory stick.  The memory and core count restrictions worked out of the box; figuring out how to effectively reduce the CPU frequency was hard, but I finally did it.

From this point, I had a non-encrypted Alpine Linux on a poor storage medium.  What would I do with this?  Nothing much.

# Memory limitation

It turns out that in 2 years, my requirements evolved a bit.  512 MB isn't enough anymore to use a web browser with JavaScript, and while I thought it wouldn't be such a big deal, it WAS.

I regularly need to visit some websites, and doing it on my non-trusted smartphone is a no-go, so I need a computer.  Firefox in 512 MB just doesn't work; Chromium almost works, but it depends on the page, and WebKit browsers often didn't work well enough.

Here is a sample of websites I needed to visit:



For this reason, I often had to use my "work" computer for these tasks, and ended up inadvertently continuing on this computer :(

In addition to web browsing, some programs like LanguageTool (a Java spellcheck program with a GUI) required too much memory to even start, so I couldn't spell check my blog posts (Aspell is not as complete as LanguageTool).

# CPU limitation

When I first thought about the rules for the 3rd edition, the CPU frequency seemed to be the worst part.  In practice, the system was almost continuously swapping, but wasn't CPU bound.  Hardware acceleration was fast enough to play videos smoothly.

If you can make good use of the 512 MB of memory, you certainly won't have CPU problems.

# Security issues

This is not related to the challenge itself, but I felt a bit stuck with my untrusted Alpine Linux.  I have some ssh / GPG keys that are secured on two systems, plus my passwords; I can hardly do anything without them, and I didn't want to risk compromising my security chain for the challenge.

In fact, since I started using Qubes OS, I have become reluctant to mix all my data on a single system, even the other one I'm used to working with (which has all the credentials too).  But Qubes OS is the anti-Old Computer Challenge, as you need to throw as much hardware as you can at it to make it useful.

# Not a complete failure

However, the challenge wasn't a complete failure for me.  While I can't say I played by the rules, it definitely helped me realize how my computer use has changed over the last years.  This was the point of the "offline laptop" project I started three years ago, which transformed into the Old Computer Challenge the year after.

Since I wasn't able to fulfill the challenge requirements, I tried to use the computer less, and did some stuff IRL at home and outside.  The week went SUPER FAST; I was astonished to realize it's already over.  This also forced me to look for solutions, so I spent *a LOT* of time trying to make Firefox fit in 512 MB; TLDR, it didn't work.

The LEAST memory I'd need nowadays is 1 GB.  It's still not much compared to what we have these days (my main system has 32 GB), but it's twice the requirement I originally set.

# Conclusion

It seems everyone had a nice week with the challenge; I'm very happy to see the community enjoying this every year.  I may not be the challenge's paragon this year, but it was useful to me, and since then I couldn't stop thinking about how to improve my computer usage.

Next challenge should be two weeks long :)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part2.gmi</guid>
  <link>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part2.gmi</link>
  <pubDate>Mon, 17 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How-to install Alpine Linux in full ram with persistency</title>
  <description>
    <![CDATA[
<pre># Introduction

In this guide, I'd like to share with you how to install Alpine Linux so that it runs entirely from RAM, while using its built-in tool to handle persistency.  It's a perfect setup for a NAS or a router, so you don't waste a disk for the system, and it can even be used for a workstation.

=> https://www.alpinelinux.org Alpine Linux official project website
=> https://wiki.alpinelinux.org/wiki/Alpine_local_backup Alpine Linux wiki: Alpine local backup

# The plan

Basically, we want to get the Alpine installer onto a writable disk formatted as FAT instead of a read-only image like the official installers, then we will use the command `lbu` to handle persistency, and we will see what needs to be configured to have a working system.

This is only a list of steps, they will be detailed later:

1. boot from an Alpine installer (if you are already using Alpine, you don't need to)
2. format an usb memory drive with an ESP partition and make it bootable
3. run `setup-bootable` to copy the bootloader from the installer to the freshly formatted drive
4. reboot on the usb drive
5. run `setup-alpine`
6. you are on your new Alpine system
7. run `lbu commit` to make changes persistent across reboot
8. make changes, run `lbu commit` again

=> static/solene-lbu.png A mad scientist Girl with a t-shirt labeled "rare t-shirt" is looking at a penguin strapped on a Frankenstein like machine, with his head connected to a huge box with LBU written on it.

=> https://merveilles.town/@prahou Artwork above by Prahou

# The setup

## Booting Alpine

For this step, you have to download an Alpine Linux installer; take the one that suits your needs, and if unsure, take the "Extended" one.  Don't forget to verify the file checksum.

=> https://www.alpinelinux.org/downloads/

Once you have the ISO file, create the installation media:

=> https://docs.alpinelinux.org/user-handbook/0.1a/Installing/medium.html#_using_the_image Alpine Linux documentation: Using the image

Now, boot your system using your brand-new installer.

## Writable boot media creation

In this step, we boot the Alpine installer in order to create a new Alpine installer, but a writable one.

You need another USB drive for this step: the one that will hold your system and data.

On the Alpine installer, you can use `setup-alpine` to configure your network, keymap and a few things for the live system.  Just answer "none" when you are asked what you want to install, where, and whether you want to store the configuration somewhere.

Run the following commands on the destination USB drive (networking is required to install a package), this will format it and use all the space as a FAT32 partition.  In the example below, the drive is `/dev/sdc`.

apk add parted
parted /dev/sdc -- mklabel gpt
parted /dev/sdc -- mkpart ESP fat32 1MB 100%
parted /dev/sdc -- set 1 esp on


This creates a GPT table on `/dev/sdc`, then creates a first partition as FAT32 from the first megabyte up to the full disk size, and finally marks it bootable.  This guide is only for UEFI compatible systems.

We actually have to format the drive as FAT32, otherwise it's just a partition type without a way to mount it as FAT32:

mkfs.vfat /dev/sdc1
modprobe vfat


Final step: we use an Alpine tool to copy the bootloader from the installer to our new disk.  In the example below, your installer may be mounted on `/media/usb` and the destination is `/dev/sdc1`; you can figure out the first one using `mount`.

setup-bootable /media/usb /dev/sdc1


At this point, you have made a FAT32 USB disk containing the Alpine Linux installer you were running live.  Reboot on the new one.

## System installation

On your new installation media, run `setup-alpine` as if you were installing Alpine Linux, but answer "none" when you are asked which disk you want to use.  When asked "Enter where to store configs", you should be offered your new device by default; accept.  Immediately after, you will be prompted for an APK cache; accept.

At this point, we can say Alpine is installed!  Don't reboot yet, you are already on your new system!

Just use it, and run `lbu commit` when you need to save changes done to packages or `/etc/`.  `lbu commit` creates a new tarball on your USB disk containing the files listed in `/etc/apk/protected_paths.d/`; this tarball is loaded at boot time, and your packages are quickly reinstalled from the local cache.

=> https://wiki.alpinelinux.org/wiki/Alpine_local_backup Alpine Linux wiki: Alpine local backup (lbu command documentation)
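
For reference, a few handy `lbu` sub commands; the path below is only an illustration:

doas lbu include /root/scripts  # persist an extra path
doas lbu status                 # show what changed since the last commit
doas lbu commit                 # save the changes to the boot media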

Please take extra care: if you include more files, they all have to be stored on your USB media every time you commit the changes.  You could modify the fstab to add an extra disk/partition for persistent data on a more performant drive.

# Updating the kernel

The kernel can't be upgraded using apk; you have to use the script `update-kernel`, which will create a "modloop" file in the boot partition containing the boot image.  You can't roll back this file.

You will need a few gigabytes in your in-memory filesystem, or use a temporary build directory by pointing the `TMPDIR` variable at persistent storage.

By default, the tmpfs on root is set to 1 GB; it can be increased, given you have enough memory, using the command: `mount -o remount,size=6G /`.

The script takes the boot directory as a parameter, so it should look like `update-kernel /media/usb/boot` in a default setup; if you use an external build partition, this would look like `env TMPDIR=/mnt/something/ update-kernel /media/usb/boot`.

## Extra configuration

Here is a list of tweaks to improve your experience!

### keep the last n configurations

By default, lbu will only keep the last version you save.  By setting `BACKUP_LIMIT` to a number n, you will always have the last n versions of your system configuration stored on the boot media, which is practical if you want to roll back a change.
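
This is set in lbu's configuration file, for example to keep the last 3 versions:

# /etc/lbu/lbu.conf
BACKUP_LIMIT=3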

### apk repositories

Edit `/etc/apk/repositories` to uncomment the community repository.

### fstab check

Edit `/etc/fstab` to make sure the disk you are using is explicitly configured with a UUID entry; if you only have this:

/dev/cdrom /media/cdrom iso9660 noauto,ro 0 0
/dev/usbdisk /media/usb vfat noauto,ro 0 0


This means your system may have trouble if you use it on a different computer or if you plug another USB disk into it.  Fix it by using the UUID of your partition, which you can find with the program `blkid` from the eponymous package, and change the fstab like this:

UUID=61B2-04FA /media/persist vfat noauto,ro 0 0
/dev/cdrom /media/cdrom iso9660 noauto,ro 0 0
/dev/usbdisk /media/usb vfat noauto,ro 0 0


This will ALWAYS mount your drive as `/media/persist`.

If you had to make the change, you need to make some extra changes to keep things coherent:



### desktop setup

You can install a graphical desktop, this can easily be done with these commands:

setup-desktop xfce
setup-xorg-base


Due to a bug, we have to re-enable some important services, otherwise you would not have networking at the next boot:

rc-update add hwdrivers sysinit


=> https://gitlab.alpinelinux.org/alpine/aports/-/issues/9653 Alpine bug report #9653

You may want to enable the display manager at boot, which may be lightdm, gdm or sddm depending on your desktop:

rc-update add lightdm


### user persistency

If you added a user during `setup-alpine`, their home directory has been automatically added to `/etc/apk/protected_paths.d/lbu.list`; when you run `lbu commit`, the whole home is stored.  This may not be desired.

If you don't want to save the whole home directory, but only a selection of files/directories, here is how to proceed:

1. edit `/etc/apk/protected_paths.d/lbu.list` to remove the line adding your user directory
2. you need to create the user directory at boot with the correct permissions: `echo "install -d -o solene -g solene -m 700 /home/solene" | doas tee /etc/local.d/00-user.start`
3. in case you made at least one of the user's sub-directories persistent, it's important to fix the permissions of all the user data after boot: `echo "chown -R solene:solene /home/solene" | doas tee -a /etc/local.d/00-user.start`
4. you need to mark this script as executable: `doas chmod +x /etc/local.d/00-user.start`
5. you need to run the local scripts at boot time: `doas rc-update add local`
6. save the changes: `doas lbu commit`

I'd recommend using a directory named `Persist` and adding it to the lbu list.  Doing so, you have a place to store important data without having to save the whole home directory (including garbage such as caches).  This is even nicer if you use ecryptfs as explained below.
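
Reusing the user solene from the examples above, that would look like:

doas lbu include /home/solene/Persist
doas lbu commit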

### extra convenience

Because Alpine Linux is packaged in a minimalistic manner, you may have to install a lot of extra packages to have all the fonts, icons, emojis, cursors etc... working correctly as you would expect for a standard Linux desktop.

Fortunately, there is a community guide explaining each section you may want to configure.

=> https://wiki.alpinelinux.org/wiki/Post_installation Alpine Linux wiki: Post installation

### Set X default keyboard layout

Alpine insists on a qwerty layout for X until you log into your session, which can make typing passwords complicated.

You can create a file `/etc/X11/xorg.conf.d/00-keyboard.conf` like in the linked example and choose your default keyboard layout.  You will have to create the directory `/etc/X11/xorg.conf.d` first.

=> https://wiki.archlinux.org/title/Xorg/Keyboard_configuration#Using_X_configuration_files Arch Linux wiki: Keyboard configuration

### encrypted personal directory

You could use ecryptfs to either encrypt the home directory of your user, or just give them a Private directory that can be unlocked on demand AND made persistent, without pulling all the user files at every configuration commit.

$ doas apk add ecryptfs-utils
$ doas modprobe ecryptfs
$ ecryptfs-setup-private
Enter your login passphrase [solene]:
Enter your mount passphrase [leave blank to generate one]:
[...]
$ doas lbu add $HOME/.Private
$ doas lbu add $HOME/.ecryptfs
$ echo "install -d -o solene -g solene -m 700 /home/solene/Private" | doas tee /etc/local.d/50-ecryptfs.start
$ doas chmod +x /etc/local.d/50-ecryptfs.start
$ doas rc-update add local
$ doas lbu commit


Now, when you need to access your private directory, run `ecryptfs-mount-private` and you have your `$HOME/Private` directory which is encrypted.

You could also use ecryptfs to encrypt the whole user directory; this requires extra steps and changes to `/etc/pam.d/base-auth`.  Don't forget to add `/home/.ecryptfs` to the lbu include list.

=> https://dataswamp.org/~solene/2023-03-12-encrypt-with-ecryptfs.html Using ecryptfs guide

# Security

Let's be clear, this setup isn't secure!  The weak part is the boot media, which doesn't use secure boot, could easily be modified, and has nothing encrypted (except the local backups, but NOT BY DEFAULT).

However, once the system has booted, if you remove the boot media, nothing can be damaged as everything lives in memory, but you should still use passwords for your users.

# Conclusion

Alpine is a very good platform for this kind of setup, and they provide all the tools out of the box!  It's a very fun setup to play with.

Don't forget that by default everything runs from memory without persistency, so be careful if you generate data you don't want to lose (passwords, downloads, etc...).

# Going further

The lbu configuration can be encrypted, this is recommended if you plan to carry your disk around, especially if it contains sensitive data.

You can use the fat32 partition only for the bootloader and the local backup files, but you could have an extra partition that could be mounted for /home or something, and why not a layer of LUKS for encryption.

You may want to use zram if you are tight on memory: it creates a compressed block device that can be used for swap; it's basically compressed RAM, very efficient, but less useful if you have a slow CPU.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/alpine-linux-from-ram-but-persistent.gmi</guid>
  <link>gemini://perso.pw/blog//articles/alpine-linux-from-ram-but-persistent.gmi</link>
  <pubDate>Tue, 18 Jul 2023 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>