<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>gemini://perso.pw/blog/</link>
    <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>WireGuard and Linux network namespaces</title>
  <description>
    <![CDATA[
<pre># Introduction

This guide explains how to set up a WireGuard tunnel on Linux using a dedicated network namespace, so you can choose to run a program over the VPN or over clearnet.

I was able to figure out the setup thanks to the following blog post, which I enhanced a bit with scripts and sudo rules.

=> https://www.ismailzai.com/blog/creating-wireguard-jails-with-linux-network-namespaces Mo Ismailzai's blog: Creating WireGuard jails with Linux network namespaces

# Explanations

By default, when you bring up a WireGuard tunnel, its "AllowedIPs" field is used as a route with a higher priority than your current default route.  It is not always ideal to have everything routed through a VPN, so you will create a dedicated network namespace that uses the VPN as its default route, without affecting any other software.

Unfortunately, compared to OpenBSD rdomains (which provide the same feature in this situation), network namespaces are much more complicated to deal with and require root to run a program under a namespace.

You will create a SAFE sudo rule to allow your user to run commands under the new namespace, making it more practical for daily use.

# Setup

## VPN tunnel and namespace

You need a wg-quick compatible WireGuard configuration file, but do not configure it to start automatically at boot.

Create a script (for root use only) with the following content, then make it executable:

#!/bin/sh

# your VPN configuration file
CONFIG=/etc/wireguard/my-vpn.conf

# this directory is used to have a per netns resolver file
mkdir -p /etc/netns/vpn/

# cleanup any previous VPN in case you want to restart it
ip netns exec vpn ip l del tun0
ip netns del vpn

# information to reuse later
DNS=$(awk '/^DNS/ { print $3 }' $CONFIG)
IP=$(awk '/^Address/ { print $3 }' $CONFIG)

# the namespace will use the DNS defined in the VPN configuration file
echo "nameserver $DNS" > /etc/netns/vpn/resolv.conf

# now, it creates the namespace and configures it
ip netns add vpn
ip -n vpn link set lo up
ip link add tun0 type wireguard
ip link set tun0 netns vpn
ip netns exec vpn wg setconf tun0 <(wg-quick strip "$CONFIG")
ip -n vpn a add "$IP" dev tun0
ip -n vpn link set tun0 up
ip -n vpn route add default dev tun0
ip -n vpn add

# extra check if you want to verify the DNS used and the public IP assigned
#ip netns exec vpn dig ifconfig.me
#ip netns exec vpn curl https://ifconfig.me


This script automatically configures the network namespace, the VPN interface and the DNS server to use.  There are extra checks at the end of the script that you can uncomment if you want to take a look at the public IP and the DNS resolver used right after connection.

Running this script will make the netns "vpn" available for use.

The command to run a program under the namespace is `ip netns exec vpn your command`; it can only be run as root.

## Sudo rule

Now you need a specific sudo rule so you can run a command in the vpn netns as your own user without having to log in as root.

Add this to your sudo configuration file, in my example I allow the user `solene` to run commands as `solene` for the netns vpn:

solene ALL=(root) NOPASSWD: /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene -- *


When using this command line, you MUST use the full paths exactly as written in the sudo configuration file.  This is important: otherwise, it would allow you to create a script called `ip` containing arbitrary commands and run it as root, whereas `/usr/sbin/ip` can not be spoofed by a local script in $PATH.

If I want a shell session with the VPN, I can run the following command:

sudo /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene -- bash


This runs bash under the netns vpn, so any command I'm running from it will be using the VPN.
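
To make daily use less verbose, the long command line could be wrapped in a small helper script; here is a hypothetical sketch reusing the sudo rule above (the path and user name are examples):

#!/bin/sh
# hypothetical helper, e.g. /usr/local/bin/vpnexec
# run the given command inside the "vpn" netns as the user solene
exec sudo /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene -- "$@"

With it, `vpnexec curl https://ifconfig.me` would run curl through the VPN.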

# Limitations

It is not a real limitation, but it may catch you off guard: if a program listens on localhost within the netns vpn, you can only connect to it from another program in the same namespace.  There are methods to connect two namespaces, which I do not plan to cover here; if you need such a setup, it can be done using socat (as explained in the blog post linked earlier) or a local bridge interface, see the sketch below.
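
For reference, a minimal socat sketch (not taken from the blog post linked above): it exposes a hypothetical service listening on 127.0.0.1:8000 inside the vpn netns to the default namespace on the same port, and must be run as root:

socat TCP-LISTEN:8000,fork,reuseaddr EXEC:'ip netns exec vpn socat STDIO TCP-CONNECT\:127.0.0.1\:8000'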

# Conclusion

Network namespaces are a cool Linux feature, but they are overly complicated in my opinion.  Unfortunately I have to deal with them, but at least the setup works fine in practice.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/linux-vpn-netns.gmi</guid>
  <link>gemini://perso.pw/blog//articles/linux-vpn-netns.gmi</link>
  <pubDate>Thu, 04 Jul 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>The Old Computer Challenge v4 (Olympics edition)</title>
  <description>
    <![CDATA[
<pre># Introduction

This is the time of the year when I announce the Old Computer Challenge (OCC) dates.

I recommend visiting the community website about the OCC if you want to connect with the community.

=> https://occ.deadnet.se/ Old Computer Challenge community

=> https://dataswamp.org/~solene/tag-oldcomputerchallenge.html The Old Computer Challenge history

=> static/occ-v4.jpg The Old Computer Challenge v4 poster, by @prahou@merveilles.town on Mastodon

# When?

The Old Computer Challenge 4th edition will run from 13th July to 20th July 2024.  It will be the prequel to the Olympics; I was not able to get the challenge accepted there, so we will do it our way.

# How to participate?

While the three previous editions had different rules, I came to agree with the community for this year.  Choose your rules!

When I did the challenge for the first time, I did not expect it to become a yearly event, nor that it would gather aficionados along the way.  The original point of the challenge was just to see if I could use my oldest laptop as my main computer for a week; there was no incentive, it was not a contest and I did not have any written rules.

Previous editions' rules were about using an old laptop, using a computer with limited hardware (with tips to slow down a modern machine), or limiting Internet access to a single hour per day.  I always insist that it should not hinder your job, so participants do not have to "play" during work.  Smartphones became complicated to handle, especially with the limited Internet access; all I can recommend is to define some rules you want to stick to and apply them the best you can.  If you realllyyyy need to use a device that would break the rules once, so be it if it is really important, nobody will yell at you.

People doing the OCC enjoy it for multiple reasons, find yours!  Some find the opportunity to disconnect a bit, change their habits, do some technoarcheology to run rare hardware, play with low-tech, demonstrate that obsolescence is not inevitable, etc...

Some ideas if you do not know what to do for the challenge:



# What to do during the challenge?

You can join the community and share your experience.

There are many ways!  It's the opportunity to learn how to use Gopher or Gemini to publish content, to join the mailing list and participate with the others, or simply to come to the IRC channel to chat a bit.

# I can't join during 13th to 20th July!

Well, as nobody forces you to do the OCC, you can just do it whenever you want, even in December if it suits your calendar better than mid July; nobody will complain.

# Conclusion

There is a single rule: do it for fun!  Do not impede yourself for weird reasons, it is here for fun, and doing the whole week is as good as failing and writing about why you failed.  It is not a contest; just try, see how it goes, and tell us your story :)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/old-computer-challenge-v4-announce.gmi</guid>
  <link>gemini://perso.pw/blog//articles/old-computer-challenge-v4-announce.gmi</link>
  <pubDate>Mon, 24 Jun 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How to mount ISO or file disk images on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

If you ever need to mount a .iso file on OpenBSD, you may wonder how to proceed, as the command `mount_cd9660` requires a device name.

While the solution is entirely documented in the man pages and in the official FAQ, it may not be easy to find at first glance, especially since most operating systems allow mounting an ISO file in a single step, whereas OpenBSD requires an extra one.

=> https://www.openbsd.org/faq/faq14.html#MountImage OpenBSD FAQ: Mounting disk images
=> https://man.openbsd.org/vnconfig#EXAMPLES OpenBSD manual page: vnconfig(8) EXAMPLES section

Note that this method also works for disk images, not only .iso files.

# Exposing a file as a device

On OpenBSD you need to use the command `vnconfig` to map a file to a device node, allowing interesting actions such as using a file as a storage disk (which you can encrypt) or mounting a .iso file.

This command must be used as root as it manipulates files in /dev.

# Mounting an ISO file

Now, let's see how to mount a .iso file, which is (most of the time) a dump of a CD9660 file system:

vnconfig vnd0 /path/to/file.iso


This will create a new device `/dev/vnd0`, now you can mount it on your file-system with:

mount -t cd9660 /dev/vnd0c /mnt


You should be able to browse your ISO file content in /mnt at this point.

# Unmounting

If you are done with the file, you have to unmount it with `umount /mnt` and destroy the vnd device using `vnconfig -u vnd0`.

# Going further: Using a file as an encrypted disk

If you want to use a single file as a file system, you first have to provision the file with disk space using the command `dd`.  You can fill it with zeroes, but if you plan to use encryption on top of it, it is better to use random data.  In the following example, you will create a file `my-disk.img` with a size of 10 GB (1000 x 10 MB):

dd if=/dev/random of=my-disk.img bs=10M count=1000


Now you can use vnconfig to expose it as a device:

vnconfig vnd0 my-disk.img


Finally, the command `bioctl` can be used to configure encryption on the disk, `disklabel` to partition it and `newfs` to format the partitions.  You can follow the OpenBSD FAQ guides; just make sure to use the device name `/dev/vnd0` instead of wd0 or sd0 from the examples, as shown in the sketch after the link below.

=> https://www.openbsd.org/faq/faq14.html#softraidCrypto OpenBSD FAQ: Encrypting external disk
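
For reference, here is a minimal sketch of that flow; it assumes the softraid volume gets attached as sd2, check the bioctl output for the actual device name on your system:

vnconfig vnd0 my-disk.img
fdisk -iy vnd0                    # initialize the virtual disk partition table
disklabel -E vnd0                 # create an "a" partition with fstype RAID
bioctl -c C -l vnd0a softraid0    # prompts for a passphrase, attaches a new device (e.g. sd2)
disklabel -E sd2                  # create an "a" partition with fstype 4.2BSD
newfs sd2a
mount /dev/sd2a /mnt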
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/mount-iso-file-openbsd.gmi</guid>
  <link>gemini://perso.pw/blog//articles/mount-iso-file-openbsd.gmi</link>
  <pubDate>Tue, 18 Jun 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>OpenBSD extreme privacy setup</title>
  <description>
    <![CDATA[
<pre># Introduction

This blog post explains how to configure an OpenBSD workstation with extreme privacy in mind.

This is an attempt to turn OpenBSD into a Whonix or Tails alternative, although if you really need that level of privacy, use one of those systems and not the present guide.  It is easy to spot OpenBSD using network fingerprinting; this can not be defeated, you can not hide from network operators the fact that you use OpenBSD.

I did this guide as a challenge for fun, but I also know some users have a use for this level of privacy.

Note: this guide covers steps to increase the privacy of OpenBSD and its base system; it will not explain how to configure a web browser or how to choose a VPN.

# Checklist

OpenBSD does not have much network activity with a default installation, but the following programs generate traffic:



# Setup

## OpenBSD installation

If you do not have OpenBSD installed yet, you will have to download an installer.  Choose from the official mirrors or my tor/i2p proxy mirror.

=> https://www.openbsd.org/faq/faq4.html#Download OpenBSD official website: Downloading OpenBSD
=> https://dataswamp.org/~solene/2024-05-25-openbsd-privacy-friendly-mirror.html OpenBSD privacy-friendly mirrors

Choose the full installer; for 7.5 it would be install75.img for a USB installer or install75.iso for a CD-ROM.

It is important to choose the full installer to avoid any network access at install time.

Full disk encryption is recommended, but it's your choice.  If you choose encryption, it is recommended to wipe the drive with random data beforehand.

=> https://www.openbsd.org/faq/faq14.html#softraid OpenBSD FAQ: Crypto and disks

During the installation, do not configure the network at all.  You want to prevent syspatch and fw_update from running at the end of the installer, and ntpd from pinging many servers upon boot.

## First boot (post installation)

Once OpenBSD has booted after the installation, you need to make a decision about ntpd (the time synchronization daemon).



Whonix (maybe Tails too?) uses a custom tailored program named sdwdate to update the system clock over Tor (because Tor only supports TCP while NTP uses UDP); it is unfortunately not easily portable to OpenBSD.

The next step is to edit the file `/etc/hosts` to disable the firmware server whose hostname is hard-coded in the program `fw_update`; add this line to the file:

127.0.0.9 firmware.openbsd.org


## Packages, firmware and mirrors

The firmware installation and OpenBSD mirror configuration using Tor and I2P are covered in my previous article, which explains how to use tor or i2p to download firmware, packages and system sets for upgrades.

=> https://dataswamp.org/~solene/2024-05-25-openbsd-privacy-friendly-mirror.html OpenBSD privacy-friendly mirrors

There is a chicken-and-egg issue with this though: on a fresh install you have neither tor nor i2p, so you can not download the tor or i2p packages through them.  You could download the packages and their dependencies from another system and install them locally from a USB memory stick.

Wi-Fi and some other devices requiring a firmware may not work until you run fw_update; you may have to download the files from another system and transfer the network interface firmware over a USB memory stick to get networking.  A smartphone with USB tethering is also a practical approach for downloading firmware, but you will have to download it over clearnet.

## DNS

DNS is a huge topic for privacy-oriented users.  I can not really recommend a given public DNS server because they all have pros and cons; I will use 1.1.1.1 and 9.9.9.9 in the example, but use your favorite DNS.

Enable the daemon unwind: it is a local caching DNS resolver that supports DoT, DoH and many cool features.  Edit the file `/etc/unwind.conf` with this configuration:

forwarder { 1.1.1.1 9.9.9.9 }


As I said, DoT and DoH are supported; you can configure them directly in the forwarder block, the man page explains the syntax:

=> https://man.openbsd.org/unwind.conf OpenBSD manual pages: unwind.conf

Now, enable, start and make sure the service is running fine:

rcctl enable unwind
rcctl start unwind
rcctl check unwind


A program named `resolvd` runs by default; when it finds that unwind is running, resolvd modifies `/etc/resolv.conf` to switch DNS resolution to 127.0.0.1, so you do not have anything to do.

## Firewall configuration

A sane firewall configuration for workstations is to block all incoming connections.  This can be achieved with the following `/etc/pf.conf` (reminder: in pf, the last matching rule wins):

set block-policy drop
set skip on lo

match in all scrub (no-df random-id max-mss 1440)

antispoof quick for egress

# block all traffic (in/out)
block

# allow reaching the outside (IPv4 + IPv6)
pass out quick inet
pass out quick inet6

# allow ICMP (ping) for MTU discovery
pass in proto icmp

# uncomment if you use SLAAC or ICMP6 (IPv6)
#pass in on egress inet6 proto icmp6
#pass in on egress inet6 proto udp from fe80::/10 port dhcpv6-server to fe80::/10 port dhcpv6-client no state


Reload the rules with `pfctl -f /etc/pf.conf`.

## Network configuration

Everything is ready so you can finally enable networking.  You can find a list of network interfaces with `ifconfig`.

Create the hostname.if file for your network device.

=> https://man.openbsd.org/hostname.if OpenBSD manual pages: hostname.if

An Ethernet device configuration using DHCP would look like this:

inet autoconf


A wireless device configuration would look like this:

join SSID_NAME wpakey password1
join OTHER_NET wpakey hunter2
inet autoconf


You can randomize your network device MAC address at each boot by adding the line `lladdr random` to its configuration file.
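
For example, a hypothetical `/etc/hostname.iwm0` combining the snippets above could look like this (the interface name, SSID and key are placeholders):

join SSID_NAME wpakey password1
lladdr random
inet autoconf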

Start the network with `sh /etc/netstart ifname`.

# Special attention during updates

When you upgrade your OpenBSD system from a release to another or to a newer snapshot using `sysupgrade`, the command `fw_update` will automatically be run at the very end of the installer.

It will bypass any `/etc/hosts` changes as it runs from a mini root filesystem.  If you do not want `fw_update` to go over clearnet at this step, the only method is to disable the network during the upgrade, which can be done by using `sysupgrade -n` to prepare the upgrade without rebooting, and then:



You could use this script to automate the process:

mv /etc/hostname.* /root/
sysupgrade -n
echo 'mv /root/hostname.* /etc/' > /etc/rc.firsttime
echo 'sh /etc/netstart' >> /etc/rc.firsttime
chmod +x /etc/rc.firsttime
reboot


It moves all your network configuration into `/root/`, runs sysupgrade, and configures the next boot to restore the hostname files back into place and start the network.

# Webcam and Microphone protection

By default, OpenBSD "filters" webcam and microphone use, if you try to use them, you get a video stream with a black background and no audio on the microphone. This is handled directly by the kernel and only root can change this behavior.

To toggle microphone recording, change the sysctl `kern.audio.record` to 1 or 0 (default).

To toggle webcam recording, change the sysctl `kern.video.record` to 1 or 0 (default).

What is cool with this mechanism is that it keeps software happy when it requires a webcam/microphone: the devices exist, they just record nothing.
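
For instance, a minimal sketch to temporarily allow microphone recording and make the choice persistent (run as root):

sysctl kern.audio.record=1                      # allow recording until next reboot
echo kern.audio.record=1 >> /etc/sysctl.conf    # keep the setting across reboots
sysctl kern.audio.record=0                      # back to the default (record nothing)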

# Conclusion

Congratulations, you achieved a high privacy level with your OpenBSD installation!  If you have money and enough trust in some commercial services, you could use a VPN instead of (or underneath) Tor/I2P, but that is not in the scope of this guide.

I did this guide after installing OpenBSD on a laptop connected to another laptop doing NAT and running Wireshark to see exactly what was leaking over the network.  It was a fun experience.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-privacy-setup.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-privacy-setup.gmi</link>
  <pubDate>Mon, 10 Jun 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>OpenBSD mirror over Tor / I2P</title>
  <description>
    <![CDATA[
<pre># Introduction

For an upcoming privacy-related article about OpenBSD, I needed to set up access to an OpenBSD mirror both as a Tor hidden service and over I2P.

The server does not contain any data; it only acts as a proxy fetching files from a random existing OpenBSD mirror, so it does not waste bandwidth mirroring everything (the server does not have the required storage anyway).  There is a small cache to keep the most requested files locally.

=> https://en.wikipedia.org/wiki/I2P Wikipedia page about I2P protocol
=> https://en.wikipedia.org/wiki/The_Tor_Project Wikipedia page about Tor

It is only useful if you can not reach OpenBSD mirrors, or if you really need to hide your network activity.  Tor or I2P will be much slower than connecting to a mirror using HTTP(s).

However, as they exist now, let me explain how to start using them.

# Tor

Using a client with tor proxy enabled, you can reach the following address to download installers or sets.

=> http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/pub/OpenBSD/ OpenBSD onion mirror over Tor

If you want to install or update your packages from tor, you can use the onion address in `/etc/installurl`. However, it will not work for sysupgrade and syspatch, and you need to export the variable `FETCH_CMD="/usr/local/bin/curl -L -s -q -N -x socks5h://127.0.0.1:9050"` in your environment to make `pkg_*` programs able to use the mirror.
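
As a minimal sketch, the whole thing could look like this (the curl path assumes the curl package is installed):

# use the onion mirror for pkg_add and friends
echo 'http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/pub/OpenBSD' > /etc/installurl

# make pkg_* fetch through the local tor SOCKS proxy
export FETCH_CMD="/usr/local/bin/curl -L -s -q -N -x socks5h://127.0.0.1:9050"
pkg_add -u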

To make sysupgrade or syspatch able to use the onion address, you need to have the program `torsocks` installed, and patch the script to use torsocks:



These patches will have to be reapplied after each sysupgrade run.

# I2P

If you have a client with i2p proxy enabled, you can reach the following address to download installers or sets.

=> http://2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p:8081/pub/OpenBSD/ OpenBSD mirror address over I2P

If you want to install or update your packages over i2p, install i2pd with `pkg_add i2pd`, then edit the file `/etc/i2pd/i2pd.conf` to set `notransit = true`, unless you want to act as an i2p relay (high CPU/bandwidth consumption).

Replace the file `/etc/i2pd/tunnels.conf` with the following content (or adapt your current tunnels.conf if you configured it earlier):

[MIRROR]
type = client
address = 127.0.0.1
port = 8080
destination = 2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p
destinationport = 8081
keys = mirror.dat


Now, enable and start i2pd with `rcctl enable i2pd && rcctl start i2pd`.

After a few minutes, once i2pd has established tunnels, you should be able to browse the mirror over i2p using the address `http://127.0.0.1:8080/`.  You can change the port 8080 to another one you prefer by modifying the file `tunnels.conf`.

You can use the address `http://127.0.0.1:8080/pub/OpenBSD/` in `/etc/installurl` to automatically use the I2P mirror for installing/updating packages, or keeping your system up to date with syspatch/sysupgrade.

Note: from experience, the I2P mirror works fine for installing packages, but it did not play well with fw_update, syspatch and sysupgrade, maybe because they use the ftp command, which seems to drop the connection easily.  Downloading the files locally using a proper HTTP client supporting transfer resume would be better.  On the other hand, this issue may be related to the attack the I2P network is facing at the time of writing (May 2024).

# Firmware mirror

OpenBSD pulls firmware from a different server than the regular mirrors: the address is `http://firmware.openbsd.org/firmware/`.  The files on this server are signed packages; they can be installed using `fw_update $file`.

Both the i2p and tor hidden service hostnames can be reused; you only have to replace `/pub/OpenBSD/` with `/firmware/` to browse the files.

The proxy server does not cache any firmware; it proxies directly to the genuine firmware web server.  The firmware files live on a separate server for legal reasons, it seems to be a grey area.

## Disable firmware.openbsd.org

For maximum privacy, you need to neutralize the `firmware.openbsd.org` DNS lookup using a hosts entry.  This is important because `fw_update` is automatically run after a system upgrade (as of 2024).

In `/etc/hosts` add the line:

127.0.0.9 firmware.openbsd.org


The IP in the snippet above is not a mistake; it prevents fw_update from connecting to a local web server, if any.

## Tor access

If you use tor, it is complicated to patch `fw_update` to use torsocks; the best method is to download the firmware manually.

=> http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/firmware/ Firmware onion address

## I2P access

If you use i2p, you can reuse the tunnel configuration described in the I2P section, and pass the full url to `fw_update`:

# release users
fw_update -p http://127.0.0.1:8080/firmware/$(uname -r)/

# snapshot users
fw_update -p http://127.0.0.1:8080/firmware/snapshots/


Or you can browse the I2P url using an http client with the i2p proxy to download the firmware manually.

=> http://2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p:8081/firmware/ Firmware i2p address

# Conclusion

There was no method to download OpenBSD files over Tor and I2P for people who really need it; now there is.

If you encounter issues with the service, please let me know.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-privacy-friendly-mirror.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-privacy-friendly-mirror.gmi</link>
  <pubDate>Sat, 25 May 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Improve your SSH agent security</title>
  <description>
    <![CDATA[
<pre># Introduction

If you are using SSH quite often, it is likely you use an SSH agent which stores your private key in memory so you do not have to type your password every time.

This method is convenient, but it comes at the expense of your SSH key's security: anyone able to use your session while the agent holds the key unlocked can use your SSH key.  This scenario is most likely to happen when running a compromised build script.

However, it is possible to harden this process at a small cost in convenience: make your SSH agent ask for confirmation every time the key has to be used.

# Setup

The tooling provided with OpenSSH comes with a simple SSH agent named `ssh-agent`.  On OpenBSD, the agent is automatically started and asks to unlock your key upon graphical login if it finds an SSH key in a default path (like `~/.ssh/id_rsa`).

Usually, the method to run ssh-agent is the following: in a shell script defining your environment at an early stage, either your interactive shell configuration file or the script running your X session, you use `eval $(ssh-agent -s)`.  This command runs ssh-agent and also exports the environment variables needed to make it work.
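
As a minimal sketch, this could look like the following snippet in `~/.xsession` or a shell profile (the guard against an already running agent is an extra precaution, not part of the original example):

# start an agent only if none is already available in this session
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)"
fi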

Once your ssh-agent is correctly configured, you need to add a key into it; here are two methods to proceed.

## OpenSSH ssh-add

In addition to ssh-agent, OpenSSH provides ssh-add to load keys into the agent.  It is simple to use: just run `ssh-add /path/to/key`.

=> https://man.openbsd.org/ssh-add ssh-add manual page

If you want to have a GUI confirmation upon each SSH key use, just add the flag `-c` to this command line: `ssh-add -c /path/to/key`.

On OpenBSD, if your key is at a standard location, you can modify the script `/etc/X11/xenodm/Xsession` to change the first occurrence of `ssh-add` to `ssh-add -c`.  You will still be prompted for your key password upon login, but you will also be asked to confirm each use of the key.

## KeepassXC

It turns out the password manager KeepassXC can hold SSH keys; it works great, I have been using it this way for a while.  KeepassXC can either store the private key within its database, or load a private key from the filesystem using a path and unlock it with a stored password; the choice is up to you.

You still need the ssh-agent variables in your environment for the feature to work, as KeepassXC replaces only ssh-add, not the agent.

The KeepassXC documentation has an "SSH Agent integration" section explaining how it works and how to configure it.

=> https://keepassxc.org/docs/ KeepassXC official documentation

In the key settings, under the "SSH Agent" tab, there is a checkbox to ask for user confirmation upon each key use.

# Other security features

## Timeout

I would recommend automatically deleting the key from the agent after some time; this is especially useful if you do not actively use your SSH key.

In `ssh-add`, this can be achieved using the `-t time` flag (it's tea time, if you want a mnemonic), where time is a number of seconds or a time format specified in sshd_config, like 5s for 5 seconds, 10m for 10 minutes, 16h for 16 hours or 2d for 2 days.
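
Combining both options, a minimal sketch would be (the key path is an example):

# ask for a confirmation on each use and drop the key after 30 minutes
ssh-add -c -t 30m ~/.ssh/id_ed25519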

In KeepassXC, in the key settings within the SSH Agent tab, you can configure the delay before the key is removed from the agent.

# Conclusion

ssh-agent is a practical piece of software that eases the use of SSH keys without compromising much on security, but some extra hardening can be useful in certain scenarios, especially for developers running untrusted code as the user holding the SSH key.

While the extra confirmation could still be manipulated by a rogue script, doing so would require greater complexity and run a higher risk of being spotted.  If you really want to protect your SSH keys, you should use them from a hardware token requiring a physical action to unlock it.  While I find those tokens impractical and expensive, they have their use, and they can not be beaten by a pure software solution.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/ssh-agent-security-enhancement.gmi</guid>
  <link>gemini://perso.pw/blog//articles/ssh-agent-security-enhancement.gmi</link>
  <pubDate>Mon, 27 May 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Organize your console with tmuxinator</title>
  <description>
    <![CDATA[
<pre># Introduction

This article is about the program tmuxinator, a tool to script the generation of tmux sessions from a configuration file.

=> https://github.com/tmuxinator/tmuxinator tmuxinator official project website on GitHub

This program is particularly useful when you have repeated tasks to achieve in a terminal, or if you want to automate your tmux session to save your fingers from always typing the same commands.

tmuxinator is packaged in most distributions and requires tmux to work.

# Configuration

tmuxinator requires a configuration file for each "session" you want to manage with it.  It provides a command line parameter to generate a file from a template:

$ tmuxinator new name_here


By default, it will create the YAML file for this project in `$HOME/.config/tmuxinator/name_here.yml`; if you want the project file to live in the current directory (to make it part of a versioned project repository?), you can add the parameter `--local`.

# Real world example

Here is a tmuxinator configuration file I use to automatically do the following tasks; the commands include a lot of monitoring as I love watching progress and statistics:



I can start all of this using `tmuxinator start dpb`, or stop only these "parts" of tmux with `tmuxinator stop dpb`, which is practical when using tmux a lot.

Here is my file `dpb.yml`:

name: dpb
root: ~/

# Runs on project start, always
on_project_start: cd /usr/ports && doas -u solene git pull -r

windows:
  - dpb:
      layout: tiled
      panes:
        - dpb:
          - cd /root/packages/packages
          - ./dpb.sh -P list.txt -R
        - watcher:
          - cd /root/logs
          - ls -altrh locks
          - date
        - while true ; do clear && env CCACHE_DIR=/build/tmp/pobj/.ccache/ ccache -s ; sleep 5 ; done
        - while true ; do df -h /build/tmp/pobj_mfs/ | grep % ; sleep 10 ; done
        - top
        - top -U _pbuild


# Going further

Tmuxinator could be used to ssh into remote servers, connect to IRC, open your email client, clean stuff up: there are no limits.

This is particularly easy to configure as it does not try to run commands, but only sends the keys to each tmux pane, which means it sends keystrokes just as if you typed them.  In the example above, you can see how the pane "dpb" can cd into a directory and then run a command, or how the pane "watcher" can run multiple commands and leave the shell as is.

# Conclusion

I had known about tmuxinator for a while, but I never gave it a try before this week.  I really regret not doing it earlier.  Not only does it allow me to "script" my console usage, but I can also embed some development configuration into my repositories.  While you could use it as an automation method, I would not rely too much on it, as it only types blindly on the keyboard.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/potw-tmuxinator.gmi</guid>
  <link>gemini://perso.pw/blog//articles/potw-tmuxinator.gmi</link>
  <pubDate>Mon, 20 May 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>What is going on in Nix community?</title>
  <description>
    <![CDATA[
<pre># Introduction

You may have heard about issues within the Nix/NixOS community; this blog post will try to help you understand what is going on.

Please note that it is hard to get a grasp of the big picture; it is more of a long-term feeling that the project governance was wrong (or absent?) and people got tired.

This blog post was written from my own knowledge and feelings; I clearly do not represent the community.

=> https://save-nix-together.org/ Save Nix Together: an open letter to the NixOS foundation
=> https://xeiaso.net/blog/2024/much-ado-about-nothing/ Xe blog post: Much ado about nothing

There is a maintainer departure milestone in the Nixpkgs GitHub project.

=> https://github.com/NixOS/nixpkgs/milestone/27 GitHub milestone 27: Maintainers leaving

# Project structure

First, it is important to understand how the project works.

Nix (and NixOS, though it is not the core of the project) was developed by Eelco Dolstra in the early 2000s.  The project is open source, available on GitHub, and everyone can contribute.

Nix is a tool to handle packaging in a certain way, and it has another huge repository (a top 10 GitHub repo) called nixpkgs that contains all the package definitions.  nixpkgs is known to be the most up-to-date and the biggest repository of packages, thanks to heavy automation and a huge community.

The NixOS Foundation (that's the name of the entity managing the project) has a board that steers the project and handles questions.  The first problem is that it is known to be slow to act and respond.

Making huge changes to Nix or nixpkgs requires writing an RFC (Request For Comments) explaining the rationale behind the change, and a consensus has to be found with others (it is somewhat democratic).  Eelco decided a while ago to introduce a huge change in Nix (called Flakes) without going through the whole RFC process.  This introduced a lot of tension and criticism because he should have gone through the process like everyone else; the feature is half-baked but got some traction, and now the Nix paradigm is split between two different modes that are not really compatible.

=> https://github.com/NixOS/rfcs/pull/49#issuecomment-659372623 GitHub Pull request to introduce Flakes: Eelco Dolstra mentioning they could merge it as experimental

There are also issues related to some sponsors of the Nix conferences, like companies related to the military, but this is better explained in the links above, so I will not recap them.

# Company involvement

This point is what made me leave the NixOS community.  I worked for a company called Tweag, involved in Nix for a while and paying people to contribute to Nix and nixpkgs to improve the user experience for their clients.  This made me realize the impact of companies on open source, and the more I got involved, the more I realized that Nix was mostly driven by companies paying developers to improve the tool for business.

Paying people to develop features or fix bugs is fine, but when a huge number of contributors are paid by companies, this leads to poor decisions and conflicts of interest.

In the current situation, Eelco Dolstra published a blog post to remind people that the project is open source and belongs to its contributors.

=> https://determinate.systems/posts/on-community-in-nix/ Eelco Dolstra blog post

The thing in this blog post that puzzles me is that most people at Determinate Systems (the company Eelco co-founded) are deeply involved in Nix in various ways.  In this situation, it is complicated for contributors to separate what they want for the project from what their employer wants.  It is common for Nix contributors to contribute wearing both hats.

# Conclusion

Unfortunately, I am not really surprised this is happening.  When the vast majority of people spend their free time contributing to a project they love while companies relentlessly silence their voices, it just can't work.

I hope the Nix community will be able to sort this out and keep contributing to the project they love.  This is open source and libre software; most of the affected people contribute because they like doing so.  They do not deserve what is happening, but it never came with any guarantees either.

# Extra: Why did I stop using Nix?

I don't think this deserved a dedicated blog post, so here are some words.

From my experience, contributing to Nix was complicated.  Sometimes, changes could be committed in minutes, leaving no time for others to review a change, and sometimes a PR could take months or years because of nitpicking and maintainers losing faith.

Another reason I stopped using Nix is that it is quite easy to get nixpkgs commit access (I do not have commit access myself, I never wanted to inflict the Nix language on myself).  A supply chain attack would be easy to achieve in my opinion: there are so many commits that it is impossible for a trusted group to review everything, and there are too many contributors to be sure they are all trustworthy.

# Alternative to Nix/NixOS?

If you do not like Nix/NixOS governance, it could be time to take a look at Guix, a Nix fork that happened in 2012.  It has a much smaller community than Nix, but the tooling, package set and community are far from at rest.

Guix being a 100% libre software project, it does not target macOS like Nix does, nor will it include/package proprietary software.  However, for the second "problem", there is an unofficial repository called guix-nonfree that contains many packages such as firmware and proprietary software; most users will want to include this repo.

Guix is old school: people exchange over IRC and send git diffs over email; please do not bother them if this is not your cup of tea.  On top of that, Guix uses the programming language Scheme (a Lisp-1 language), and if you want to work with this language, Emacs is your best friend (try geiser mode!).

=> https://guix.gnu.org/ Guix official project webpage
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/nix-internal-crisis.gmi</guid>
  <link>gemini://perso.pw/blog//articles/nix-internal-crisis.gmi</link>
  <pubDate>Sat, 27 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>OpenBSD scripts to convert wg-quick VPN files</title>
  <description>
    <![CDATA[
<pre># Introduction

If you use a commercial VPN, you may have noticed they all provide WireGuard configurations in the wg-quick format, which is not suitable for easy use on OpenBSD.

As I currently work a lot for a VPN provider, I often have to play with configurations and I really needed a script to ease my work.

I made a shell script that turns a wg-quick configuration into a hostname.if compatible file, for a full integration into OpenBSD.  This is practical if you always want to connect to a given VPN server, not for temporary connections.

=> https://man.openbsd.org/hostname.if OpenBSD manual pages: hostname.if
=> https://git.sr.ht/~solene/wg-quick-to-hostname-if Sourcehut project: wg-quick-to-hostname-if

# Usage

It is really easy to use: download the script and mark it executable, then run it with your wg-quick configuration as a parameter; it will print the hostname.if content to the standard output.

wg-quick-to-hostname-if fr-wg-001.conf | doas tee /etc/hostname.wg0


In the generated file, a trick is used to dynamically figure out the current default route, which is required to keep a non-VPN route to the VPN gateway.
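
I will not reproduce the script here, but one plausible way to do this from a hostname.if file is a `!` command line like the following (not necessarily the exact trick used by the script; the VPN endpoint address is a placeholder):

# pin a host route to the VPN endpoint through the current default gateway,
# so the WireGuard traffic itself does not go through the tunnel
!route add -host 203.0.113.10 $(route -n get default | awk '/gateway:/ { print $2 }')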

# Short VPN sessions

When I shared my script on mastodon, Carlos Johnson shared their own script which is pretty cool and complementary to mine.

If you prefer to establish a VPN for a limited session, you may want to take a look at his script.

=> https://gist.github.com/callemo/aea83a8d0e1e09bb0d94ab85dc809675#file-wg-sh Carlos Johnson GitHub: file-wg-sh gist

# Prevent leaks

If you need your WireGuard VPN to be leakproof (= no network traffic should leave the network interface outside the VPN if it's not toward the VPN gateway), you should absolutely do the following:



=> https://dataswamp.org/~solene/2021-10-09-openbsd-wireguard-exit.html Older blog post: WireGuard and rdomains

# Conclusion

OpenBSD's ability to configure WireGuard VPNs with ifconfig has always been an incredible feature, but it was not always fun to convert from wg-quick files.  Now, using a commercial VPN has become a lot easier thanks to a few pieces of shell.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-wg-quick-converter.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-wg-quick-converter.gmi</link>
  <pubDate>Mon, 29 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
  <title>A Stateless Workstation</title>
  <description>
    <![CDATA[
<pre># Introduction

I have always had an interest in practical security on computers, be they workstations or servers.  Many kinds of threats exist for users and system administrators; it is up to them to define a threat model to know what is acceptable or not.  Nowadays, we have plenty of choice in the operating system land to pick what works best for that threat model: OpenBSD with its continuous security mechanisms, Linux with hardened flags (too bad grsec isn't free anymore), Qubes OS to keep everything separated, immutable operating systems like Silverblue or MicroOS (in my opinion they don't bring much to the security table though), etc...

My threat model has always been the following: an exploit on my workstation remaining unnoticed almost forever, stealing data and capturing the keyboard continuously.  This one would be particularly bad because I have access to many servers through SSH, like OpenBSD servers.  Protecting against that is particularly complicated; the best mitigations I have found so far are to use Qubes OS with disposable VMs or to restrict outbound network access, but neither is practical.

My biggest gripe with computers has always been "state".  What is state?  It is what distinguishes one computer from another: installed software, configuration, data at rest (pictures, documents, etc…).  We keep state because we don't want to lose work, and we want our computers to hold our preferences.

But what if I could go stateless?  The best defense against a data stealer is to own nothing, so let's go stateless!

# Going stateless

My idea is to be able to use any computer around, and be able to use it for productive work, but it should always start fresh: stateless.

A stateless productive workstation obviously has challenges: How would it help with regard to security? How would I manage passwords? How would I work on a file over time? How to achieve this?

I have been able to address each of these questions.  I am now using a stateless system.

> States? Where we are going, we don't need states! (certainly Doc Brown in a different timeline)

## Data storage

It is obvious that we need to keep files for most tasks.  This setup requires a way to store files on a remote server.

Here are different methods to store files:



Encryption could be done locally with tools like cryfs or gocryptfs, so only encrypted files would be stored on the remote server.
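
For instance, a minimal gocryptfs sketch, assuming the synced folder is ~/Seafile/vault (the directory names are examples):

gocryptfs -init ~/Seafile/vault       # one-time: create the encrypted directory
gocryptfs ~/Seafile/vault ~/private   # mount the decrypted view on ~/private
fusermount -u ~/private               # unmount; only ciphertext remains in the synced folder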

Nextcloud end-to-end encryption should not be used as of April 2024; it is known to be unreliable.

Seafile, a less known alternative to Nextcloud but focused only on file storage, supports end-to-end encryption and is reliable.  I chose this one as I had a good experience with it 10 years ago.

Having access to the data storage in a stateless environment comes with an issue: getting the credentials to access the files.  Passwords should be handled differently.

## Password management

When going stateless, the first step required after a boot is to access the password manager; otherwise you would be locked out.

The passwords must be reachable from anywhere on the Internet, protected by a passphrase you know and/or a hardware token you have (and why not 2FA).

A self-hosted solution is vaultwarden (it used to be named bitwarden_rs), an open source reimplementation of the Bitwarden server.

Any proprietary service offering password management could work too.

A keepassxc database on a remote storage service for which you know the password could also be used, but it is less practical.

## Security

The main driving force for this project is to increase my workstation's security, so I had to think hard about this part.

Going stateless requires a few changes compared to a regular workstation:



This is mostly a paradigm change with pros and cons compared to a regular workstation.

Data and passwords stored in the cloud?  This is not really an issue when using end-to-end encryption, as long as the software is trustworthy and its code is correct.

A bootable live operating system is quite simple to acquire.  There is a ton of choice among Linux distributions able to boot from a CD or from USB, and non-Linux live systems exist too.  A bootable USB device could be compromised while a CD is an immutable medium, but there are USB devices such as the Kanguru FlashBlu30 with a physical switch to make the device read-only.  A USB device can also be removed immediately after boot, making it safe.  And if you stop trusting a USB device, just buy a new USB memory stick and resilver it.

=> https://www.kanguru.com/products/kanguru-flashblu30-usb3-flash-drive Product page: Kanguru FlashBlu30

As for installed programs, they are fine as long as they are packaged and signed by the distribution; the risks are the same as for a regular workstation.

The system should be more secure than a typical workstation because:



The system would be less secure than a typical workstation because:



To circumvent this, I only have the password manager service reachable from the Internet, which then allows me to create a VPN to reach all my other services.

## Ecology

I think it is a dimension that deserves to be analyzed for such a setup.  A stateless system requires remote servers to run, and uses bandwidth to reinstall programs at each boot.  It is less ecological than a regular workstation, but at the same time it may also encourage some rationalization of computer usage because it is a bit less practical.

## State of the art

Here is a list of setups that already exist which could provide a stateless experience, with support for either a custom configuration or a mechanism to store files (like SSH or GPG keys, though a USB smart card would be better for those):



Otherwise, any live system could just work.

Special bonus to NixOS and Guix generated live CDs, as you can choose which software will be in there, in the latest version.  Similar bonus with Alpine and LBU: packages are always installed from a local cache, which means you can update them.

A live-cd generated a few months ago is certainly not really up to date.

# My experience

I decided to go with Alpine and its LBU mechanism; it is not 100% stateless, but it hits the perfect spot between "I have to bootstrap everything from scratch" and "I can reduce the burden to a minimum".

=> https://dataswamp.org/~solene/2023-07-14-alpine-linux-from-ram-but-persistent.html Earlier blog post: Alpine Linux from RAM but persistent

My setup requires two USB memory sticks:



While it is not 100% stateless, the files on the second memory stick are just a way to have a working customized Alpine.

This is a pretty cool setup: it boots really fast as all the packages are already cached on the second memory stick (packages are signed, so it is safe).  I made a Firefox profile with settings and extensions, so it is always fresh and ready when I boot.

I decided to go with the following stack, entirely self-hosted:



This setup offered me freedom.  Now, I can bootstrap into my files and passwords from any computer (a trustable USB memory stick is advisable though!).

I can also boot any kind of operating system on any of my computers; it became so easy it is refreshing.

I do not make use of dotfiles or stored configurations because I use vanilla settings for most programs; a git repository could be used to fetch all settings quickly though.

=> https://github.com/dani-garcia/vaultwarden Vaultwarden official project website
=> https://www.seafile.com/en/home/ Seafile official project website
=> https://nextcloud.com/ Nextcloud official project website
=> https://kanboard.org/ Kanboard official project website
=> https://github.com/sissbruecker/linkding Linkding official project website

# Backups

A tricky part of this setup is doing serious backups.  The method will depend on the setup you chose.

With my self-hosted stack, restic makes a daily backup to two remote locations, but I must still be able to reach the backups if my services are unavailable due to a server failure.

If you use proprietary services, they likely handle backups, but it is better not to trust them blindly and to check out all your data on a regular schedule to make a proper backup of your own.

# Conclusion

This is an interesting approach to workstation management that I needed to try.  I really like how it freed me from worrying about each workstation; they are now all disposable.

I made a mind map for this project; you can view it below, it may be useful to better understand how everything fits together.

=> static/stateless_computing-fs8.png Stateless computing mind mapping document
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/workstation-going-stateless.gmi</guid>
  <link>gemini://perso.pw/blog//articles/workstation-going-stateless.gmi</link>
  <pubDate>Tue, 23 Apr 2024 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>