
<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>gemini://perso.pw/blog/</link>
    <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>Qubes OS backup transfer from old to new computer</title>
  <description>
    <![CDATA[
<pre># Introduction

With the recent release of Qubes OS 4.2, I took the opportunity to migrate to a newer laptop (from a Thinkpad T470 to a NovaCustom NV41) so I had to backup all the qubes from the T470 and restore them on the NV41.

The fastest way to proceed is to create the backups on the new laptop directly from the old one, which is quite complicated to achieve due to Qubes OS compartmentalization.

In this guide, I'll share how I created a qube with a network file server to allow one laptop to send the backups to the new laptop.

=> https://qubes-os.org Qubes OS official project website

Of course, this whole process could be avoided by using a NAS or external storage, but they are in my opinion slower than directly transferring the files on the new machine, and you may not want to leave any trace of your backups.

# Explanation about the setup

As the new laptop has a very fast NVMe disk, I thought it would be nice to use it for saving the backups: it offloads a bit of disk activity from the laptop doing the backups, and the restore process shouldn't be slowed down much even if the disk has to write and read the backups at the same time.

The setup consists of creating a dedicated qube on the new laptop offering an NFSv4 share, setting up the routing at the different levels, and mounting this share in a qube on the old laptop, so the backup can be saved there.

I used a direct Ethernet connection between the two computers, as it spares me from thinking too much about NFS security.

# Preparing the backup receiver

## Storage qube configuration

On the new laptop, create a standalone qube with the name of your choice (I'll refer to it as `nfs`), the following commands have been tested with the fedora-38-xfce template. Make sure to give it enough storage space for the backup.
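
If you prefer the command line, the qube can also be created from dom0 with something like the following (a sketch: pick your own label and adjust the size to your backups):

qvm-create --class StandaloneVM --template fedora-38-xfce --label blue nfs
qvm-volume resize nfs:private 100GiB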

First, we need to configure the NFS server by installing the related package:

$ sudo dnf install nfs-utils


After this, edit the file `/etc/exports` to export the path `/home/user/backup` to other computers, using the following content:

/home/user/backup *(rw,sync)


Create the directory we want to export, and make `user` the owner of it:

install -d -o user /home/user/backup


Now, enable and start the NFS server so it runs immediately and at boot time:

systemctl enable --now nfs-server


You can verify the service started successfully by using the command `systemctl status nfs-server`.

You can also check that the different components of the NFS server are running correctly.
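
For instance, listing the active exports and the registered RPC services are two typical checks; if both commands produce output, the server side should be working:

exportfs -v
rpcinfo -p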



Allow the NFS server at the firewall level, run the following commands AND add them at the end of `/rw/config/rc.local`:

nft add rule qubes custom-input tcp dport 2049 accept

nft add rule qubes custom-input udp dport 111 accept


## Route the service from the physical LAN

Now that the service is running within the qube, we need to allow the remote computer to reach it.  We will make sys-net NAT the UDP port 111 and the TCP port 2049 to sys-firewall, which will in turn NAT them to the nfs qube, which already accepts connections on those ports.  By default, the network looks like this:

                          +------------------------------------------------+
+--------+                |               DESTINATION SYSTEM               |
| SOURCE |    ethernet    | +---------+     +--------------+     +-----+   |
| SYSTEM | <------------> | | sys-net | --> | sys-firewall | --> | nfs |   |
+--------+                | +---------+     +--------------+     +-----+   |
                          +------------------------------------------------+


### sys-net routing

Write the following script inside the `sys-net` qube of the destination system, make sure to update the value of the variable `DESTINATION` with `sys-firewall`'s IP address, it can be found by looking at the qube settings.

#!/bin/sh

PORT=111
DESTINATION=10.138.31.246

if ! nft -nn list table ip qubes | grep "chain nat {" ; then
    nft add chain qubes nat { type nat hook prerouting priority dstnat\; }
fi

nft add rule qubes custom-input udp dport "${PORT}" accept
nft add rule qubes custom-forward udp dport "${PORT}" accept
nft add rule qubes nat iifname != "vif*" udp dport "${PORT}" dnat to "${DESTINATION}"

PORT=2049
nft add rule qubes custom-input tcp dport "${PORT}" accept
nft add rule qubes custom-forward tcp dport "${PORT}" accept
nft add rule qubes nat iifname != "vif*" tcp dport "${PORT}" dnat to "${DESTINATION}"


Make the script executable by running `chmod +x` on the script file. You will execute it later, once the network is safe.

### sys-firewall routing

Write the following script inside the `sys-firewall` qube of the destination system, make sure to update the value of the variable `DESTINATION` with `nfs`'s IP address, it can be found by looking at the qube settings.

#!/bin/sh

PORT=111
DESTINATION=10.137.0.10

if ! nft -nn list table ip qubes | grep "chain nat {" ; then
    nft add chain qubes nat { type nat hook prerouting priority dstnat\; }
fi

nft add rule qubes custom-input udp dport "${PORT}" accept
nft add rule qubes custom-forward udp dport "${PORT}" accept
nft add rule qubes nat iifname != "vif*" udp dport "${PORT}" dnat to "${DESTINATION}"

PORT=2049
nft add rule qubes custom-input tcp dport "${PORT}" accept
nft add rule qubes custom-forward tcp dport "${PORT}" accept
nft add rule qubes nat iifname != "vif*" tcp dport "${PORT}" dnat to "${DESTINATION}"


Make the script executable by running `chmod +x` on the script file. You will execute it later, once the network is safe.

# Backup process

On the source system, we need a running qube that will mount the remote NFS server; this can be a disposable qube, an AppVM qube with temporary changes, a standalone qube, etc.

## Mounting qube

On the mounting qube, run the following command to install the NFS tools we need:

dnf install nfs-utils


## Configure both systems network

In this step, you need to configure the network over the direct Ethernet cable so the two systems can talk to each other. Please disconnect from any Wi-Fi network, as no access restriction was set up for the file transfer (the backup itself is encrypted, but still).

You can choose any address as long as the two hosts are in the same subnet, an easy pick could be `192.168.0.2` for the source system, and `192.168.0.3` for the new system.
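
For example, assuming the wired interface shows up as `eth0` in each system's `sys-net` qube (check with `ip link`, or use the Network Manager applet instead), you could assign the addresses by hand:

# in the source system's sys-net, as root
ip address add 192.168.0.2/24 dev eth0
# in the destination system's sys-net, as root
ip address add 192.168.0.3/24 dev eth0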

Now, both systems should be able to ping each other, it's time to execute the scripts in `sys-firewall` and `sys-net` to enable the routing.

On the "mounting" qube, run the following command as root to mount the remote file system:

mount.nfs4 192.168.0.3:/home/user/backup /mnt


You can verify it worked if the output of `df` shows a line starting with `192.168.0.3:/home/user/backup`, and you can ensure your user can actually write in this remote directory by running `touch /mnt/test` as the regular user `user`.

Now, we can start the backup tool to send the backup to the remote storage.

## Run the backup

In the source system dom0, run the Qubes OS backup tool, choose the qubes you want to transfer, uncheck "Compress backups" (except if you are tight on storage for the new system) and click on "Next".

In the field "Target qube", select the "mounting qube" and set the path to `/mnt/`, choose an encryption passphrase and run the backup.

If everything goes well, you should see a new file named `qubes-backup-YYYY-MM-DDThhmmss` in the directory `/home/user/backup/` of the `nfs` qube.

## Restore the backups

In the destination system dom0, you can run the Restore backup tool to restore all the qubes. If the old `sys-net` and `sys-firewall` are worth keeping, you may want to delete the new system's ones first, otherwise the restored qubes will be renamed.

## How to restore dom0 $home

When you backup and restore dom0, only the directory `/home/` is part of the backup, so it's only about the desktop settings themselves and not the Qubes OS system configuration. I actually use versioned files in the salt directories to have reproducible Qubes OS machines because the backups aren't enough.

=> https://dataswamp.org/~solene/2023-06-17-qubes-os-git-bundle.html Blog post: Using git bundle to synchronize a repository between Qubes OS dom0 and an AppVM
=> https://dataswamp.org/~solene/2023-06-04-qubes-os-version-control-dom0.html Blog post: Qubes OS dom0 files workflow using fossil

When you restore dom0, it creates a directory `/home/solene/home-restore-YYYY-MM-DDThhmmss` on the new dom0 that contains the previous `/home/` directory.

Restoring this directory verbatim requires some clever trick as you should not be logged in for the operation!
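
For illustration only, here is one possible approach (an assumption on my part, not necessarily the method from the original post): switch to a virtual console (Ctrl+Alt+F2), log in as root while the graphical session is closed, and copy the restored files over the home directory:

# adjust the date part and the inner layout to what the restore actually created
cd /home/solene/home-restore-YYYY-MM-DDThhmmss
cp -a home/solene/. /home/solene/
chown -R solene:solene /home/solene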



Your desktop environment should be like you left it during the backup. If you used some specific packages or desktop environment, make sure you also install the corresponding packages in the new dom0.

# Cleaning up

After you restored your backups, you can remove the scripts in `sys-firewall` and `sys-net` and even delete the nfs qube.

# Conclusion

Moving my backups from the old system to the new one was pretty straightforward once the NFS server was set up; I was quickly able to have a new working computer that looked identical to the previous one, ready to be used.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/qubes-os-migrate-vm-between-computers.gmi</guid>
  <link>gemini://perso.pw/blog//articles/qubes-os-migrate-vm-between-computers.gmi</link>
  <pubDate>Wed, 27 Dec 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>OpenBSD in a CI environment with sourcehut</title>
  <description>
    <![CDATA[
<pre># Introduction

If you ever needed continuous integration pipelines to run some actions in an OpenBSD environment, you certainly figured out that most Git forges don't provide OpenBSD as a host environment for their CI.

It turns out that sourcehut is offering many environments, and OpenBSD is one among them, but you can also find Guix, NixOS, NetBSD, FreeBSD or even 9front!

Let's see how this works.

=> https://sourcehut.org/ sourcehut official website
=> https://man.sr.ht/builds.sr.ht/compatibility.md sourcehut: Documentation about host systems offering in CI

Note that the CI is only available to paid accounts, the minimal fee being "$2/month or $20/year".  There are no tiers, so as long as you pay something you have a paid account.  Because sourcehut offers a clutter-free web interface and develops an open source product that is also capable of running OpenBSD in a CI environment, I decided to support them (I really rarely subscribe to any kind of service).

PS: sourcehut supports Mercurial projects too.

# The CI

Upon each CI trigger, a new VM is created; it's possible to define the operating system and version you want for the environment, and then what to do in it.

The CI works when your project contains a "manifest" file at the path `.build.yml` at its root; it contains all the information about what to do.

=> https://man.sr.ht/builds.sr.ht/ sourcehut: Documentation about manifests and builds

# Secret management

When you run code in a CI, you often need secrets, and most often you require SSH keys if you want to push artefacts.

The SSH key secret is simplified: if sourcehut recognizes a secret to be a private SSH key, it will automatically save it in the right place.

=> https://man.sr.ht/builds.sr.ht/#secrets sourcehut: Documentation about secrets in CI

# Example

Here is a simple example of a manifest file I use to build a website using the static generator hugo, and then push the result on a remote server.

image: openbsd/latest
packages:
- hugo--
- rsync--
secrets:
- f20c67ec-64c2-46a2-a308-6ad929c5d2e7
sources:
- git@git.sr.ht:~solene/my-project
tasks:
- init: |
    cd my-project
    git clone https://github.com/adityatelange/hugo-PaperMod themes/PaperMod --depth=1
- build: |
    cd my-project
    echo 'web.perso.pw ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRj0NK7ZPMQgkgqw8V4JUcoT4GP6CIS2kjutB6xdR1P' | tee -a ~/.ssh/known_hosts
    make


In the example above, we can notice the different parts of a manifest:

* image: the operating system and version used to create the build VM
* packages: the list of packages to install in the environment
* secrets: the secrets (referenced by their UUID, here an SSH private key) copied into the environment
* sources: the git repositories cloned into the environment
* tasks: the named scripts run in order to do the actual job



If you use SSH, don't forget to either use `ssh-keyscan` to generate the content for `~/.ssh/known_hosts`, or add the known host key manually like I did, which would require an update if the SSH host key changes.
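
For example, the `ssh-keyscan` variant could look like this (using the same host as in the manifest above):

ssh-keyscan -t ed25519 web.perso.pw >> ~/.ssh/known_hosts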

A cool thing: when your CI job fails, the environment will continue to live for at least 10 minutes while offering SSH access for debugging purposes.

=> https://man.sr.ht/builds.sr.ht/build-ssh.md sourcehut: Documentation about SSH into build environments

# Conclusion

I finally found a Git forge that is ethical and supportive of niche operating systems.  Its interface may feel spartan, with fewer features, but it loads faster and is easier to understand.  The price ($20/year) is higher than the competition (GitHub or GitLab), which can be used for free (up to some point), but they don't offer this choice of CI environments nor the elegant workflow sourcehut has.

# Going further

You can self-host a sourcehut instance if you prefer, it's open source and packaged for some Linux distributions.

=> https://man.sr.ht/installation.md sourcehut: Documentation about the deployment process
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/sourcehut-and-openbsd-ci.gmi</guid>
  <link>gemini://perso.pw/blog//articles/sourcehut-and-openbsd-ci.gmi</link>
  <pubDate>Wed, 06 Dec 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Run your own Syncthing relay server on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

In earlier blog posts, I covered the program Syncthing and its features, then how to self-host a discovery server.  I'll finish the series with the syncthing relay server.

The Syncthing relay is the component that receives files from one peer and transmits them to the other when two peers can't establish a direct connection; by default, Syncthing uses its huge worldwide community pool of relays.  However, while the data is encrypted, this leaks some information, and some relays may be malicious and store files until it becomes possible to make use of the content (a weakness in the encryption algorithm, more powerful computers, etc.).

Running your own Syncthing relay server will allow you to secure the whole synchronization between peers.

=> https://relays.syncthing.net/
=> https://docs.syncthing.net/users/strelaysrv.html Syncthing official documentation: relay server

Related blog posts

=> https://dataswamp.org/~solene/2023-10-04-potw-syncthing.html Presenting Syncthing features
=> https://dataswamp.org/~solene/2023-10-18-syncthing-discovery-server.html Blog post about the complementary discovery server

A simple use case for a relay: you have Syncthing configured between a smartphone on its WAN connection and a computer behind a NAT; it's unlikely they will be able to communicate with each other directly, so they will need a relay to synchronize.

# Setup

On OpenBSD, you will need the binary `strelaysrv` provided by the package `syncthing`.

pkg_add syncthing


There is no rc file to start the relay as a service on OpenBSD 7.3; I added one to -current, and it will be available from OpenBSD 7.5.  Create an rc file `/etc/rc.d/syncthing_relay` with the following content:

#!/bin/ksh

daemon="/usr/local/bin/strelaysrv"
daemon_flags="-pools=''"
daemon_user="_syncthing"

. /etc/rc.d/rc.subr

rc_bg=YES
rc_reload=NO

rc_cmd $1


The special flag `-pools=''` is there to NOT join the community pool.  If you want to contribute to the pool, remove this flag.

There is nothing else to configure, except enabling the service at boot and running it; the only extra step is to retrieve a piece of information from its runtime output:

rcctl enable syncthing_relay

rcctl start -d syncthing_relay


In the output, you will have a line looking like this:

2023/11/02 11:07:25 main.go:259: URI: relay://0.0.0.0:22067/?id=SCRGZW4-AAGJH36-M71EAPW-6XK7NXA-5CC1C4R-R2TKL2F-FNFF2OW-ZWA6WK5&networkTimeout=2m0s&pingInterval=1m0s&statusAddr=%3A22070


You need to note down the displayed URI: this is your relay address, just replace `0.0.0.0` with the actual server IP.

# Firewall setup

You need to open the port TCP/22067 for the relay to work.  In addition, you can open the port TCP/22070, which can be used to display a JSON document with statistics.
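
With OpenBSD's pf, a minimal sketch of matching rules could look like this (adapt it to your existing pf.conf):

pass in proto tcp to port 22067   # relay traffic
pass in proto tcp to port 22070   # optional status/statistics page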

To reach the status page, you need to visit the page `http://$SERVER_IP:22070/status`

# Client configuration

On the client Web GUI, click on "Actions" and "Settings" to open the settings panel.

In the "Connections tab", you need to enter the relay URI in the first field "Sync Protocol Listen Addresses", you can add it after `default` by separating the two values with a comma, that would add your own relay in addition to the community pool.  You could entirely replace the value with the relay URI, in such situation, all peers must use the same relay, if they need a relay.

Don't forget to check the option "Enable relaying", otherwise the relay won't be used.

# Conclusion

Syncthing is highly modular: it's pretty cool to be able to self-host all of its components separately.  In addition, it's also easy to contribute to the community pool if one decides to.

My relay is set up within a VPN where all my networks are connected, so my data are never leaving the VPN.

# Going further

It's possible to use a shared passphrase to authenticate with the remote relay; this can be useful when the relay is on a public IP, but you only want the nodes holding the shared secret to be able to use it.

=> https://docs.syncthing.net/users/strelaysrv.html#access-control-for-private-relays Syncthing relay server documentation: Access control for private relays
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/syncthing-relay-server.gmi</guid>
  <link>gemini://perso.pw/blog//articles/syncthing-relay-server.gmi</link>
  <pubDate>Mon, 06 Nov 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Read quoted-printable emails with qprint</title>
  <description>
    <![CDATA[
<pre># Introduction

You may already have encountered emails in raw text that contained weird character sequences like `=E3` or `=09`, especially if you work with patch files embedded as text in emails.

There is nothing wrong with the text itself, or with the sender's email client.  In fact, this shows the email client is doing the right thing by applying RFC 1521: non-ASCII characters should be escaped in some way in emails.

=> https://www.rfc-editor.org/rfc/rfc1521 RFC 1521: MIME part one

This is where qprint enters into action: it can encode content into quoted-printable, or decode such content.  The software can be installed on OpenBSD with the package named `qprint`.

=> https://www.fourmilab.ch/webtools/qprint/ qprint official website

I already introduced qprint in a blog post in a guide about OpenBSD pledge.

# What does quoted-printable look like?

If you search for an email from the OpenBSD mailing list and display it in raw format, you may encounter this encoding.  There isn't much you can do with the file as is: it's hard to read, and it can't be used with the program patch.

=> https://marc.info/?l=openbsd-ports&m=169833007120486&q=raw Email example featuring quoted-printable characters

A sample of the email looks like that:

From italiano-=E6=97=A5=E6=9C=AC=E8=AA=9E (=E3=81=AB=E3=81=BB=E3=82=93=
=E3=81=94) FreeDict+WikDict dictionary ver.
2022.11.18 [itajpn]:
=09
ciao //'=CA=A7ao// <interjection>
=E3=81=93=E3=82=93=E3=81=AB=E3=81=A1=E3=81=AF
=09


If you pipe this content through the command `qprint -d`, you will obtain a much more interesting text:

From italiano-日本語 (にほんご) FreeDict+WikDict dictionary ver.
2022.11.18 [itajpn]:

ciao //'ʧao// <interjection>
こんにちは


There is little need to encode content with qprint yourself, but it can do that as well.
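
For example, to decode a raw email saved in a file, using the same `-d` flag as above:

qprint -d < raw-email.txt > decoded.txt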

# Conclusion

If you ever encounter this kind of encoding, now you should be able to figure out what it is, and how to read it.

Qprint may not be available on all systems, but compiling it is quite easy, as long as you have a C compiler and make installed.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/potw-qprint.gmi</guid>
  <link>gemini://perso.pw/blog//articles/potw-qprint.gmi</link>
  <pubDate>Mon, 30 Oct 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Run your own Syncthing discovery server on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

In a previous article, I covered the software Syncthing and mentioned a specific feature named "discovery server".

The discovery server is used to help clients find and connect to each other through NATs; this is NOT a relay server (which is a different service acting as a proxy between clients).

A motivation to run your own discovery server(s) would be for security, privacy or performance reasons.



Let's see how to install your own Syncthing discovery daemon on OpenBSD.

=> https://docs.syncthing.net/users/stdiscosrv.html Syncthing discovery daemon documentation

Related blog posts

=> https://dataswamp.org/~solene/2023-10-04-potw-syncthing.html Presenting Syncthing features
=> https://dataswamp.org/~solene/2023-11-03-syncthing-relay-server.html Blog post about the complementary Relay server

# Setup

On OpenBSD, the binary we need is provided by the syncthing package.

pkg_add syncthing


The discovery service is provided by the binary `stdiscosrv`; you need to create a service file to enable it at boot.  We can use the syncthing service file as a template for the new one.  In OpenBSD-current, and from OpenBSD 7.5 onward, the rc file is installed with the package.

sed '/^daemon=/ s/syncthing/stdiscosrv/ ; /flags/ s/".*"/""/' /etc/rc.d/syncthing > /etc/rc.d/syncthing_discovery
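
The generated file should look roughly like this (a sketch; the exact content depends on the installed syncthing rc file):

#!/bin/ksh

daemon="/usr/local/bin/stdiscosrv"
daemon_flags=""
daemon_user="_syncthing"

. /etc/rc.d/rc.subr

rc_bg=YES
rc_reload=NO

rc_cmd $1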


This creates a service named `syncthing_discovery`; it's time to enable and start it.

rcctl enable syncthing_discovery


You need to retrieve the line "Server device ID is XXXX-XXXX......" from the output; keep the ID (the XXXX-XXXX-XXXX-XXXX part) because we will need it later.  We will start the service in debug mode to display the binary's output in the terminal.

rcctl -d start syncthing_discovery


Make sure your firewall is correctly configured to let pass incoming connections on port TCP/8443 used by the discovery daemon.

# Client configuration

On the client Web GUI, click on "Actions" and "Settings" to open the settings panel.

In the "Connections tab", you need to change the value of "Global Discovery servers" from "Default" to `https://IP:8443/?id=ID` where IP is the IP address where the discovery daemon is running, and ID is the value retrieved at the previous step when running the daemon.

Depending on your use case, you may want to have the global discovery server plus yours; it's possible to use multiple servers, in which case you would use the value `default,https://IP:8443/?id=ID`.

# Conclusion

If you replace the default discovery server with your own, make sure all the peers can reach it, otherwise your Syncthing clients may not be able to connect to each other.

# Going further

By default, the discovery daemon will generate a self-signed certificate; you could use a Let's Encrypt certificate if you prefer.

There are some other options, like a Prometheus exporter to get metrics or changing the listening port; you will find all the extra options in the documentation / man page.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/syncthing-discovery-server.gmi</guid>
  <link>gemini://perso.pw/blog//articles/syncthing-discovery-server.gmi</link>
  <pubDate>Sat, 21 Oct 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Port of the Week: Presenting Syncthing</title>
  <description>
    <![CDATA[
<pre># Introduction

Today's "port of the week" article is featuring Syncthing, a file synchronization software.

=> https://syncthing.net/ Syncthing official project website

Related blog posts:

=> https://dataswamp.org/~solene/2023-11-03-syncthing-relay-server.html Blog post about the complementary Relay server
=> https://dataswamp.org/~solene/2023-10-18-syncthing-discovery-server.html Blog post about the complementary discovery server

# Quick intro

As stated earlier, Syncthing is a network daemon that synchronizes files between computers/phones.  Each Syncthing instance must know the other instances' IDs to trust them and find them over the network.  The transfers are encrypted and efficient, and the storage itself can be encrypted.

Some Syncthing vocabulary:



# Interesting features

I gathered a list of Syncthing features that you may find interesting.

## Security: authentication and encryption

When you need to add a new remote, you add the remote's ID on one Syncthing instance and trust that instance on the remote one.  The ID is a human-readable representation of the Syncthing instance's certificate fingerprint.  When you exchange IDs, you are basically asked to review each certificate and allow each instance to trust the other.

All network transfers occurring between two Syncthing instances are encrypted using TLS; as the remote certificate can be checked, the incoming data can be verified for integrity and authenticated.

=> https://docs.syncthing.net/users/security.html Syncthing official documentation about security principles in the software

## Relaying

I guess this is Syncthing's killer feature.  Connecting two remotes is very easy, and file transfer between them can bypass firewalls and NATs.

This works because Syncthing offers a default discovery server, which has two purposes:



The file transfer is still encrypted, but having a third-party server involved may raise privacy issues, and security risks if a vulnerability can be exploited.

My next blog post will show how to self-host your own Syncthing relay, for better privacy and even more complicated setups!

Note that the discovery server or the relaying can be disabled!  You could also build a mesh VPN and run Syncthing on each node without using any relay or discovery server.

## Built-in file versioning

This may be my preferred feature in Syncthing!

On a given Syncthing instance, you can enable a retention policy per shared folder, called file versioning in the interface.

Basically, if a file is modified / removed in the share by a remote, the local instance can keep a hidden copy for a while.

There are different versioning modes, from a simple "trash bin" style keeping the files for n days, to more elaborate policies like you could have in backup tools.

=> https://docs.syncthing.net/users/versioning.html Syncthing official documentation about file versioning

## Partial share synchronization

For each share, it's possible to write an exclusion filter, this allows you to either discard sync changes for some pattern (like excluding vim swap files) or entire directories if you don't want to retrieve all the shared folder.

The filter works both ways: if you accept a remote, you can write a filter before starting the synchronization and exclude some huge directories you may not want locally.  It also allows you to prevent a directory from being sent to the remotes, like a temporary directory for instance.

This is a topic I covered with a very specific use case: syncing only a single file in a directory.

=> https://dataswamp.org/~solene/2023-01-28-syncthing-single-file.html Earlier blog post: Configure Syncthing to sync a single file

## Encrypted remotes

A pretty cool feature I found recently was the support for encrypted shared folders per remote.  I'm using syncthing to keep my KeepassXC databases synchronized between my computers.

As I don't always have at least two of my computers turned on at the same time, they can't always synchronize directly with each other, so I use a remote dedicated server as a buffer to hold the files.  Syncthing encryption is activated for this remote: both my computers can exchange data with it, but on the server itself you can't get my KeepassXC databases.

This is also pretty cool as it doesn't leave any readable data on the storage drive if you use 3rd party systems.

Taking the opportunity here, KeepassXC has a cool feature that allows you to add a binary file as a key in addition to a password / FIDO key.  If this binary file isn't part of the synchronized directory, even someone who could access your KeepassXC database and steal your password shouldn't be able to use it.

## Data chunk based

When Syncthing scans a directory, it will hash all the files into chunks and synchronize these chunks with the other remotes; this is basically how BitTorrent works too.

This may sound boring, but basically it allows Syncthing to move or rename files on a remote instead of transferring the data again when you rename / move files in a local shared directory.  Indeed, only the list of changed paths and the chunks used in the files are sent; as the chunks already exist on the remote, the data doesn't have to be transferred again.

Note that this doesn't work for encrypted remotes, as the chunks contain some path information; once encrypted, the same file with different paths will look like two different sets of encrypted chunks.

## Bandwidth control

The Syncthing GUI allows you to define inbound or outbound bandwidth limitations, either globally or per remote.  If, like me, you have a slow ADSL line with slow upload, you may want to limit the bandwidth used to send data to the non-local remotes.

## Support for all attributes synchronization

This may sound more niche, but it's important for some users: Syncthing can synchronize file permissions, ownership or even extended attributes.  This is not enabled by default as Syncthing requires elevated privileges (typically running as root) to make it work.

## Runs everywhere

Syncthing is a Go program: a small binary with no dependencies, quite portable, that runs on Linux, all the BSDs, Android, Windows, macOS, etc.  There is nothing worse than a synchronization utility that can't be installed on a specific computer...

# Conclusion

I really love this software, especially since I figured out the file versioning and the encrypted remotes; now I don't fear conflicts or lost files anymore when syncing my files between computers.

My computers also use a local discovery server that allows my Qubes OS qubes to be kept in sync with each other over the LAN.

# Note for SystemD users

When you install Syncthing on your system, you can enable the service as your user, this will make Syncthing start properly when you log in with your user:

systemctl enable --user syncthing.service


# Note for OpenBSD users

Syncthing has to listen for each file change, so you will need to increase the maximum open files limit for your user, and maybe the kernel limit using the corresponding sysctl.
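
As a rough sketch (example values to adapt to the number of files you synchronize): raise the per-process limit for your session and, if needed, the kernel-wide limit; for a permanent per-user change, raise `openfiles-cur` and `openfiles-max` in your login class in `/etc/login.conf`.

# current shell session only (example value)
ulimit -n 10240
# kernel-wide limit, as root (example value)
sysctl kern.maxfiles=102400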

You can find more detailed information about using Syncthing on OpenBSD in the file `/usr/local/share/doc/pkg-readmes/syncthing`.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/potw-syncthing.gmi</guid>
  <link>gemini://perso.pw/blog//articles/potw-syncthing.gmi</link>
  <pubDate>Sat, 07 Oct 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Introduction to the OpenBSD operating system</title>
  <description>
    <![CDATA[
<pre># Introduction

I often see a lot of confusion with regard to OpenBSD: it is either assimilated to a Linux distribution or mixed up with FreeBSD.

Let's be clear: OpenBSD is a standalone operating system.  It started as a fork of NetBSD in 1994; there isn't much in common between the two nowadays.

While OpenBSD and the other BSDs are independent projects, they share some very old roots in their core, and regularly see source code changes in one being imported into another, though this really represents a very small amount of the daily code changes.

# OpenBSD features in 60 seconds

Let's do it quickly: what can you find in OpenBSD?



It's used with success on workstations, either for personal or professional use.  It's also widely used as a server, whether for network services or just for routing/filtering the network!

=> https://www.openbsd.org/innovations.html All the innovations that happened in OpenBSD

# Give it a try?

## On a Live-CD

If you have never used OpenBSD, you can easily give it a try using the community-made LiveCD/LiveUSB FuguIta!

=> https://fuguita.org/ FuguIta project page
=> https://dataswamp.org/~solene/2020-11-18-fuguita.html Older blog page about FuguIta

## In a virtual machine

Another way to easily try OpenBSD is to run it in a virtual machine.

=> https://cdn.openbsd.org/pub/OpenBSD/7.3/amd64/INSTALL.amd64 Complete installation guide of OpenBSD

Please note that the VirtualBox additions are not available as their drivers never got written for OpenBSD.

## On a real system

You can install OpenBSD on your system, or on a spare computer you don't use anymore.  You need at least 48 MB of memory for it to work, and many architectures are supported, like arm64, amd64, i386, sparc64, powerpc, riscv...

=> https://cdn.openbsd.org/pub/OpenBSD/7.3/amd64/INSTALL.amd64 Complete installation guide of OpenBSD

## On a VPS

You can rent an OpenBSD VM on OpenBSD Amsterdam, a company doing OpenBSD hosting on OpenBSD servers using the OpenBSD hypervisor!  And they give money to the OpenBSD project for each VM they host!

=> https://openbsd.amsterdam/ OpenBSD Amsterdam hosting

# Installing GNOME

I made a tutorial showing how to install GNOME, it's fairly easy!

=> https://dataswamp.org/~solene/2022-04-23-openbsd-video-tutorial-installation.html How to install GNOME on OpenBSD (video tutorial)

# We play video games on OpenBSD!

This is actually possible, and always by running native code to play the games.

=> https://videos.pair2jeux.tube/c/openbsd_gaming/videos OpenBSD Gaming video channel (peertube)
=> https://playonbsd.com/games/ PlayOnBSD Games compatibility list
=> https://www.reddit.com/r/openbsd_gaming/ OpenBSD_gaming subreddit community

# Going further

=> https://www.openbsd.org The OpenBSD project website
=> https://en.wikipedia.org/wiki/OpenBSD OpenBSD on Wikipedia
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/octopenbsd-2023-openbsd-intro.gmi</guid>
  <link>gemini://perso.pw/blog//articles/octopenbsd-2023-openbsd-intro.gmi</link>
  <pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>This is OctOpenBSD month</title>
  <description>
    <![CDATA[
<pre># Introduction

We are in October 2023: let's celebrate the first OctOpenBSD event, the month when OpenBSD users show the world that our favorite operating system is still relevant.

The event will run from the 1st of October to the 31st of October.  A surprise will be revealed in the OpenBSD Webzine on the last day!

=> https://webzine.puffy.cafe The OpenBSD Webzine website

=> static/octopenbsd-2023.png A Puffy telling the hacker girl that sometimes we need to take a break
=> https://merveilles.town/@prahou Artwork by Prahou, the unix_surrealism artist

# What to do in OctOpenBSD?

There is a lot you can do!  Just small things that, accumulated as a community, will turn this into a great community event!

To contribute to OctOpenBSD, you can:



Let's celebrate!

# FAQ

If you have any questions about the event, I'll answer them and gather the Q&A in this section.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/octopenbsd-2023.gmi</guid>
  <link>gemini://perso.pw/blog//articles/octopenbsd-2023.gmi</link>
  <pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Firefox hardening with Arkenfox</title>
  <description>
    <![CDATA[
<pre># Introduction

Dear Firefox users, what if I told you it's possible to harden Firefox by changing a lot of settings?  Something really boring to explain and hard to reproduce on every computer.  Fortunately, someone did the job of automating all of that under the name Arkenfox.

Arkenfox's design is simple: it's a Firefox configuration file (more precisely a `user.js` file) that you drop in your profile directory to override many Firefox defaults with a lot of curated settings to harden privacy and security.  Cherry on the cake, it features an updater and a way to override some of its values with a user-defined file.

This makes Arkenfox easy to use on any system (including Windows), but also easy to tweak or distribute across multiple computers.

=> https://github.com/arkenfox/user.js Arkenfox user.js GitHub project page
=> https://github.com/arkenfox/user.js/wiki Arkenfox user.js Documentation

# Setup

The official documentation contains more information, but basically the steps are the following:

1. find your Firefox profile directory: open `about:support` and search for an entry named "profile directory"
2. download the latest Arkenfox user.js release archive
3. if the profile is not new, there is an extra step to clean it using `scratchpad-scripts/arkenfox-cleanup.js`, which contains instructions at the top of the file
4. save the file `user.js` in the profile directory
5. add `update.sh` to the profile directory, so you can update `user.js` easily later
6. create `user-overrides.js` in the profile directory if you want to override some settings and keep them; the updater is required for the override

# Configuration

Basically, Arkenfox disables a lot of persistency such as cache storage, cookies and history.  But it also enforces a fixed-size canvas to render the content, resets the preferred languages to English only (which defines the language used to display a multilingual website), and makes many more changes.

You may want to override some settings because you don't like them.  In the project's wiki, you can find all the Arkenfox overrides, with the explanation of each new value, and which value you may want to use in your own override.

=> https://github.com/arkenfox/user.js/wiki/3.2-Overrides-%5BCommon%5D Arkenfox user.js Wiki about common overrides

For instance, if you want to re-enable the cache storage, add the following code to the file `user-overrides.js`.

user_pref("browser.cache.disk.enable", true);

user_pref("privacy.clearOnShutdown.cache", false);


Now, run the updater script: it will verify that the Arkenfox user.js file is the latest version, and will append your override to it.

# Tips

By default, cookies aren't saved, so if you don't want to log in every time you restart Firefox, you have to specifically allow cookies for each website.

The easiest method I found is to press `Ctrl+I`, visit the Permissions tab, and uncheck the "Default permissions" relative to cookies.  You could also do it by visiting Firefox settings and searching for an exception button, in which you can enter a list of domains where cookies shouldn't be cleared on shutdown.

By default, entering text in the address bar won't trigger a search anymore, so instead of using Ctrl+L to type in the bar, you can use Ctrl+K to type a search.

# Extensions

The Arkenfox wiki recommends using only the uBlock Origin and Skip Redirect extensions, with some details.  I agree they both work well and do the job.

It's possible to harden uBlock Origin by disabling 3rd-party scripts / frames by default, while giving you the opportunity to allow some sources per domain or globally; this is called the blocking mode.  I found it to be way more usable than NoScript.

=> https://github.com/gorhill/uBlock/wiki/Blocking-mode:-medium-mode uBlock Origin blocking mode documentation

# Conclusion

I found that Arkenfox was a bit hard to use at first because I didn't fully understand the scope of its changes, but it didn't break any website, even though it disables a lot of Firefox features that aren't really needed.

This reduces Firefox attack surface, and it's always a welcome improvement.

# Going further

Arkenfox user.js isn't the only set of Firefox settings around; there is also Betterfox (thanks prx!), which provides different profiles, even one for performance.  I haven't tried any of these profiles yet.  Arkenfox and Betterfox are parallel projects, not forks, and it's actually complicated to compare which one would be better.

=> https://github.com/yokoffing/Betterfox Betterfox Github project page
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/harden-firefox-with-arkenfox.gmi</guid>
  <link>gemini://perso.pw/blog//articles/harden-firefox-with-arkenfox.gmi</link>
  <pubDate>Wed, 27 Sep 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Flatpak integration in Qubes OS templates</title>
  <description>
    <![CDATA[
<pre># Introduction

I recently wanted to improve Qubes OS accessibility to new users a bit; yesterday, I found out why GNOME Software wasn't working in the offline templates.

Today, I'll explain how to install programs from Flatpak in a template to provide them to other qubes.  I really like Flatpak, as it provides extra security features and a lot of software choice, and all the data created by Flatpak-packaged software is compartmentalized into its own tree in `~/.var/app/program.some.fqdn/`.

=> https://qubes-os.org Qubes OS official project website
=> https://www.flatpak.org/ Flatpak official project website
=> https://flathub.org/ Flathub: main flatpak repository

# Setup

All the commands in this guide are meant to be run in a Fedora or Debian template as root.

In order to add the Flathub repository, you need to define the variable `https_proxy` so flatpak can figure out how to reach the repository through the proxy:

export https_proxy=http://127.0.0.1:8082/

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo


Make the environment variable persistent for the user `user`; this will allow GNOME Software to work with Flatpak, and all flatpak command line invocations to automatically pick up the proxy.

mkdir -p /home/user/.config/environment.d/
cat <<EOF >/home/user/.config/environment.d/proxy.conf
https_proxy=http://127.0.0.1:8082/
EOF


In order to circumvent a GNOME Software bug, if you want to use it to install packages (Flatpak or not), you need to add the following line to `/rw/config/rc.local`:

ip route add default via 127.0.0.2


=> https://gitlab.gnome.org/GNOME/gnome-software/-/issues/2336 GNOME Software gitlab issue #2336 saying a default route is required to make it work

Restart the template, GNOME software is now able to install flatpak programs!

# Qubes OS integration

If you install or remove flatpak programs, either from the command line or with the Software application, you certainly want them to be easily available to add in the qubes menus.

Here is a script to automatically keep the applications list in sync every time a change is made to the flatpak applications.

## Inotify-tool

For the setup to work, you will have to install the package `inotify-tools` in the template, this will be used to monitor changes in a flatpak directory.
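
For example, in a Fedora template (use `apt` in a Debian one):

dnf install inotify-tools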

## Syncing app menu script

Create `/usr/local/sbin/sync-app.sh`:

#!/bin/sh
# when a desktop file is created/removed
# - links flatpak .desktop in /usr/share/applications
# - remove outdated entries of programs that were removed
# - sync the menu with dom0
inotifywait -m -r \
  -e create,delete,close_write \
  /var/lib/flatpak/exports/share/applications/ |
while IFS=':' read event
do
    find /var/lib/flatpak/exports/share/applications/ -type l -name "*.desktop" | while read line
    do
        ln -s "$line" /usr/share/applications/
    done
    find /usr/share/applications/ -xtype l -delete
    /etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh
done


You have to mark this file as executable with `chmod +x /usr/local/sbin/sync-app.sh`.

## Start the file monitoring script at boot

Finally, you need to activate the script created above when the template boots; this can be done by adding this snippet to `/rw/config/rc.local`:

# start monitoring flatpak changes to reload icons
/usr/local/sbin/sync-app.sh &


## Updating

This solution will look for Flatpak program updates each time the template starts, which should occur regularly to update the template packages, and update them unconditionally.

Add this snippet to `/rw/config/rc.local`:

# check for updates
export https_proxy=http://127.0.0.1:8082/
flatpak update -y --noninteractive


This could be enhanced by asking the user if they want to update or skip for later, but I still have to figure out how to run `notify-send` from the root user; I opened a Qubes OS issue about this.

# Conclusion

With this setup, you can finally install programs from Flatpak in a template and provide them to other qubes, with bells and whistles so you don't have to worry about creating desktop files or keeping them up to date.

Please note that while well-made Flatpak programs like Firefox will add extra security, the Flathub repository allows anyone to publish programs.  You can browse Flathub to see who is publishing which software; it may be the official project team (like Mozilla for Firefox) or some random people.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/flatpak-on-qubesos.gmi</guid>
  <link>gemini://perso.pw/blog//articles/flatpak-on-qubesos.gmi</link>
  <pubDate>Mon, 18 Sep 2023 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>