
<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>gemini://perso.pw/blog/</link>
    <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>OpenKuBSD design document</title>
  <description>
    <![CDATA[
<pre># Introduction

I got an idea today (while taking a shower...) about _partially_ reusing the Qubes OS design of using VMs to separate contexts and programs, but doing so on OpenBSD.

To make explanations CLEAR, I won't reimplement Qubes OS entirely on OpenBSD.  Qubes OS is an interesting operating system with a very strong focus on security (from a very practical point of view), but it's in my opinion overkill for most users, and hence not always practical or usable.

In the meantime, I think the core design could be reused and made easy for users, as we are used to doing in OpenBSD.

# Why this project?

I like the way Qubes OS makes it possible to separate things and to easily run a program using a VPN without affecting the rest of the system.  Using it requires a different mindset; one has to think about data silos: what do I need for which context?

However, I don't really like that Qubes OS has so many open issues, that its governance isn't clear, and that Xen seems to create a lot of trouble with regard to hardware compatibility.

I'm sure I can provide a similar but lighter experience, at the cost of "less" security.  My threat model is more about preventing data leaks in case of a compromised system or software than about protecting my computer from a government secret agency.

After spending two months using "immutable" distributions (openSUSE MicroOS, Vanilla OS, Silverblue), which all want you to use rootless containers (with podman) through distrobox, I hate that idea: it integrates poorly with the host, it's a nightmare to maintain, it can create issues due to different versions of programs altering your user data directory, and it just doesn't bring much to the table except allowing users to install software without being root (and without having to reboot on those systems).

# Key features

Here is a list of features that I think would be good to implement.



Here is a quick diagram explaining the relationships between the various components.  It doesn't show the whole picture because that wouldn't be easy to represent (and I didn't have time to try doing so yet):

=> static/OpenKuBSD-design.svg OpenKuBSD design diagram

# What I don't plan to do



# Roadmap

The first step is to make a proof of concept:



# Trivia

I announced it as OpenKuBISD, but I prefer to name it OpenKuBSD :)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openkubsd-design.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openkubsd-design.gmi</link>
  <pubDate>Tue, 06 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>The Old Computer Challenge V3</title>
  <description>
    <![CDATA[
<pre># Introduction

Hi! It's that time of the year when I announce a new Old Computer Challenge :)

If you don't know about it, it's a weird challenge I've done twice in the past 3 years: it consists of limiting my computer performance using old hardware, or limiting Internet access to 60 minutes a day.

=> https://dataswamp.org/~solene/tag-oldcomputerchallenge.html Blog posts tagged "oldcomputerchallenge"

# 2023's challenge

I want this challenge to be accessible.  The first one wasn't easy for many because it required using an old machine, and many readers didn't have a spare old computer (weird right? :P).  The second one, with its Internet time limitation, was hard to set up.

This one is a bit back to the roots: let's use a SLOW computer for 7 days.  This will be achieved by various means with any hardware:



I got the idea when I remembered a few people reporting using such tricks to do the first challenge, like in this report:

=> https://portal.mozz.us/gemini/carcosa.net/journal/20210713-old-computer-challenge.gmi Carcosa's report of the first challenge (link via gemini http bridge)

You are encouraged to join the IRC channel #oldcomputerchallenge on libera.chat server to share about your experience.

Feel free to write reports, it's always fun to read about others going through the challenge.

# When

The challenge will start on the 10th of July 2023, and end on the 16th of July 2023 at the end of the day.

# Frequently asked questions


</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/old-computer-challenge-v3.gmi</guid>
  <link>gemini://perso.pw/blog//articles/old-computer-challenge-v3.gmi</link>
  <pubDate>Sun, 04 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Qubes OS dom0 files workflow using fossil</title>
  <description>
    <![CDATA[
<pre># Introduction

Since I'm using Qubes OS, I have always faced an issue: I need proper tracking of the configuration files for my system; this can be done using Salt as I explained in a previous blog post.  But what I really want is a version control system allowing me to synchronize changes to a remote repository (it's absurd to back up dom0 for every change I make to a salt file).  So far, git is too complicated to achieve that.

I gave fossil a try, a tool I like (I wrote about this one too ;) ), and it was surprisingly easy to set up remote access leveraging Qubes' qvm-run.

In this blog post, you will learn how to set up a remote fossil repository, and how to use it from your dom0.

=> https://dataswamp.org/~solene/2023-01-29-fossil-scm.html Previous article about Fossil cheatsheet

# Repository creation

On the remote system where you want to store the fossil repository (it's a single file), run `fossil init my-repo.fossil`.

The only requirement for this remote system is to be reachable over SSH by an AppVM in your Qubes OS.

# dom0 clone

Now, we will clone this remote repository in our dom0; I'm personally fine with storing such files in the `/root/` directory.

In the following example, the file `my-repo.fossil` was created on the machine `10.42.42.200` with the path `/home/solene/devel/my-repo.fossil`.  I'm using the AppVM `qubes-devel` to connect to the remote host using SSH.

[root@dom0 ~#] fossil clone --ssh-command "qvm-run --pass-io --no-gui -u user qubes-devel 'ssh'" ssh://10.42.42.200://home/solene/devel/my-repo.fossil /root/my-repo.fossil


This command clones a remote fossil repository by piping the SSH command through the qubes-devel AppVM, allowing fossil to reach the remote host.

A cool fact about fossil's clone command: it keeps the proxy settings, so no further changes are required.

With a Split SSH setup, I'm asked for confirmation every time fossil synchronizes; by default fossil has the "autosync" mode enabled, so for every commit done, the database is synced with the remote repository.
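
To illustrate, a minimal day-to-day workflow from dom0 could look like this (a sketch; the file names are hypothetical dom0 salt files, and with autosync enabled the commit is pushed through the qvm-run SSH proxy automatically):

[root@dom0 ~#] cd /srv
[root@dom0 ~#] fossil add salt/custom.top salt/dom0.sls
[root@dom0 ~#] fossil commit -m "track dom0 salt states"
[root@dom0 ~#] fossil sync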

# Open the repository (reminder about fossil usage)

As I said, fossil works with repository files.  Now that you have cloned the repository in `/root/my-repo.fossil`, you can open it, for instance in `/srv/`, to manage all your custom changes to the dom0 salt.

This can be achieved with the following command:

[root@dom0 ~#] cd /srv/

[root@dom0 ~#] fossil open --force /root/my-repo.fossil


The `--force` flag is needed because we need to open the repository in a non-empty directory.

# Conclusion

Finally, I figured out a proper way to manage my dom0 files, and my whole host.  I'm very happy with this easy and reliable setup, especially since I'm already a fossil user.  I don't really enjoy git, so demonstrating alternatives working fine always feels great.

If you want to use Git, I have a hunch that something could be done using `git bundle`, but this requires some investigation.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/qubes-os-version-control-dom0.gmi</guid>
  <link>gemini://perso.pw/blog//articles/qubes-os-version-control-dom0.gmi</link>
  <pubDate>Wed, 07 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Install OpenBSD in Qubes OS</title>
  <description>
    <![CDATA[
<pre># Introduction

Here is a short guide explaining how to install OpenBSD in Qubes OS, as an HVM VM (fully virtualized, not integrated).

# Get OpenBSD

Download an ISO file to install OpenBSD; do it from an AppVM.  You can use the command `cksum -a sha256 install73.iso` in the AppVM to generate a checksum to compare with the file `SHA256` found on the OpenBSD mirror.
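
For example, assuming you downloaded from the main CDN mirror (adapt the URL to the mirror you used), the verification could look like this; the two printed SHA256 values should match:

curl -O https://cdn.openbsd.org/pub/OpenBSD/7.3/amd64/SHA256
cksum -a sha256 install73.iso
grep install73.iso SHA256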

# Create a Qube

In the XFCE menu > Qubes Tools > Create Qubes VM GUI, choose a name, use the type "StandaloneVM (fully persistent)", use "none" as a template and check "Launch settings after creation".

# Configuration

In the "Basic" tab, configure the "system storage max size", that's the storage size OpenBSD will see at installation time.  OpenBSD storage management is pretty limited, if you add more space later it will be complicated to grow partitions, so pick something large enough for your task.

Still in the "Basic" tab, you have all the network information, keep them later (you can open the Qube settings after the VM booted) to configure your OpenBSD.

In "Firewall rules" tab, you can set ... firewall rules that happens at Qubes OS level (in the sys-firewall VM).

In the "Devices" tab, you can expose some internal devices to the VM (this is useful for networking VMs).

In the "Advanced" tab, choose the memory to use and the number of CPU.  In the "Virtualization" square, choose the mode "HVM" (it should already be selected).  Finally, click on "Boot qube from CD-ROM" and pick the downloaded file by choosing the AppVM where it is stored and its path.  The VM will directly boot when you validate.

# Installation

The installation process is straightforward, here is the list (in order of appearance) of questions that require a specific answer:



Whether you reboot or halt the VM, it will be halted, so start it again.
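
For reference, if you need to redo the network configuration by hand after the installation, here is a sketch to run as root, with hypothetical values (use the ones from the Qube settings, and check `ifconfig` for the interface name, vio0 here):

echo "inet 10.137.0.42 255.0.0.0" > /etc/hostname.vio0
echo "10.137.0.1" > /etc/mygate
echo "nameserver 10.139.1.1" > /etc/resolv.conf
sh /etc/netstart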

# Enjoy

You should get into your working OpenBSD VM with functional network.

Be careful: it doesn't have any specific integration with Qubes OS, like the clipboard, USB passthrough, etc.  However, it's an HVM system, so you could give it a USB controller or a dedicated GPU.

# Conclusion

It's perfectly possible to run OpenBSD in Qubes OS with very decent performance; the setup is straightforward when you know where to look for the network information (and that the netmask is /8 and not /32 like on Linux).
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-in-qubes-os.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-in-qubes-os.gmi</link>
  <pubDate>Tue, 06 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Declaratively manage your Qubes OS</title>
  <description>
    <![CDATA[
<pre># Introduction

As a recent Qubes OS user, but also a NixOS user, I want to be able to reproduce my system configuration instead of fiddling with files everywhere by hand and being clueless about what I have changed since installation.

Fortunately, Qubes OS is managed internally with Salt Stack (it's similar to Ansible if you didn't know about Salt), so we can leverage salt to modify dom0 or Qubes templates/VMs.

=> https://www.qubes-os.org/ Qubes OS official project website
=> https://docs.saltproject.io/en/getstarted/ Salt Stack project website
=> https://www.qubes-os.org/doc/salt/ Qubes OS documentation: Salt

# Simple setup

In this example, I'll show how to write simple Salt state files, allowing you to create/modify system files, install packages, add repositories, etc.

Everything will happen in dom0; you may want to install your favorite text editor in it.  Note that I'm still trying to figure out a nice way to have a git repository to handle this configuration and be able to synchronize it somewhere, but I still can't find a solution I like.

The dom0 salt configuration can be found in `/srv/salt/`, this is where we will write:



Quick extra explanation: there is a directory `/srv/pillar/`, where you store things named "pillars"; see them as metadata you can associate with remote hosts (AppVMs / Templates in the Qubes OS case).  We won't use pillars in this guide, but if you want to write more advanced configurations, you will surely need them.

# dom0 management

Let's use dom0 to manage itself 🙃.

Create a text file `/srv/salt/custom.top` with the following content (YAML format):

base:
  'dom0':
    - dom0


This says that hosts matching `dom0` (2nd line) will use the state named `dom0`.

We need to enable that .top file so it will be included when salt applies the configuration.

qubesctl top.enable custom


Now, create the file `/srv/salt/dom0.sls` with the following content (YAML format):

my packages:
  pkg.installed:
    - pkgs:
      - kakoune
      - git


This uses the salt module named `pkg`, and passes it options in order to install the packages "git" and "kakoune".

=> https://docs.saltproject.io/en/latest/ref/states/all/salt.states.pkg.html Salt Stack documentation about the pkg module

On my computer, I added the following piece of configuration to `/srv/salt/dom0.sls` to automatically assign the USB mouse to dom0 instead of being asked every time; this implements the instructions explained in the documentation link below:

=> https://www.qubes-os.org/doc/usb-qubes/#usb-mice Qubes OS documentation: USB mice

/etc/qubes-rpc/policy/qubes.InputMouse:
  file.line:
    - mode: ensure
    - content: "sys-usb dom0 allow"
    - before: "^sys-usb dom0 ask"


=> https://docs.saltproject.io/en/latest/ref/states/all/salt.states.file.html#salt.states.file.line Salt Stack documentation: file line

This snippet makes sure that the line `sys-usb dom0 allow` is present in the file `/etc/qubes-rpc/policy/qubes.InputMouse`, above the line matching `^sys-usb dom0 ask`.  This is a more reproducible way of adding lines to a configuration file than editing it by hand.

Now, we need to apply the changes by running salt on dom0:

qubesctl --target dom0 state.apply


You will obtain a list of operations done by salt, with a diff for each task, so it's easy to know if something changed.

Note: state.apply used to be named state.highstate (for people who used salt a while ago, don't be confused, it's the same thing).
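
If you want to preview the changes before applying them, salt's state.apply supports a dry-run mode; assuming qubesctl forwards the argument like salt-call does, this should only display the diff without modifying anything:

qubesctl --target dom0 state.apply test=True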

# Template management

Using the same method as above, we will add a match for the fedora templates in the custom top file:

In `/srv/salt/custom.top`, add the following under the existing `base:` key:

  'fedora-*':
    - globbing: true
    - fedora


This example is slightly different from the dom0 one, where we matched the host named "dom0".  As I want my salt files to require the least maintenance possible, I won't write the template names verbatim; I'd rather use globbing (this is the name for simple wildcards like `foo*`) to match everything starting with `fedora-`.  I currently have fedora-37 and fedora-38 on my computer, so they both match.

Create `/srv/salt/fedora.sls`:

custom packages:
  pkg.installed:
    - pkgs:
      - borgbackup
      - dino
      - evolution
      - fossil
      - git
      - pavucontrol
      - rsync
      - sbcl
      - tig


In order to apply, we can type `qubesctl --all state.apply`; this will work, but it's slow, as salt will look for changes in each VM / template (and we only added changes for the fedora templates here, so nothing would change except for them).

For a faster feedback loop, we can specify one or multiple targets; for me it would be `qubesctl --targets fedora-37,fedora-38 state.apply`, but it's really a matter of me being impatient.

# Auto configure Split SSH

An interesting setup with Qubes OS is to have your SSH key in a separate VM, and use Qubes OS internal RPC to use the SSH agent from another VM, with a manual confirmation on each use.  However, this setup requires modifying files in multiple places; let's see how to manage everything with salt.

=> https://github.com/Qubes-Community/Contents/blob/master/docs/configuration/split-ssh.md Qubes OS community documentation: Split SSH

Reusing the file `/srv/salt/custom.top` created earlier, we add `split_ssh_client.sls` for the AppVMs that will use the split SSH setup.  Note that you should not deploy this state to your Vault: it would make it reference itself for SSH and would prevent the agent from starting (been there :P):

base:
  'dom0':
    - dom0
  'fedora-*':
    - globbing: true
    - fedora
  'MyDevAppVm or MyWebBrowserAppVM':
    - split_ssh_client


Create `/srv/salt/split_ssh_client.sls`: it will add two files holding the environment setup, loaded respectively from `/rw/config/rc.local` and `~/.bashrc`.  It's actually easier to keep the bash snippets in separate files and `source` them, rather than using salt to insert the snippets directly where needed.

/rw/config/bashrc_ssh_agent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        SSH_VAULT_VM="vault"
        if [ "$SSH_VAULT_VM" != "" ]; then
          export SSH_AUTH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
        fi

/rw/config/rclocal_ssh_agent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        SSH_VAULT_VM="vault"
        if [ "$SSH_VAULT_VM" != "" ]; then
          export SSH_SOCK="/home/user/.SSH_AGENT_$SSH_VAULT_VM"
          rm -f "$SSH_SOCK"
          sudo -u user /bin/sh -c "umask 177 && exec socat 'UNIX-LISTEN:$SSH_SOCK,fork' 'EXEC:qrexec-client-vm $SSH_VAULT_VM qubes.SshAgent'" &
        fi

/rw/config/rc.local:
  file.append:
    - text: source /rw/config/rclocal_ssh_agent

/rw/home/user/.bashrc:
  file.append:
    - text: source /rw/config/bashrc_ssh_agent


Edit `/srv/salt/dom0.sls` to add the SshAgent RPC policy:

/etc/qubes-rpc/policy/qubes.SshAgent:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        MyClientSSH vault ask,default_target=vault


Now, run `qubesctl --all state.apply` to configure all your VMs: the template, dom0 and the matching AppVMs.  If everything went well, you shouldn't see any errors when running the command.
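
To verify the split SSH setup, open a new terminal in one of the client AppVMs and list the keys exposed by the agent; this is a sketch assuming the vault's agent holds at least one key, and it should trigger the qubes.SshAgent RPC prompt:

ssh-add -L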

# Use a dedicated AppVM for web browsing

Another real world example: using Salt to configure your AppVMs to open links in a dedicated AppVM (named WWW for me):

=> https://github.com/Qubes-Community/Contents/blob/master/docs/common-tasks/opening-urls-in-vms.md Qubes OS Community Documentation: Opening URLs in VMs

In your custom top file `/srv/salt/custom.top`, you need something similar to this (please adapt if you already have top files or state files):

base:
  'dom0':
    - dom0
  'fedora-*':
    - globbing: true
    - fedora
  'vault or qubes-communication or qubes-devel':
    - default_www


Add the following text to `/srv/salt/dom0.sls`; this is used to configure the RPC:

/etc/qubes-rpc/policy/qubes.OpenURL:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        @anyvm @anyvm ask,default_target=WWW


Add this to `/srv/salt/fedora.sls` to create the desktop file in the template:

/usr/share/applications/browser_vm.desktop:
  file.managed:
    - user: root
    - group: wheel
    - mode: 444
    - contents: |
        [Desktop Entry]
        Encoding=UTF-8
        Name=BrowserVM
        Exec=qvm-open-in-vm browser %u
        Terminal=false
        X-MultipleArgs=false
        Type=Application
        Categories=Network;WebBrowser;
        MimeType=x-scheme-handler/unknown;x-scheme-handler/about;text/html;text/xml;application/xhtml+xml;application/xml;application/rss+xml;x-scheme-handler/http;x-scheme-handler/https;


Create `/srv/salt/default_www.sls` with the following content; it will run xdg-settings to set the default browser:

xdg-settings set default-web-browser browser_vm.desktop:
  cmd.run:
    - runas: user


Now, run `qubesctl --target fedora-38,dom0 state.apply`.

From there, you MUST reboot the VMs that will be configured to use the WWW AppVM as the default browser: they need the new file `browser_vm.desktop` to be available for `xdg-settings` to succeed.  Then, run `qubesctl --target vault,qubes-communication,qubes-devel state.apply`.

Congratulations, now you will get an RPC prompt whenever an AppVM wants to open a URL, asking you if you want to open it in your browsing AppVM.
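
A quick way to test it from one of the configured AppVMs (the URL is just a placeholder); this should trigger the qubes.OpenURL prompt and open the page in the WWW AppVM:

xdg-open https://example.com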

# Conclusion

This method is a powerful way to handle your hosts, and it's ready to use on Qubes OS.  Unfortunately, I still need to figure out a nicer way to export the custom files written in /srv/salt/ and track the changes properly in a version control system.

Erratum: I found a solution to manage the files :-) stay tuned for the next article.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/reproducible-config-mgmt-qubes-os.gmi</guid>
  <link>gemini://perso.pw/blog//articles/reproducible-config-mgmt-qubes-os.gmi</link>
  <pubDate>Mon, 05 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Backport OpenBSD 7.3 pkg_add enhancement</title>
  <description>
    <![CDATA[
<pre># Introduction

Recently, the OpenBSD package manager received a huge speed boost when updating packages, but it currently only works in -current due to an issue.

Fortunately, espie@ fixed it for the next release; I tried the fix and it's safe to apply yourself.  It will be available in the 7.4 release, but for 7.3 users, here is how to apply the change.

=> https://github.com/openbsd/src/commit/fa222ab7fc13c118c838e0a7aaafd11e2e4fe53b.patch Link to the commit (GitHub)

# Fix

There is a single file modified: just download the patch and apply it to `/usr/libdata/perl5/OpenBSD/PackageRepository/Installed.pm` with the command `patch`.

cd /usr/libdata/perl5/OpenBSD/PackageRepository/
ftp -o /tmp/pkg_add.patch https://github.com/openbsd/src/commit/fa222ab7fc13c118c838e0a7aaafd11e2e4fe53b.patch
patch -C < /tmp/pkg_add.patch && patch < /tmp/pkg_add.patch && rm /tmp/pkg_add.patch


After that, running `pkg_add -u` should be at least 5 or 10 times faster, and will use a lot less bandwidth.
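
If you want to measure the difference yourself, a rough before/after comparison can be done with a simulated update, since the dry run exercises the same lookups (`-n` makes pkg_add simulate without installing anything):

time pkg_add -u -n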

# Some explanations

On -current, there is a single directory to look for packages in, but on release, for the architectures amd64, aarch64, sparc64 and i386, there are two directories: the packages generated for the release, and the packages-stable directory receiving updates during the release lifetime.

The code wasn't working in the two-path case, preventing `pkg_add` from building a local package signature list to compare with the remote signature database in the "quirks" package in order to look for updates.  The old behavior was still used, making pkg_add fetch the first dozen kilobytes of each installed package to compare their signatures package by package, while now everything is stored in quirks.

# Disclaimer

If you have any issue, just revert the patch by adding `-R` to the patch command, and report the problem TO ME only.

This change is not officially supported for 7.3, so you are on your own if there is an issue, but it's not harmful to do.  If you were to have an issue, reporting it to me would help solve it for 7.4 for everyone, but really, it just works, without being harmful in the worst case scenario.

# Conclusion

I hope you will enjoy this change so you don't have to wait for 7.4.  It makes OpenBSD's pkg_add feel a bit more modern, compared to some package managers that are now almost instant at installing/updating packages.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-backport-pkg_add.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-backport-pkg_add.gmi</link>
  <pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Send XMPP messages from the command line</title>
  <description>
    <![CDATA[
<pre># Introduction

As a reed-alert user monitoring my servers, while using emails works efficiently, I wanted more instant notifications for critical issues.  I'm also a happy XMPP user, so I looked for a solution to send XMPP messages from a command line.

=> https://dataswamp.org/~solene/tag-reed-alert.html More about reed-alert on the blog
=> https://tildegit.org/solene/reed-alert/ Reed-alert project git repository

I will explain how to use the program go-sendxmpp to send messages from a command line; it's a newer drop-in replacement for the old perl sendxmpp that doesn't seem to work anymore.

=> https://salsa.debian.org/mdosch/go-sendxmpp go-sendxmpp project git repository

# Installation

Following the go-sendxmpp documentation, you need Go installed; then run `go install salsa.debian.org/mdosch/go-sendxmpp@latest` to compile the binary into `~/go/bin/go-sendxmpp`.  Because it's a static binary, you can move it to a directory in `$PATH`.

If I'm satisfied with it, I'll import go-sendxmpp into the OpenBSD ports tree to make it available as a package for everyone.

# Configuration

Open a shell with the user that is going to run go-sendxmpp, and prepare the configuration file in its default location:

mkdir -p ~/.config/go-sendxmpp
touch ~/.config/go-sendxmpp/config
chmod 400 ~/.config/go-sendxmpp/config


Edit the file `~/.config/go-sendxmpp/config` to add the two lines:

username: myuser@myserver
password: hunter2_oryourpassword


Now, your user should be ready to use `go-sendxmpp`.  I recommend always enabling the flag `-t` to use TLS to connect to the server, but you should really choose an XMPP server providing TLS-only anyway.

The program usage is simple: `echo "this is a message for you" | go-sendxmpp dest@remote`, and you are done.  It's easy to integrate into shell tasks.
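
For instance, a tiny wrapper for shell scripts could look like this (a sketch; the function name and recipient address are hypothetical):

# send a one-line notification prefixed with the local hostname
notify() {
    printf '%s: %s\n' "$(hostname)" "$1" | go-sendxmpp -t admin@example.org
}
notify "disk almost full on /var"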

Note that go-sendxmpp allows you to get the password from a command instead of storing it in plain text; this may be more convenient and secure in some scenarios.

# Reed-alert configuration

Back to reed-alert, using go-sendxmpp is as easy as declaring a new alert type, especially using the email template:

(alert xmpp "echo -n '[%state%] Problem with %function% %date% %params%' | go-sendxmpp user@remote")

;; example of use
(=> xmpp ping :host "dataswamp.org" :desc "Ping to dataswamp.org")


# Conclusion

XMPP is a very reliable communication protocol; I'm happy that I found go-sendxmpp, a modern, working and simple way to programmatically send myself alerts using XMPP.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/xmpp-from-commandline.gmi</guid>
  <link>gemini://perso.pw/blog//articles/xmpp-from-commandline.gmi</link>
  <pubDate>Mon, 29 May 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How to install Nix in a Qubes OS AppVM</title>
  <description>
    <![CDATA[
<pre># Intro

I'm still playing with Qubes OS; today I had to figure out how to install Nix, because I rely on it for some tasks.  It turned out to be a rather difficult task for a Qubes beginner like me when not using a fully persistent VM.

Here is how to install Nix in an AppVM (where only /home/ is persistent), along with some links to the documentation about `bind-dirs`, an important component of Qubes OS that I didn't know about.

=> https://www.qubes-os.org/doc/bind-dirs/ Qubes OS documentation: How to make any file persistent (bind-dirs)
=> https://nixos.org Nix project website

# bind-dirs

Behind this unfriendly name is a smart framework to customize templates or AppVMs.  It allows running commands upon VM start, but also making directories explicitly persistent.

The configuration can be done at the local or template level; in our case, we want to create `/nix` and make it persistent in a single VM, so that when we install nix packages, they will stay after a reboot.

The implementation is rather simple: the persistent directory is under the `/rw` partition in ext4, which allows mounting subdirectories.  So, if the script finds `/rw/bind-dirs/nix`, it will mount this directory on `/nix` on the root filesystem, making it persistent, without having to copy data at start and sync it back on stop.
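
Conceptually, what bind-dirs does at VM start boils down to something like this (a simplified sketch, not the actual implementation):

# for each configured entry such as '/nix':
mkdir -p /nix
mount --bind /rw/bind-dirs/nix /nix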

# Setup

A limitation of this setup is that we need to install nix in single user mode, without the daemon.  I suppose it should be possible to install Nix with the daemon, but it would have to be done at the template level, as it requires adding users, groups and systemd units (a service and a socket).

In your AppVM, run the following commands as root:

mkdir -p /rw/config/qubes-bind-dirs.d/
echo "binds+=( '/nix' )" > /rw/config/qubes-bind-dirs.d/50_user.conf
install -d -o user -g user /rw/bind-dirs/nix


This creates an empty directory `nix` owned by the regular Qubes user named `user`, and tells bind-dirs that this directory is persistent.

/!\ It's not clear if it's a bug or a documentation issue, but the need to create `/rw/bind-dirs/nix` wasn't obvious.  Someone already filed a bug about this, and funny enough, they reported it using a Nix installation as an example.
=> https://github.com/QubesOS/qubes-issues/issues/5862 GitHub issue: clarify bind-dirs documentation

Now, reboot your VM: you should have a `/nix` directory owned by your user.  This means it's persistent, and you can confirm that by looking at the output of `mount | grep /nix`, which should show a line.

Finally, install nix in single user mode, using the official method:

sh <(curl -L https://nixos.org/nix/install) --no-daemon


Now, we need to fix the bash code that loads Nix into your environment.  The installer modified `~/.bash_profile`, but that file isn't used when you start a terminal from dom0; it's only used with a full login shell (`bash -l`), which doesn't happen on Qubes OS.

Copy the last line of `~/.bash_profile` into `~/.bashrc`; it should look like this:

if [ -e /home/user/.nix-profile/etc/profile.d/nix.sh ]; then . /home/user/.nix-profile/etc/profile.d/nix.sh; fi # added by Nix installer


Now, open a new shell: you have a working Nix in your environment \o/

You can try it using `nix-shell -p hello` and running `hello`.  If you reboot, the same command should work immediately, without needing to download packages again.

# Configuration

In your Qube settings, you should increase the disk space for the "Private storage" which is 2 GB by default.

# Conclusion

Installing Nix in a Qubes OS AppVM is really easy, but you need to know about some advanced features like bind-dirs.  This is a powerful feature that will let me do a lot of fun stuff with Qubes now, and using nix is one of them!

# Going further

If you plan to use Nix like this in multiple AppVMs, you may want to set up a local substituter cache in a dedicated VM; this will make your bandwidth usage a lot more efficient.

=> https://dataswamp.org/~solene/2022-06-02-nixos-local-cache.html How to make a local NixOS cache server
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/qubes-os-install-nix.gmi</guid>
  <link>gemini://perso.pw/blog//articles/qubes-os-install-nix.gmi</link>
  <pubDate>Thu, 18 May 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Create a custom application entry in Qubes OS</title>
  <description>
    <![CDATA[
<pre># Introduction

If you use Qubes OS, you already know that software installed in templates is available in your XFCE menu for each VM, and can be customized from the Qubes Settings panel.

=> https://www.qubes-os.org/doc/how-to-install-software Qubes OS documentation about How to install software

However, if you install software locally, either by compiling it or by using a tarball, you won't have an application entry in the Qubes Settings, and running this program from dom0 will require opening an extra terminal in the VM.  But we can actually add the icon/shortcut by creating a file in the right place.

In this example, I'll explain how I made a menu entry for the program DeltaChat, "installed" by downloading an archive containing the binary.

# Desktop files

In the VM (with a non-volatile /home), create the file `/home/user/.local/share/applications/deltachat.desktop`, or, in a TemplateVM (if you need to provide this to multiple VMs), the path `/usr/share/applications/deltachat.desktop`:

[Desktop Entry]
Encoding=UTF-8
Version=1.0
Type=Application
Terminal=false
Exec=/home/user/Downloads/deltachat-desktop-1.36.4/deltachat-desktop
Name=DeltaChat


This creates a desktop entry for the program named DeltaChat, with the path to the executable and a few other pieces of information.  You can add an `Icon=` attribute with a link toward an image file; I didn't have one for DeltaChat.

# Qubes OS integration

With the .desktop file created, open the Qubes settings and refresh the applications list; you should find an entry with the Name you used.  Voilà!
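
The refresh can also be done from a dom0 terminal with the appmenus synchronization tool (the VM name is a placeholder):

qvm-sync-appmenus my-appvm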

# Conclusion

Knowing how to create desktop entries is useful, not only on Qubes OS but for general Linux/BSD use.  Being able to install custom programs with a launcher in Qubes dom0 is better than starting yet another terminal to run a GUI program from there.

# Going further

If you want to read more about the .desktop files specifications, you can read the links below:

=> https://specifications.freedesktop.org/desktop-entry-spec/desktop-entry-spec-latest.html Desktop entry specifications
=> https://wiki.archlinux.org/title/desktop_entries Arch Linux wiki about Desktop entries
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/qubes-os-custom-application.gmi</guid>
  <link>gemini://perso.pw/blog//articles/qubes-os-custom-application.gmi</link>
  <pubDate>Wed, 17 May 2023 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Making Qubes OS backups more efficient</title>
  <description>
    <![CDATA[
<pre># Introduction

These days, I've been playing a lot with Qubes OS.  It has an interesting concept of deploying VMs (using Xen) in a well integrated and transparent manner, in order to strictly separate all the tasks you need.

By default, you get environments such as Personal, Work and an offline Vault, plus special VMs to handle the USB proxy, network and firewall.  What is cool here is that when you run a program from a VM, only its window is displayed in your window manager (xfce), not the whole VM desktop.

The cool factor with this project is their take on real world privacy and security needs, allowing users to run what they need to run (proprietary software, random binaries), while still protecting them.  Its goal is totally different from OpenBSD's and Tails'.  Did I say you can also route a VM network through Tor out of the box? =D

If you want to learn more, you can visit Qubes OS website (or ask if you want me to write about it):

=> https://www.qubes-os.org/ Qubes OS official website
=> https://www.qubes-os.org/news/2022/10/28/how-to-organize-your-qubes/ New user guide: How to organize your qubes (nice reading to understand Qubes OS)

# Backups

If you know me, you should know I'm really serious about backups.  It is incredibly important to have backups.

Qubes OS has a backup tool that can be used out of the box; it just dumps the VMs' storage into an encrypted file.  It's easy, but not efficient or practical enough for me.

If you want to learn more about the format used by Qubes OS (and how to open them outside of Qubes OS), they wrote some documentation:

=> https://www.qubes-os.org/doc/backup-emergency-restore-v4/ Qubes OS: backup emergency restore

Now, let's see how to store the backups in Restic or Borg in order to have proper backups.

/!\ While both programs support deduplication, it doesn't work well in this case, because the stored data is already compressed + encrypted, and thus has a very high entropy (it's hard to find duplicated patterns).

# Backup tool

The Qubes OS backup tool offers compression and encryption out of the box, and when it comes to the storage location, we can actually provide a command and have the backup sent to that command's stdin; guess what, both restic and borg support receiving data on their standard input!

I'll demonstrate how to proceed with both restic and borg using a simple example; I recommend building your own solution on top of it the way you need.

=> static/backup.png Screenshot of Qubes backup tool

# Create a backup VM

As we are running Qubes OS, I prefer to create a dedicated backup VM using the Fedora template; it will contain the passphrase to the repository and an SSH key for remote backups.

You need to install restic/borg in the template to make it available in that VM.

If you don't know how to install software in a template, it's well documented:

=> https://www.qubes-os.org/doc/how-to-install-software/ Qubes OS: how to install software

Generate an SSH key if you want to store your data on a remote server using SSH, and deploy it on the remote server.
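
For example, a sketch reusing the remote host from the scripts below (pick the key type you prefer):

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub solene@10.42.42.150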

# Write a backup script

In order to keep the backup command configuration in the backup tool simple (it's a single input line), without sacrificing features like pruning, we will write a script on the backup VM doing everything we need.

While I'm using a remote repository in the example, nothing prevents you from using a local/external drive for your backups!

The script usage will be simple enough for most tasks:



## Restic

Write a script in `/home/user/restic.sh` in the backup VM; it will allow simple customization of the backup process.

#!/bin/sh

export RESTIC_PASSWORD=mysecretpass

# double // is important to make the path absolute
export RESTIC_REPOSITORY=sftp://solene@10.42.42.150://var/backups/restic_qubes

KEEP_HOURLY=1
KEEP_DAYS=5
KEEP_WEEKS=1
KEEP_MONTHS=1
KEEP_YEARS=0

case "$1" in
    init)
        restic init
        ;;
    list)
        restic snapshots
        ;;
    restore)
        restic restore --target . "$2"
        ;;
    backup)
        cat | restic backup --stdin
        restic forget \
            --keep-hourly $KEEP_HOURLY \
            --keep-daily $KEEP_DAYS \
            --keep-weekly $KEEP_WEEKS \
            --keep-monthly $KEEP_MONTHS \
            --keep-yearly $KEEP_YEARS \
            --prune
        ;;
esac


Obviously, you have to change the password; you can even store it in another file and use the corresponding restic option to load the passphrase from a file (or from a command).  That said, the Qubes OS backup tool forces you to encrypt the backup (which will be stored in restic), so encrypting the restic repository won't add any more security, but it can add privacy by hiding what's in the repo.

/!\ You need to run the script with the parameter "init" the first time, in order to create the repository:

$ chmod +x restic.sh
$ ./restic.sh init


## Borg

Write a script in `/home/user/borg.sh` in the backup VM; it will allow simple customization of the backup process.

#!/bin/sh

export BORG_PASSPHRASE=mysecretpass
export BORG_REPO=ssh://solene@10.42.42.150/var/solene/borg_qubes

KEEP_HOURLY=1
KEEP_DAYS=5
KEEP_WEEKS=1
KEEP_MONTHS=1
KEEP_YEARS=0

case "$1" in
    init)
        borg init --encryption=repokey
        ;;
    list)
        borg list
        ;;
    restore)
        borg extract ::"$2"
        ;;
    backup)
        cat | borg create ::{now} -
        borg prune \
            --keep-hourly $KEEP_HOURLY \
            --keep-daily $KEEP_DAYS \
            --keep-weekly $KEEP_WEEKS \
            --keep-monthly $KEEP_MONTHS \
            --keep-yearly $KEEP_YEARS
        ;;
esac


Same explanation as with restic: you can save the password elsewhere or get it from a command, but Qubes backup already encrypts the data, so the repo encryption will mostly only add privacy.

/!\ You need to run the script with the parameter "init" the first time, in order to create the repository:

$ chmod +x borg.sh
$ ./borg.sh init


## Configure Qubes backup

Now, configure the Qubes backup tool:



# Restoring a backup

While it's nice to have backups, it's important to know how to use them.  This setup doesn't add much complexity, and the helper script will ease your life.

On the backup VM, run `./borg.sh list` (or the restic version) to display the available snapshots in the repository, then use `./borg.sh restore $snap`, with the second parameter being a snapshot identifier listed by the earlier command.

You will obtain a file named `stdin`; this is the file to use in the Qubes OS restore tool.

# Warning

If you don't always back up all the VMs, and you keep the retention policy from the example above, you may lose data.

For example, with KEEP_HOURLY=1, if you create a backup of all your VMs, and just after, you specifically back up a single VM, you will lose the previous full backup due to the retention policy.

In some cases, it may be better to not have any retention policy, or a simply time-based one (keep snapshots newer than n days).
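
For a time-based policy, both tools support a --keep-within option; here is a sketch of what the forget/prune commands in the scripts above could become:

restic forget --keep-within 7d --prune

borg prune --keep-within 7d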

# Conclusion

Using this configuration, you get all the features of an industry standard backup solution, such as integrity checks, retention policies and remote encrypted storage.

# Troubleshoot

In case of an issue with the backup command, Qubes backup will display a popup message with the command output; this helps a lot when debugging problems.

An easy way to check that the script works is to run it by hand from the backup VM:

echo test | ./restic.sh backup


This will create a new backup with the data "test" (and prune older backups, so take care!); if it doesn't work, this is a simple way to keep triggering new backups while you solve your issue.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/qubes-os-efficient-backup.gmi</guid>
  <link>gemini://perso.pw/blog//articles/qubes-os-efficient-backup.gmi</link>
  <pubDate>Tue, 16 May 2023 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>