<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Solene'%</title>
<description></description>
<link>gemini://perso.pw/blog/</link>
<atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
<item>
<title>Old Computer Challenge v3: day 1</title>
<description>
<![CDATA[
<pre># Day 1
Hi! Today, I started the 3rd edition of the Old Computer Challenge. And it's not going well: I didn't prepare a computer beforehand, because I wanted to see how easy it would be.
=> https://dataswamp.org/~solene/2023-06-04-old-computer-challenge-v3.html Old Computer Challenge v3
- main computer (Ryzen 5 5600X with 32 GB of memory) running Qubes OS: Qubes OS may be the worst OS for this challenge because it needs so much memory, as everything runs in virtual machines; just handling USB devices requires 400 MB of memory
- main laptop (a T470) running OpenBSD 7.3: for some reason, the memory limitation isn't working, maybe due to the hardware or the 7.3 kernel
- main laptop running openSUSE MicroOS (in dual boot): reducing the memory to 512 MB prevents the system from unlocking the LUKS drive!
The thing is that I have some other laptops around, but I'd have to prepare them with full disk encryption and file synchronization to have my passwords, GPG and SSH keys available.
Within the first hour of this challenge, I realized my current workflows don't allow me to use computers with 512 MB of memory, which is quite sad. A solution would be to use the iBook G4 laptop I've been using since the beginning of the challenges, or my T400 running OpenBSD -current, but they have really old hardware, and this challenge allows fancier systems.
I'd really like to try Alpine Linux for this challenge, let's wrap something around this idea.
# Extra / Tips
If you joined the challenge, here is a previous guide to limit the memory of your system:
=> https://occ.deadnet.se/how/ occ.deadnet.se: Tips & Tricks
For this challenge, you also need to use a single core at the lowest frequency.
On OpenBSD, limiting the CPU frequency is easy:
- stop obsdfreqd if you use it: rcctl stop obsdfreqd && rcctl disable obsdfreqd
- rcctl enable apmd
- rcctl set apmd flags -L
- rcctl restart apmd
Still on OpenBSD, limiting your system to a single core can be done by booting the bsd.sp kernel, which doesn't support multiprocessing (SMP).
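For a one-off reboot, you can select that kernel at the boot loader prompt, with standard boot(8) usage:
boot> boot bsd.sp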
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part1.gmi</guid>
<link>gemini://perso.pw/blog//articles/old-computer-challenge-v3-part1.gmi</link>
<pubDate>Mon, 10 Jul 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Using anacron to run periodic tasks</title>
<description>
<![CDATA[
<pre># Introduction
When you need to regularly run a program on a workstation that isn't powered 24/7, or even not every day, you can't rely on a cron job for that task.
Fortunately, there is a good old tool for this job (first released in June 2000): it's called anacron, and it tracks the last time each configured task ran.
I'll use OpenBSD as an example for the setup, but it's easily adaptable to any other Unix-like system.
=> https://anacron.sourceforge.net Anacron official website
# Installation
The first step is to install the package `anacron`, this will provide the program `/usr/local/sbin/anacron` we will use later. You can also read OpenBSD specific setup instructions in `/usr/local/share/doc/pkg-readmes/anacron`.
Configure root's crontab to run anacron at system boot; we will use the flag `-d` to not run anacron as a daemon, and `-s` to run the tasks in sequence instead of in parallel.
The crontab entry would look like this:
@reboot /usr/local/sbin/anacron -ds
If your computer occasionally stays on for several days, anacron won't run again after the boot, so it also makes sense to run it daily, just in case:
# at each boot
@reboot /usr/local/sbin/anacron -ds
# at 01h00 if the system is up
0 1 * * * /usr/local/sbin/anacron -ds
# Anacron file format
Now, you will configure the tasks you want to run, and at which frequency. This is configured in the file `/etc/anacrontab` using a specific format, different from crontab.
There is a man page named `anacrontab` for official reference.
The format consists of the following ordered fields:
- the frequency in days at which the task should be started
- the delay in minutes after which the task should be started
- a readable name (used as an internal identifier)
- the command to run
I said it before, but it's really important to understand: the purpose of anacron is to run daily/weekly/monthly scripts on a system that isn't always on, where cron wouldn't be reliable.
Usually, anacron is started at system boot and runs each due task from its anacrontab file; this is why the delay field is useful: you may not want your backup to start immediately upon reboot, while the system is still waiting for a working network connection.
Some variables can be used like in crontab, the most important are `PATH` and `MAILTO`.
Anacron keeps the last run date of each task in the directory `/var/spool/anacron/`, using the identifier field as the filename; each file contains the last run date in the format YYYYMMDD.
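For instance, after a task with the identifier daily_maintenance ran on 30 June 2023, you would see something like this (the date is illustrative):
cat /var/spool/anacron/daily_maintenance
20230630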
# Example for OpenBSD periodic maintenance
I really like the example provided in the OpenBSD package. By default, OpenBSD has some periodic tasks to run every day, week and month at night, we can use anacron to run those maintenance scripts on our workstations.
Edit `/etc/anacrontab` with the following content:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
MAILTO=""
1 5 daily_maintenance /bin/sh /etc/daily
7 5 weekly_maintenance /bin/sh /etc/weekly
30 5 monthly_maintenance /bin/sh /etc/monthly
You can manually run anacron to check that it's working instead of waiting for a reboot; just type `doas anacron -ds`.
What does the example mean?
- every day, 5 minutes after anacron's invocation, run `/bin/sh /etc/daily`
- every 7 days, after 5 minutes, run `/bin/sh /etc/weekly`
- every 30 days, after 5 minutes, run `/bin/sh /etc/monthly`
# Useful examples
Here is a list of tasks I find useful to run regularly on a workstation, and that couldn't be handled by a cron job.
- Backups: you may want to have a backup every day, or every few days
- OpenBSD snapshot upgrade: use `sysupgrade -ns` every n days to download the sets, they will be installed at the next boot
- OpenBSD packages update: use `pkg_add -u` every day
- OpenBSD system update: use `syspatch` every day
- Repositories update: keep your cloned git / fossil / cvs / svn repository up to date without doing it aggressively
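As a sketch, hypothetical anacrontab entries for some of these tasks could look like this (the backup script path and the frequencies are made up for the example):
1 10 system_patches syspatch
2 15 home_backup /usr/local/bin/backup-home.sh
5 20 snapshot_sets sysupgrade -ns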
# Conclusion
Anacron is a simple and effective way to keep your periodic tasks done even if you don't use your computer very often.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/anacron.gmi</guid>
<link>gemini://perso.pw/blog//articles/anacron.gmi</link>
<pubDate>Fri, 30 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Ban scanner IPs from OpenSMTPD logs</title>
<description>
<![CDATA[
<pre># Introduction
If you are running an OpenSMTPD email server on OpenBSD, you may want to ban IPs used by bots trying to bruteforce logins. OpenBSD doesn't have fail2ban available in packages, and sshguard isn't extensible enough to support the multiline log format used by OpenSMTPD.
Here is a short script that looks for authentication failures in `/var/log/maillog` and adds the offending IPs to the PF table `bot` after too many failed logins.
# Setup
## PF
Add this rule to your PF configuration:
block in quick on egress from <bot> to any
This blocks all connections from banned IPs, on all ports, not only smtp. I see no reason to let them try other doors.
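Reload the ruleset so the rule (and therefore the table) is in place; this is standard pfctl usage:
pfctl -f /etc/pf.conf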
## Script
Write the following content in an executable file, this could be `/usr/local/bin/ban_smtpd` but this doesn't really matter.
#!/bin/sh
TRIES=10
EXPIRE_DAYS=5
# map each smtp session id ($6) to its client IP, then count
# the sessions that failed authentication with "permfail"
awk -v tries="$TRIES" '
/ smtp connected / {
	ips[$6]=substr($9, 9)
}
/ smtp authentication / && /result=permfail/ {
	seen[ips[$6]]++
}
END {
	for(ip in seen) {
		if(seen[ip] > tries) {
			print ip
		}
	}
}' /var/log/maillog | xargs pfctl -T add -t bot
# if the file exists, remove IPs listed there
if [ -f /etc/mail/ignore.txt ]
then
	cat /etc/mail/ignore.txt | xargs pfctl -T delete -t bot
fi
# remove IPs from the table after $EXPIRE_DAYS days
pfctl -t bot -T expire "$(( 60 * 60 * 24 * $EXPIRE_DAYS ))"
This parses the maillog file, which by default rotates every day; you can adapt the script to your log rotation policy. Users failing with permfail are banned after a number of tries, configurable with `$TRIES`.
I added support for an ignore list to avoid banning yourself: just add IP addresses to `/etc/mail/ignore.txt`.
Finally, banned IPs are unbanned after 5 days; you can change this delay with the variable `EXPIRE_DAYS`.
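At any time, you can check which IPs are currently banned by listing the table content:
pfctl -t bot -T show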
## Cronjob
Now, edit root's crontab: you want to run this script at least every hour, and to get a log if it fails.
~ * * * * -sn /usr/local/bin/ban_smtpd
This cron job will run every hour at a random minute (picked each time crond restarts, so it stays consistent for a while); the `-s` flag ensures only a single instance of the job runs at a time, and `-n` suppresses mail on success. The right periodicity depends on the number of scans your email server receives, and on the log size versus the CPU power.
# Conclusion
It would be better to have an integrated banning system supporting multiple log files / daemons, such as fail2ban, but in the current state that's not possible. This script is simple, fast, extensible, and does the job.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/opensmtpd-block-attempts.gmi</guid>
<link>gemini://perso.pw/blog//articles/opensmtpd-block-attempts.gmi</link>
<pubDate>Sun, 25 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Why would one use Qubes OS?</title>
<description>
<![CDATA[
<pre># Intro
Hello, I've been talking a lot about Qubes OS lately, but I never explained why I got hooked by it. It's time to tell you why I like it.
=> https://www.qubes-os.org/ Qubes OS official project website
# Presentation
Qubes OS is like a meta-system emphasizing security and privacy. You start on an almost empty XFCE interface in a system called dom0 (the Xen hypervisor's control domain) with no network access: this is your desktop, from which you will start virtual machines that integrate into dom0's display, in order to do whatever you need to do with your computer.
Virtual machines in Qubes OS are called qubes; most of the time, you want them to use a template (Debian or Fedora for the official ones). If you install a program in a template, it becomes available in every qube using that template. When a qube is set to only keep a persistent /home directory, it's called an AppVM: any change done outside /home is discarded upon reboot.
By default, the system's network devices are assigned to a special qube named sys-net, which gets the physical network devices attached to it. sys-net's purpose is to be disposable and to provide outside network access to the VM named sys-firewall, which does some filtering.
All your Internet-facing qubes then use sys-firewall as their network provider. A practical use case, if you want to use a VPN but not globally: create a sys-vpn qube (pick the name you want), connect it to the Internet through sys-firewall, and use sys-vpn as the network source for the qubes that should use your VPN. It's really effective.
If you need to use a USB device like a microphone or a webcam in a qube, a systray app handles USB pass-through: the special qube sys-usb manages the physical USB controllers and attaches a given USB device to a qube. This allows you to plug anything USB into the computer, and if you need to analyze it, you can start a disposable VM and check what's in there.
=> https://www.qubes-os.org/attachment/site/qubes-trust-level-architecture.png Qubes OS trust level architecture diagram
## Pros
- Efficient VM management due to the use of templates.
- Efficient resource usage due to Xen (memory ballooning, para-virtualization).
- Built for being secure.
- Disposable VMs.
- Built-in integration with Tor (using Whonix).
- Secure copy/paste between VMs.
- Security (network is handled by a VM which gets the physical devices attached, hypervisor is not connected).
- Practical approach: if you need to run a program you can't trust because you have to (this happens sometimes), you can do that in a disposable VM and not worry.
- Easy update management + rollback ability in VMs.
- Easy USB pass-through to VMs.
- Easy file transfer between VMs.
- Incredible VM windows integration into the host.
- Qubes-rpc to set up things like split-SSH, where the SSH key is stored in an offline VM, with user approval for each use.
- Modular networking: I can chain a VPN inside a VPN and assign it to some VMs but not all.
- Easily extensible, as all templates and VMs are managed by Salt Stack.
## Cons
- No GPU acceleration for rendering (no 3D programs, high CPU usage for video/conferencing).
- Limited hardware support due to Xen.
- Requires a powerful system (high CPU requirement + the more RAM the better).
- Qubes OS may be a default choice simply because there is no competitor (yet).
- The project seems a bit understaffed.
- Hard learning curve.
- Limited choice of templates: Fedora, Debian and Whonix are official. The community provides extra templates based on Gentoo, Kali or CentOS 8.
- It's meant to be used by a single person on a workstation.
# My use case
I tried Qubes OS in early 2022; it felt very complicated and inefficient, so I abandoned it after only a few hours. This year, I wanted to try again for a longer time, reading the documentation and trying to understand everything.
The more I used it, the more I got hooked by the idea and by how clean it is. I basically don't want to use a different workflow anymore; that's why I'm currently implementing OpenKuBSD, to have a similar experience on OpenBSD (even if I don't plan to have as many features as Qubes OS).
My workflow is the following; this doesn't mean it's the best one, but it fits my mindset and the way I want to separate things:
- a Qube for web browsing with privacy plugins and Arkenfox user.js, this is what I use to browse websites in general
- a Qube for communication: emails, XMPP and Matrix
- a Qube for development which contains my projects source code
- a Qube for each work client which contains their projects source code
- an OpenBSD VM to do ports work (it's not as integrated as the other though)
- a Qube without network for the KeePassXC databases (personal and per-client), SSH and GPG keys
- a Qube using a VPN for some specific network tasks; it can be connected 24/7 without having all programs go through the VPN (and without having to write complicated IP rules to use this route only in some cases)
- disposable VMs at hand to try things
I've configured my system to use split-SSH and split-GPG, so some qubes can request the use of my SSH key through the dom0 GUI, and I have to manually accept that one-time authorization on each use. It may seem annoying, but at least it gives me a visual indicator that the key is requested, and from which VM, and it's not automatically approved (I only have to press Enter, though).
Thanks to the separate VMs, I'm not afraid of mixing up client work with my personal projects. If I need to experiment, I can create a new Qube or use a disposable one; this won't affect my working systems. I always feel dirty and unsafe when I need to run a package manager like npm to build a program on a regular workstation...
Sometimes I want to try a new program, but I have no idea if it's safe to install manually or with "curl | sudo bash". In a disposable VM, I just don't care: everything is destroyed when I close its terminal, and it doesn't contain any information.
What I really like is that when I say I'm using Qubes OS, I'm actually using Fedora, OpenBSD and NixOS in VMs, not "just" Qubes OS.
However, Qubes OS is super bad for multimedia in general. I have a dual boot with a regular Linux if I want to watch videos or use 3D programs (like Stellarium or Blender).
=> https://www.qubes-os.org/news/2022/10/28/how-to-organize-your-qubes/ Qubes OS blog: how to organize your qubes: different users share their workflows
# Why would you use Qubes OS?
This question seems to pop up quite often on the project forum. It's hard to answer because Qubes OS has a steep learning curve, it's picky with regard to hardware compatibility and requirements, and the pros/cons balance can differ greatly depending on your usage.
When you want important data to be kept almost physically separated from running programs, it's useful.
When you need to run programs you don't trust, it's useful.
When you prefer to separate contexts to avoid mixing up files / clipboard, like sharing some personal data in your workplace Slack, this can be useful.
When you want to use your computer without having to think about security and privacy, it's really not for you.
When you want to play video games, use 3D programs, or benefit from GPU hardware acceleration (for machine learning, video encoding/decoding), this won't work. With a second GPU, you could attach it to a VM, but it requires some time and dedication to get it working fine.
# Security
Qubes OS's security model relies on virtualization software (currently Xen), which is known to regularly have security issues. It can be debated whether virtualization is secure or not.
=> https://www.qubes-os.org/security/xsa/ Qubes OS security advisory tracker
# Conclusion
I think Qubes OS is a unique offering with its compartmentalization paradigm. However, the mindset and discipline required to use it efficiently make me warn that it's not for everyone, but rather for a niche user base.
The security achieved here is relatively higher than on other systems if used correctly, but it really hinders usability for many common tasks. What I like most is that Qubes OS gives you the tools to easily solve practical problems, like having to run proprietary and untrusted software.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/qubes-os-why.gmi</guid>
<link>gemini://perso.pw/blog//articles/qubes-os-why.gmi</link>
<pubDate>Tue, 20 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Using git bundle to synchronize a repository between Qubes OS dom0 and an AppVM</title>
<description>
<![CDATA[
<pre># Introduction
In a previous article, I explained how to use Fossil version control system to version the files you may write in dom0 and sync them against a remote repository.
I figured out how to synchronize a git repository between an AppVM and dom0; from the AppVM, it can then be synchronized with a remote repository if you want. This can be done using the git feature named bundle, which bundles git artifacts into a single file.
=> https://qubes-os.org Qubes OS project official website
=> https://git-scm.com/docs/git-bundle Git bundle documentation
=> https://dataswamp.org/~solene/2023-06-04-qubes-os-version-control-dom0.html Using fossil to synchronize data from dom0 with a remote fossil repository
# What you will learn
In this setup, you will create a git repository (this could be a clone of a remote repository) in an AppVM called Dev, and you will clone it from there into dom0.
Then, you will learn how to send and receive changes between the AppVM repo and the one in dom0, using git bundle.
# Setup
The first step is to have git installed in your AppVM and in dom0.
For the sake of simplicity for the guide, the path `/tmp/repo/` refers to the git repository location in both dom0 and the AppVM, don't forget to adapt to your setup.
In the AppVM Dev, create a git repository using `cd /tmp/ && git init repo`. We need a first commit for the setup to work, because we can't bundle commits if there are none. So, commit at least one file in that repo; if you have no idea what, you can write a short README.md file explaining what this repository is for.
In dom0, use the following commands:
qvm-run -u user --pass-io Dev "cd /tmp/repo/ && git bundle create - master" > /tmp/git.bundle
cd /tmp/ && git clone -b master /tmp/git.bundle repo
Congratulations, you cloned the repository into dom0 using the bundle file. The path `/tmp/git.bundle` is important because it's automatically set as the URL of the remote named "origin". If you want to manage multiple git repositories this way, use a different exchange file name for each repo.
[solene@dom0 repo]$ git remote -v
origin /tmp/git.bundle (fetch)
origin /tmp/git.bundle (push)
Back in the AppVM Dev, run the following command in the git repository; it configures the bundle file to use for the remote dom0. As previously, you can pick the name you prefer.
git remote add dom0 /tmp/dom0.bundle
# Workflow
Now, let's explain the workflow to exchange data between the AppVM and dom0. From here, we will only use dom0.
Create a file `push.sh` in your git repository with the content:
#!/bin/sh
REPO="/tmp/repo/"
BRANCH=master

# setup on the AppVM
# git remote add dom0 /tmp/dom0.bundle

git bundle create - origin/master..master | \
    qvm-run -u user --pass-io Dev "cat > /tmp/dom0.bundle"
qvm-run -u user --pass-io Dev "cd ${REPO} && git pull -r dom0 ${BRANCH}"
Create a file `pull.sh` in your git repository with the content:
#!/bin/sh
REPO="/tmp/repo/"
BRANCH=master

# init the repo on dom0
# git clone -b ${BRANCH} /tmp/git.bundle

qvm-run -u user --pass-io Dev "cd ${REPO} && git bundle create - dom0/master..${BRANCH}" > /tmp/git.bundle
git pull -r
Make the files `push.sh` and `pull.sh` executable.
If you don't want to have the files committed in your repository, add their names to the file `.gitignore`.
Now, you are able to send changes to the AppVM repo using `./push.sh`, and receive changes using `./pull.sh`.
If needed, those scripts could be made more generic and moved in a directory in your PATH instead of being used from within the git repository.
## Explanations
Here are some explanations about those two scripts.
### Push.sh
In the script `push.sh`, `git bundle` is used to write on stdout a bundle containing the artifacts from the last commit known to the AppVM up to the latest commit in the current repository, hence the origin/master..master range. This data is piped into the file `/tmp/dom0.bundle` in the AppVM, which was configured earlier as a remote of the AppVM repository.
Then, the command `git pull -r dom0 master`, run in the AppVM, fetches the changes from the bundle and rebases the AppVM repository, exactly like you would do with a "real" remote over the network.
### Pull.sh
In the script `pull.sh`, we run `git bundle` from within the AppVM Dev to generate on stdout a bundle covering the changes from the last state known to dom0 up to the latest commit in the branch master, and pipe it into the dom0 file `/tmp/git.bundle`; remember that this file is the remote origin in dom0's clone.
After the bundle creation, a regular `git pull -r` is used to fetch the changes, and rebase the repository.
### Using branches
If you use different branches, you may want to add an extra parameter to the scripts to make the variable BRANCH configurable, as sketched below.
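A minimal sketch of that change: replace the BRANCH assignment in both scripts with a positional parameter, keeping master as the default:
BRANCH="${1:-master}"
Then `./push.sh mybranch` would push that branch instead of master.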
# Conclusion
I find this setup really elegant: the safe `qvm-run` is used to exchange static data between dom0 and the AppVM, and no network is involved in the process. Now there is no reason left to keep dom0 configuration files untracked by a version control system :)
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/qubes-os-git-bundle.gmi</guid>
<link>gemini://perso.pw/blog//articles/qubes-os-git-bundle.gmi</link>
<pubDate>Mon, 19 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>OpenKuBSD progress report</title>
<description>
<![CDATA[
<pre># Introduction
Here is a summary of my progress writing OpenKuBSD. So far, I've hit a few blockers, but I've been able to find solutions, more or less simple and nice; overall, I'm really excited about how the project is turning out.
=> https://tildegit.org/solene/openkubsd OpenKuBSD source code on tildegit.org (current branch == PoC)
As a quick introduction to OpenKuBSD in its current state, it's a program to install on top of OpenBSD, using mostly base system tools.
- OpenBSD templates can be created and configured
- Kubes (VMs) inherit an OpenBSD template for their disk, except for a dedicated persistent /home; any change outside /home is reset on each boot
- Kubes have a nice name like "www.kube" to connect to
- NFS storage per Kube in /shared/; this allows data to be shared with the host, which can then move files between Kubes via the shared directories
- Xephyr-based compartmentalization for the GUI display. Each program runs in its own Xephyr server.
- Clipboard manipulation tool: a utility for copying the clipboard from one Xephyr to another one. This is a secure way to share the clipboard between Kubes without leakage.
- On-demand start and polling for ssh connection, so you don't have to pre-start a Kube before running a program.
- Executable `/home/openkubsd/rc.local` script at boot time to customize an environment at kube level rather than template level
- Desktop entry integration: a script is available to create desktop entries to run program X on Kube Y, directly from the menu
The Xephyr trick was hard to figure out and implement correctly. Originally, I used `ssh -Y`, which worked fine and integrated very well with the desktop; however:
- ssh -Y allows any window to access the X server, meaning any hacked VM could access all other running programs
- ssh -X is secure, but super bad: it's slow, can't have a custom layout, and crashes when programs try to access X in some ways. (Fun fact: on Fedora, ForwardX11Trusted seems to be set to yes by default, so ssh -X does ssh -Y!)
- Xephyr worked, but a program running in it didn't use the full display, so a window manager was required. All the tiling window managers I tried (to automatically use the whole screen) couldn't cope with the Xephyr window being resized... except stumpwm!
- A custom stumpwm configuration makes it quit when no more windows are displayed: when you exit your programs, stumpwm quits, and then Xephyr stops. A rough sketch of the idea follows.
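This is not OpenKuBSD's actual code; the display number and Kube name are made up for the illustration:
Xephyr :10 -resizeable &
DISPLAY=:10 stumpwm &
DISPLAY=:10 ssh -Y user@www.kube firefox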
# Demo videos
=> https://perso.pw/solene/openkubsd.mp4 OpenKuBSD: easily running programs from VMs
=> https://perso.pw/solene/openkubsd-nfs-desktop.mp4 OpenKuBSD: NFS shares and desktop entries
=> https://perso.pw/solene/openkubsd-clipboard.mp4 OpenKuBSD: Xephyr implementation and clipboard helper
# Roadmap
I'm really getting satisfied with the current result. It's still far from being ready to ship or feature complete, but I think the foundations are quite cool.
Next steps:
- tighten the network access of each Kube using PF (only NAT + host access + spoofing prevention)
- allow a Kube to have no NAT (communication would be restricted to the host, for ssh access only); this is the closest to a "no network" implementation I can achieve
- allow a Kube to get its NAT from another Kube (to provide a VPN Kube for a specific list of Kubes)
- figure out how to make a Tor VPN Kube
- allow creating disposable Kubes using the Tor VPN Kube network
Mid term steps:
- support Alpine Linux (with features matching what OpenBSD Kubes have)
Long term steps:
- rewrite the whole OpenKuBSD shell implementation as a daemon/client model: easier to install, more robust
- define a configuration file format to declare all the infrastructure
- release to wider audience
- open a bug tracker
# Conclusion
The project is still young, but I made important progress over the last two weeks; I may reduce the pace a bit now to let everything stabilize. I started using OpenKuBSD on my own computer, which helps a lot to refine the workflow, see which features matter, and tell which designs are wrong or right.
I hope you like that project as much as I do.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/openkubsd-progress-1.gmi</guid>
<link>gemini://perso.pw/blog//articles/openkubsd-progress-1.gmi</link>
<pubDate>Fri, 16 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>OpenKuBSD design document</title>
<description>
<![CDATA[
<pre># Introduction
I got an idea today (while taking a shower...): _partially_ reusing the Qubes OS design of separating contexts and programs with VMs, but doing so on OpenBSD.
To make things CLEAR: I won't reimplement Qubes OS entirely on OpenBSD. Qubes OS is an interesting operating system with a very strong focus on security (from a very practical point of view), but it's in my opinion overkill for most users, and hence not always practical or usable.
In the meantime, I think the core design could be reused and made easy for users, as we are used to doing in OpenBSD.
# Why this project?
I like the way Qubes OS separates things, letting you easily run a program through a VPN without affecting the rest of the system. Using it requires a different mindset: one has to think about data silos; what do I need for which context?
However, I don't really like that Qubes OS has so many open issues, that its governance isn't clear, and that Xen seems to create a lot of trouble with regard to hardware compatibility.
I'm sure I can provide a similar but lighter experience, at the cost of "less" security. My threat model is more about preventing data leaks in case of a compromised system or software than about protecting my computer from a government secret agency.
After spending two months using "immutable" distributions (openSUSE MicroOS, Vanilla OS, Silverblue), which all want you to use rootless containers (with podman) through distrobox, I hate that idea: it integrates poorly with the host, it's a nightmare to maintain, it can create issues due to different versions of programs altering your user data directory, and it doesn't bring much to the table except allowing users to install software without being root (and without having to reboot on those systems).
# Key features
Here is a list of features that I think good to implement.
- vmd-based OpenBSD and Alpine templates (installation automated); with the help of the qcow2 format for VM disks, it's possible to create a disk based on another, a must for using templates (see the sketch after this list)
- disposable VMs, they are started from the template but using a derived disk of the template, destroyed after use
- AppVM, a VM created with a persistent /home, and the rest of the system is inherited from the template using a derived qcow2 from template
- VPN VMs that could be used by other VMs as their network source (Tor VPN template should be provided)
- Simple configuration file describing your templates, your VMs, the packages installed (in templates), and which network source to use for which VM
- Installing software in templates will create .desktop files in menus to easily start programs (over `ssh -Y`)
- OpenBSD host should be USABLE (hardware acceleration, network handling, no perf issues)
- OpenBSD host should be able to transfer files between VMs using ssh
- Audio disabled by default on VMs, sndio could be allowed (by the user in a configuration file) to send the sound to the host
- Should work with at least 4 GB of memory (I would like to make just 2 as a requirement if possible)
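To illustrate the derived-disk idea with OpenBSD's vmctl (the file and VM names are hypothetical, this is not OpenKuBSD code): a derived image only stores the blocks that differ from its base, so a fresh disk is cheap to create and to throw away.
vmctl create -b openbsd-template.qcow2 disposable.qcow2
vmctl start -d disposable.qcow2 -m 1G disposable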
Here is a quick diagram explaining the relationships between the various components. It doesn't show the whole picture, because that wouldn't be easy to represent (and I didn't have time to try yet):
=> static/OpenKuBSD-design.svg OpenKuBSD design diagram
# What I don't plan to do
- HVM support and passthrough: this could be done one day if vmd supports passthrough, but it creates too many problems and only helps security for niche use cases I don't want to focus on
- USB passthrough: too complex to implement, too niche a use case
- VM RPC, except for the host being able to copy files from one VM to another using ssh
- An OpenBSD distribution, OpenKuBSD must be installable on top of OpenBSD with the least friction possible, not as a separate system
- Support Windows guests
# Roadmap
The first step is to make a proof of concept:
- generate the OpenBSD template automatically
- being able to start a disposable VM using the OpenBSD template
- generate an OpenBSD Tor template
- being able to use it in the disposable VM
# Trivia
I announced it as OpenKuBISD, but I prefer to name it OpenKuBSD :)
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/openkubsd-design.gmi</guid>
<link>gemini://perso.pw/blog//articles/openkubsd-design.gmi</link>
<pubDate>Tue, 06 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>The Old Computer Challenge V3</title>
<description>
<![CDATA[
<pre># Introduction
Hi! It's that time of the year when I announce a new Old Computer Challenge :)
If you don't know about it, it's a weird challenge I've done twice in the past 3 years: it consists of limiting my computer performance by using old hardware, or by limiting Internet access to 60 minutes a day.
=> https://dataswamp.org/~solene/tag-oldcomputerchallenge.html Blog posts tagged "oldcomputerchallenge"
# 2023's challenge
I want this challenge to be accessible. The first one wasn't easy for many, because it required an old machine, and many readers didn't have a spare old computer (weird, right? :P). The second one, with its Internet time limitation, was hard to set up.
This one goes a bit back to the roots: let's use a SLOW computer for 7 days. This can be achieved by various means, with any hardware:
- Limit your computer's CPU to a single core. This can be set in the BIOS most of the time; on Linux, you can add `maxcpus=1` to the boot command line, and on OpenBSD you can boot the `bsd.sp` kernel for the duration of the challenge.
- Limit your computer's memory to 512 MB (swap is not limited). On Linux, this can be set with the boot command-line parameter `mem=512MB`. On OpenBSD, this can be achieved a bit similarly with `datasize-max=512M` in login.conf for your user's login class (see the sketch after this list).
- Set your CPU frequency to the lowest minimum (which is pretty low on modern hardware!). On Linux, use the "powersave" frequency governor; in modern desktop environments, the battery widget should offer an easy way to set it. On OpenBSD, run `apm -L` (while the apmd service is running). On Windows, set the frequency to the minimum in the power settings.
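As a sketch of the login.conf approach, assuming your user is in the "staff" login class (adjust to your actual class):
staff:\
	:datasize-max=512M:\
	:datasize-cur=512M:\
	:tc=default:
Changes in login.conf apply at your next login.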
I got the idea when I remembered a few people using these tricks for the first challenge, like in this report:
=> https://portal.mozz.us/gemini/carcosa.net/journal/20210713-old-computer-challenge.gmi Carcosa's report of the first challenge (link via gemini http bridge)
You are encouraged to join the IRC channel #oldcomputerchallenge on the libera.chat server to share your experience.
Feel free to write reports; it's always fun to read about others going through the challenge.
# When
The challenge will start on the 10th of July 2023 and end on the 16th of July 2023 at the end of the day.
# Frequently asked questions
- If you use a computer for work, it isn't affected by the challenge; keep your job, please. But don't use it to circumvent your regular slow computer.
- If you already use a computer with lower specs, it complies with the challenge rules.
- Feel free to ask me questions; I want this to be easy for everyone so we can have fun together. I can update this blog post to make things clearer if needed.
- The GNOME desktop doesn't start with 512 MB of memory :D
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/old-computer-challenge-v3.gmi</guid>
<link>gemini://perso.pw/blog//articles/old-computer-challenge-v3.gmi</link>
<pubDate>Sun, 04 Jun 2023 00:00:00 GMT</pubDate>
</item>
<item>
<title>Qubes OS dom0 files workflow using fossil</title>
<description>
<![CDATA[
<pre># Introduction
Since I'm using Qubes OS, I have always faced an issue: I need proper tracking of the configuration files of my system. This can be done using Salt, as I explained in a previous blog post, but what I really want is a version control system allowing me to synchronize changes to a remote repository (it's absurd to back up dom0 for every change I make to a salt file). So far, git is too complicated for achieving that.
I gave fossil a try (a tool I like, which I wrote about too ;) ), and it was surprisingly easy to set up remote access leveraging Qubes' qvm-run.
In this blog post, you will learn how to set up a remote fossil repository, and how to use it from your dom0.
=> https://dataswamp.org/~solene/2023-01-29-fossil-scm.html Previous article about Fossil cheatsheet
# Repository creation
On the remote system where you want to store the fossil repository (it's a single file), run `fossil init my-repo.fossil`.
The only requirement for this remote system is to be reachable over SSH by an AppVM in your Qubes OS.
# dom0 clone
Now, we will clone this remote repository into our dom0; I'm personally fine with storing such files in the `/root/` directory.
In the following example, the file `my-repo.fossil` was created on the machine `10.42.42.200` with the path `/home/solene/devel/my-repo.fossil`. I'm using the AppVM `qubes-devel` to connect to the remote host using SSH.
[root@dom0 ~#] fossil clone --ssh-command "qvm-run --pass-io --no-gui -u user qubes-devel 'ssh'" ssh://10.42.42.200//home/solene/devel/my-repo.fossil /root/my-repo.fossil
This command clones the remote fossil repository, piping the SSH connection through the qubes-devel AppVM, which allows fossil to reach the remote host.
A cool fact with fossil's clone command: it keeps the proxy settings, so no further changes are required.
With a split-SSH setup, I'm asked for approval every time fossil synchronizes; by default, fossil has the "autosync" mode enabled, so for every commit the database is synced with the remote repository.
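If the approval prompts get in the way, autosync can be turned off, at the cost of syncing manually when needed (standard fossil settings):
fossil settings autosync off
fossil sync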
# Open the repository (reminder about fossil usage)
As I said, fossil works with repository files. Now that you cloned the repository to `/root/my-repo.fossil`, you could, for instance, open it in `/srv/` to manage all your custom changes to the dom0 salt files.
This can be achieved with the following command:
[root@dom0 ~#] cd /srv/
[root@dom0 ~#] fossil open --force /root/my-repo.fossil
The `--force` flag is needed because we need to open the repository in a non-empty directory.
# Conclusion
Finally, I figured out a proper way to manage my dom0 files, and my whole host. I'm very happy with this easy and reliable setup, especially since I'm already a fossil user. I don't really enjoy git, so demonstrating that alternatives work fine always feels great.
If you want to use Git, I have a hunch that something could be done using `git bundle`, but this requires some investigation.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/qubes-os-version-control-dom0.gmi</guid>
<link>gemini://perso.pw/blog//articles/qubes-os-version-control-dom0.gmi</link>
<pubDate>Wed, 07 Jun 2023 00:00:00 GMT</pubDate>
</item>
</channel>
</rss>