<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Solene'%</title>
<description></description>
<link>gemini://perso.pw/blog/</link>
<atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
<item>
<title>OpenBSD mirror over Tor / I2P</title>
<description>
<![CDATA[
<pre># Introduction
For an upcoming privacy-related article about OpenBSD, I needed to set up access to an OpenBSD mirror through both a Tor hidden service and I2P.
The server does not contain any data: it only acts as a proxy fetching files from a random existing OpenBSD mirror, so it does not waste bandwidth mirroring everything (it does not have the required storage anyway). A small cache keeps the most requested files locally.
=> https://en.wikipedia.org/wiki/I2P Wikipedia page about I2P protocol
=> https://en.wikipedia.org/wiki/The_Tor_Project Wikipedia page about Tor
It is only useful if you cannot reach the OpenBSD mirrors, or if you really need to hide your network activity; Tor or I2P will be much slower than connecting to a mirror over HTTP(S).
Now that these services exist, let me explain how to start using them.
# Tor
Using a client with the Tor proxy enabled, you can reach the following address to download installers or sets.
=> http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/pub/OpenBSD/ OpenBSD onion mirror over Tor
If you want to install or update your packages over Tor, you can use the onion address in `/etc/installurl`. However, it will not work for sysupgrade and syspatch, and you need to export the variable `FETCH_CMD="/usr/local/bin/curl -L -s -q -N -x socks5h://127.0.0.1:9050"` in your environment so the `pkg_*` programs can use the mirror.
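Put together, the package setup could look like this (a sketch assuming the default Tor SOCKS port 9050 and curl installed from packages, adapt to your system):
echo "http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/pub/OpenBSD/" > /etc/installurl
# make the pkg_* tools download through the Tor SOCKS proxy
export FETCH_CMD="/usr/local/bin/curl -L -s -q -N -x socks5h://127.0.0.1:9050"
pkg_add -u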
To make sysupgrade or syspatch able to use the onion address, you need the program `torsocks` installed, and you must patch the scripts to use torsocks:
- `sed -i 's,ftp -N,/usr/local/bin/torsocks &,' /usr/sbin/sysupgrade` for sysupgrade
- `sed -i 's,ftp -N,/usr/local/bin/torsocks &,' /usr/sbin/syspatch` for syspatch
These patches will have to be reapplied after each sysupgrade run.
# I2P
If you have a client with an I2P proxy enabled, you can reach the following address to download installers or sets.
=> http://2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p:8081/pub/OpenBSD/ OpenBSD mirror address over I2P
If you want to install or update your packages over I2P, install i2pd with `pkg_add i2pd`, then edit the file `/etc/i2pd/i2pd.conf` to set `notransit = true`, unless you want to act as an I2P relay (high CPU/bandwidth consumption).
Replace the file `/etc/i2pd/tunnels.conf` by the following content (or adapt your current tunnels.conf if you configured it earlier):
[MIRROR]
type = client
address = 127.0.0.1
port = 8080
destination = 2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p
destinationport = 8081
keys = mirror.dat
Now, enable and start i2pd with `rcctl enable i2pd && rcctl start i2pd`.
After a few minutes, once i2pd has established tunnels, you should be able to browse the mirror over I2P using the address `http://127.0.0.1:8080/`. You can change the port 8080 to another one you prefer by modifying the file `tunnels.conf`.
You can use the address `http://127.0.0.1:8080/pub/OpenBSD/` in `/etc/installurl` to automatically use the I2P mirror for installing/updating packages, or for keeping your system up to date with syspatch/sysupgrade.
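To sum up, the I2P package setup boils down to a few commands (a sketch assuming the tunnel configuration above with port 8080):
rcctl enable i2pd
rcctl start i2pd
# wait a few minutes for tunnels, then point the package tools at the local I2P tunnel
echo "http://127.0.0.1:8080/pub/OpenBSD/" > /etc/installurl
pkg_add -u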
# Conclusion
There was no way to download OpenBSD files over Tor or I2P for people who really need it; now there is.
If you encounter issues with the service, please let me know.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/openbsd-privacy-friendly-mirror.gmi</guid>
<link>gemini://perso.pw/blog//articles/openbsd-privacy-friendly-mirror.gmi</link>
<pubDate>Sat, 25 May 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Organize your console with tmuxinator</title>
<description>
<![CDATA[
<pre># Introduction
This article is about the program tmuxinator, a tool to script the generation of tmux sessions from a configuration file.
=> https://github.com/tmuxinator/tmuxinator tmuxinator official project website on GitHub
This program is particularly useful when you have repeated tasks to achieve in a terminal, or if you want to automate your tmux session to save your fingers from always typing the same commands.
tmuxinator is packaged in most distributions and requires tmux to work.
# Configuration
tmuxinator requires a configuration file for each "session" you want to manage with it. It provides a command line parameter to generate a file from a template:
$ tmuxinator new name_here
By default, it will create the YAML file for this project in `$HOME/.config/tmuxinator/name_here.yml`. If you want the project file to be created in the current directory instead (to make it part of a versioned project repository, for instance), you can add the parameter `--local`.
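For example, to keep the project file inside a repository you are working on (hypothetical project name):
$ cd ~/projects/my-project
$ tmuxinator new --local my-project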
# Real world example
Here is a tmuxinator configuration file I use to automate the following tasks; the commands include a lot of monitoring as I love watching progress and statistics:
- update my ports tree using git before any other task
- run a script named dpb.sh
- open a shell and cd into a directory
- run an infinite loop displaying ccache statistics
- run an infinite loop displaying a MFS mount point disk usage
- display top
- display top for user _pbuild
I can start all of this using `tmuxinator start dpb`, or stop only these "parts" of tmux with `tmuxinator stop dpb` which is practical when using tmux a lot.
Here is my file `dpb.yml`:
name: dpb
root: ~/
# Runs on project start, always
on_project_start: cd /usr/ports && doas -u solene git pull -r
windows:
  - dpb:
      layout: tiled
      panes:
        - dpb:
          - cd /root/packages/packages
          - ./dpb.sh -P list.txt -R
        - watcher:
          - cd /root/logs
          - ls -altrh locks
          - date
        - while true ; do clear && env CCACHE_DIR=/build/tmp/pobj/.ccache/ ccache -s ; sleep 5 ; done
        - while true ; do df -h /build/tmp/pobj_mfs/ | grep % ; sleep 10 ; done
        - top
        - top -U _pbuild
# Going further
Tmuxinator could be used to ssh into remote servers, connect to IRC, open your email client, clean up stuff: there are no limits.
It is particularly easy to configure as it does not try to run commands itself, it only sends keys to each tmux pane, which means it types keystrokes just as if you did. In the example above, you can see how the pane "dpb" can cd into a directory and then run a command, or how the pane "watcher" can run multiple commands and leave the shell as is.
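To illustrate the remote servers use case, here is a small hypothetical project file (not from my setup) that opens SSH sessions to two servers and an IRC client:
name: servers
root: ~/
windows:
  - remote:
      layout: tiled
      panes:
        - ssh admin@server1.example.com
        - ssh admin@server2.example.com
        - irssi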
# Conclusion
I had known about tmuxinator for a while, but I never gave it a try before this week. I really regret not doing it earlier. Not only does it allow me to "script" my console usage, but I can also embed some development configuration into my repositories. While you can use it as an automation method, I would not rely too much on it though, as it only types blindly on the keyboard.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/potw-tmuxinator.gmi</guid>
<link>gemini://perso.pw/blog//articles/potw-tmuxinator.gmi</link>
<pubDate>Mon, 20 May 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>What is going on in Nix community?</title>
<description>
<![CDATA[
<pre># Introduction
You may have heard about issues within the Nix/NixOS community; this blog post will try to help you understand what is going on.
Please note that it is hard to get a grasp of the big picture: it is more a long-term feeling that the project governance was wrong (or absent?) and that people got tired.
This blog post was written from my own knowledge and feelings; I clearly do not represent the community.
=> https://save-nix-together.org/ Save Nix Together: an open letter to the NixOS foundation
=> https://xeiaso.net/blog/2024/much-ado-about-nothing/ Xe blog post: Much ado about nothing
There is a milestone in the Nixpkgs GitHub project tracking maintainer departures.
=> https://github.com/NixOS/nixpkgs/milestone/27 GitHub milestone 27: Maintainers leaving
# Project structure
First, it is important to understand how the project works.
Nix (and NixOS, but that is not the core of the project) was developed by Eelco Dolstra in the early 2000s. The project is open source, available on GitHub, and everyone can contribute.
Nix is a tool to handle packaging in a certain way, and it has another huge repository (a top 10 GitHub repo) called nixpkgs that contains all the package definitions. nixpkgs is known to be the most up-to-date and biggest repository of packages, thanks to heavy automation and a huge community.
The NixOS Foundation (that is the name of the entity managing the project) has a board that steers the project in some direction and handles questions. The first problem is that it is known to be slow to act and respond.
Making huge changes to Nix or nixpkgs requires writing an RFC (Request For Comments) explaining the rationale behind the change, and a consensus has to be found with others (it is somewhat democratic). Eelco decided a while ago to introduce a huge change in Nix (called Flakes) without going through the whole RFC process. This created a lot of tension and criticism, because they should have gone through the process like everyone else, and because the feature is half-baked but got traction anyway: the Nix paradigm is now split between two different modes that are not really compatible.
=> https://github.com/NixOS/rfcs/pull/49#issuecomment-659372623 GitHub Pull request to introduce Flakes: Eelco Dolstra mentioning they could merge it as experimental
There are also issues related to some sponsors of the Nix conferences, like companies tied to the military, but this is better explained in the links above, so I will not recap it here.
# Company involvement
This point is what made me leave the NixOS community. I worked for a company called Tweag, which has been involved in Nix for a while and pays people to contribute to Nix and nixpkgs to improve the user experience for its clients. This made me realize the impact of companies on open source, and the more involved I got, the more I realized that Nix was mostly driven by companies paying developers to improve the tool for business.
Paying people to develop features or fix bugs is fine, but when a huge number of contributors are paid by companies, it leads to poor decisions and conflicts of interest.
In the current situation, Eelco Dolstra published a blog post to remind everyone that the project is open source and belongs to its contributors.
=> https://determinate.systems/posts/on-community-in-nix/ Eelco Dolstra blog post
The thing that puzzles me in this blog post is that most people at Determinate Systems (the company Eelco co-founded) are deeply involved in Nix in various ways. In this situation, it is complicated for contributors to separate what they want for the project from what their employer wants. It is common for Nix contributors to contribute wearing both hats.
# Conclusion
Unfortunately, I am not really surprised this is happening. When a huge majority of people spend their free time contributing to a project they love while companies relentlessly quiet their voice, it just can't work.
I hope the Nix community will be able to sort this out and keep contributing to the project they love. This is open source and libre software: most affected people contribute because they like doing so, and they do not deserve what is happening, but it never came with any guarantees either.
# Extra: Why did I stop using Nix?
I don't think this deserves a dedicated blog post, so here are a few words.
From my experience, contributing to Nix was complicated. Sometimes changes could be committed within minutes, leaving no time for others to review them, and sometimes a PR could take months or years because of nitpicking and the maintainer losing faith.
Another reason I stopped using Nix is that it is quite easy to get nixpkgs commit access (I don't have commit access myself, I never wanted to inflict the Nix language on myself). A supply chain attack would be easy to achieve in my opinion: there are so many commits that it is impossible for a trusted group to review everything, and there are too many contributors to be sure they are all trustworthy.
# Alternative to Nix/NixOS?
If you do not like the Nix/NixOS governance, it could be time to take a look at Guix, a Nix fork started in 2012. It has a much smaller community than Nix, but its tooling, package set and community are anything but at rest.
Guix being a 100% libre software project, it does not target macOS like Nix does, nor will it include/package proprietary software. For that second "problem", however, there is an unofficial repository called guix-nonfree that contains many packages like firmware and proprietary software; most users will want to include this repo.
Guix is old school: people exchange over IRC and send git diffs over email, so please do not bother them if this is not your cup of tea. On top of that, Guix uses the Scheme programming language (a Lisp-1), and if you want to work with this language, Emacs is your best friend (try geiser-mode!).
=> https://guix.gnu.org/ Guix official project webpage
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/nix-internal-crisis.gmi</guid>
<link>gemini://perso.pw/blog//articles/nix-internal-crisis.gmi</link>
<pubDate>Sat, 27 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>OpenBSD scripts to convert wg-quick VPN files</title>
<description>
<![CDATA[
<pre># Introduction
If you use a commercial VPN, you may have noticed they all provide WireGuard configurations in the wg-quick format, which is not convenient to use directly on OpenBSD.
As I currently work a lot for a VPN provider, I often have to play with configurations, and I really needed a script to ease my work.
I made a shell script that turns a wg-quick configuration into a hostname.if compatible file, for full integration into OpenBSD. This is practical if you always want to connect to a given VPN server, not for temporary connections.
=> https://man.openbsd.org/hostname.if OpenBSD manual pages: hostname.if
=> https://git.sr.ht/~solene/wg-quick-to-hostname-if Sourcehut project: wg-quick-to-hostname-if
# Usage
It is really easy to use: download the script and mark it executable, then run it with your wg-quick configuration as a parameter, and it will print the hostname.if file to standard output.
wg-quick-to-hostname-if fr-wg-001.conf | doas tee /etc/hostname.wg0
The generated file uses a trick to dynamically figure out the current default route, which is required to keep a non-VPN route to the VPN gateway.
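To give an idea, such a trick could look like this hypothetical hostname.if line keeping a direct route to the VPN endpoint through the current default gateway (made-up endpoint address; the script's actual output is the reference):
!route add -host 198.51.100.7 $(route -n get default | awk '/gateway:/ {print $2}')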
# Short VPN sessions
When I shared my script on Mastodon, Carlos Johnson shared their own script, which is pretty cool and complementary to mine.
If you prefer to establish a VPN for a limited session, you may want to take a look at his script.
=> https://gist.github.com/callemo/aea83a8d0e1e09bb0d94ab85dc809675#file-wg-sh Carlos Johnson GitHub: file-wg-sh gist
# Prevent leaks
If you need your WireGuard VPN to be leakproof (= no network traffic should leave the machine outside the VPN unless it is toward the VPN gateway), you should absolutely do the following:
- keep the WireGuard interface in rdomain 0
- establish the WireGuard tunnel over another rdomain (the one holding the physical interface)
- use PF to block traffic on that other rdomain that is not toward the VPN gateway (a minimal sketch is shown after the link below)
- use the VPN provider DNS or a no-log public DNS provider
=> https://dataswamp.org/~solene/2021-10-09-openbsd-wireguard-exit.html Older blog post: WireGuard and rdomains
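To illustrate the PF part, here is a minimal hypothetical ruleset, assuming the physical uplink lives in rdomain 1 and the VPN endpoint is 198.51.100.7 on UDP port 51820 (adapt to your provider):
# drop everything leaving through the uplink routing domain...
block drop on rdomain 1
# ...except the WireGuard traffic toward the VPN gateway
pass out on rdomain 1 inet proto udp to 198.51.100.7 port 51820
Depending on your uplink, you may also need to allow DHCP traffic on that rdomain.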
# Conclusion
OpenBSD's ability to configure WireGuard VPNs with ifconfig has always been an incredible feature, but converting from wg-quick files was never much fun. Now, using a commercial VPN got a lot easier thanks to a few pieces of shell.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/openbsd-wg-quick-converter.gmi</guid>
<link>gemini://perso.pw/blog//articles/openbsd-wg-quick-converter.gmi</link>
<pubDate>Mon, 29 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>A Stateless Workstation</title>
<description>
<![CDATA[
<pre># Introduction
I have always had an interest in practical security on computers, be they workstations or servers. Many kinds of threats exist for users and system administrators; it is up to them to define a threat model to know what is acceptable or not. Nowadays, we have choice in the operating system land to pick what works best for that threat model: OpenBSD with its continuous security mechanisms, Linux with hardened flags (too bad grsec isn't free anymore), Qubes OS to keep everything separated, immutable operating systems like Silverblue or MicroOS (in my opinion they don't bring much to the security table though), etc.
My threat model has always been the following: an exploit on my workstation remaining unnoticed almost forever, stealing data and capturing the keyboard continuously. This one would be particularly bad because I have access to many servers through SSH, like OpenBSD servers. Protecting against that is particularly complicated; the best mitigations I found so far are to use Qubes OS with disposable VMs or to restrict outbound network, but it's not practical.
My biggest gripe with computers has always been "state". What is state? It is what distinguishes one computer from another: installed software, configuration, data at rest (pictures, documents, etc.). We keep state because we don't want to lose work, and we want our computers to hold our preferences.
But what if I could go stateless? The best defense against data stealer is to own nothing, so let's go stateless!
# Going stateless
My idea is to be able to use any computer around, and be able to use it for productive work, but it should always start fresh: stateless.
A stateless productive workstation obviously has challenges: How would it help with regard to security? How would I manage passwords? How would I work on a file over time? How to achieve this?
I have been able to address each of these questions. I am now using a stateless system.
> States? Where we are going, we don't need states! (certainly Doc Brown in a different timeline)
## Data storage
It is obvious that we need to keep files for most tasks. This setup requires a way to store files on a remote server.
Here are different methods to store files:
- Nextcloud
- Seafile
- NFS / CIFS over VPN
- iSCSI over VPN
- sshfs / webdav mount
- Whatever works for you
Encryption could be done locally with tools like cryfs or gocryptfs, so only encrypted files would be stored on the remote server.
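For example with gocryptfs, only the encrypted directory would live in the synchronized folder (a sketch with made-up paths):
# one-time initialization of the encrypted directory inside the synced folder
gocryptfs -init ~/Seafile/vault
# mount a clear-text view locally to work on the files
gocryptfs ~/Seafile/vault ~/vault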
Nextcloud's end-to-end encryption should not be used as of April 2024, it is known to be unreliable.
Seafile, a lesser-known alternative to Nextcloud focused only on file storage, supports end-to-end encryption and is reliable. I chose this one as I had a good experience with it 10 years ago.
Having access to the data storage in a stateless environment comes with an issue: getting the credentials to access the files. Passwords should be handled differently.
## Password management
When going stateless, the first step required after a boot will be to access the password manager, otherwise you would be locked out.
The passwords must be reachable from anywhere on the Internet, protected by a passphrase you know and/or a hardware token you have (and why not 2FA on top).
A self-hosted solution is vaultwarden (it used to be named bitwarden_rs), it's an open source reimplementation of Bitwarden server.
Any proprietary service offering password management could work too.
A keepassxc database on a remote storage service for which you know the password could also be used, but it is less practical.
## Security
The main driving force for this project is to increase my workstation security, I had to think hard about this part.
Going stateless requires a few changes compared to a regular workstation:
- data should be stored on a remote server
- passwords should be stored on a remote server
- a bootable live operating system
- programs to install
This is mostly a paradigm change with pros and cons compared to a regular workstation.
Data and passwords stored in the cloud? This is not really an issue when using end-to-end encryption, as long as the software is trustworthy and its code is correct.
A bootable live operating system is quite simple to acquire. There is a ton of Linux distributions able to boot from a CD or from USB, and non-Linux live systems exist as well. A bootable USB device could be compromised while a CD is an immutable medium, but there are USB devices such as the Kanguru FlashBlu30 with a physical switch to make the device read-only. A USB device could also be removed immediately after boot, making it safe. And if you stop trusting a USB device, just buy a new memory stick and rewrite it from scratch.
=> https://www.kanguru.com/products/kanguru-flashblu30-usb3-flash-drive Product page: Kanguru FlashBlu30
As for installed programs, it is fine as long as they are packaged and signed by the distribution, the risks are the same as for a regular workstation.
The system should be more secure than a typical workstation because:
- the system never has access to all the data at once, the user is supposed to only pick what they need for a given task
- any malware that managed to reach the system would not persist to the next boot
The system would be less secure than a typical workstation because:
- remote servers could be exploited (or offline, not a security issue but…), this is why end-to-end encryption is a must
To circumvent this, I only have the password manager service reachable from the Internet, which then allows me to create a VPN to reach all my other services.
## Ecology
I think this dimension deserves to be analyzed for such a setup. A stateless system requires remote servers to run, and it uses bandwidth to reinstall programs at each boot. It is less ecological than a regular workstation, but at the same time it may also enforce some kind of rationalization of computer usage, because it is a bit less practical.
## State of the art
Here is a list of setups that already exist and could provide a stateless experience, with support for either a custom configuration or a mechanism to store files (like SSH or GPG keys, although a USB smart card would be better for those):
- NixOS with impermanence, this is an installed OS, but almost everything on disk is volatile
- NixOS live-cd generated from a custom config
- Tails, comes with a mechanism to locally store encrypted files, privacy-oriented, not really what I need
- Alpine with LBU, comes with a mechanism to locally store encrypted files and cache applications
- FuguITA, comes with a mechanism to locally store encrypted files (OpenBSD based)
- Guix live-cd generated from a custom config
- Arch Linux generated live-cd
- Ubuntu live-cd, comes with a mechanism to retrieve files from a partition named "casper-rw"
Otherwise, any live system could just work.
Special bonus to the NixOS and Guix generated live-CDs, as you can choose which software will be included, in its latest version. Similar bonus for Alpine with LBU: packages are always installed from a local cache, which means you can update them.
A live-CD generated a few months ago is certainly not up to date anymore.
# My experience
I decided to go with Alpine and its LBU mechanism. It is not 100% stateless, but it hits the perfect spot between "I have to bootstrap everything from scratch" and "I can reduce the burden to a minimum".
=> https://dataswamp.org/~solene/2023-07-14-alpine-linux-from-ram-but-persistent.html Earlier blog post: Alpine Linux from RAM but persistent
My setup requires two USB memory sticks:
- one with the Alpine installer; upgrading to a newer Alpine version only requires me to write the new release on that stick
- a second one to store the package cache and some settings such as the package list and specific changes in /etc (user name, password, services)
While it is not 100% stateless, the files on the second memory stick are just a way to get a working customized Alpine.
This is a pretty cool setup, it boots really fast as all the packages are already cached on the second memory stick (packages are signed, so it is safe). I made a Firefox profile with settings and extensions, so it is always fresh and ready when I boot.
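The day-to-day LBU workflow is short; it boils down to something like this, assuming the second memory stick is the LBU media and is mounted under /media/usb (device names will differ on your system):
# keep downloaded packages on the memory stick so later boots install from the cache
setup-apkcache /media/usb/cache
# save the current /etc customizations (package list, users, services) to the stick
lbu commit -d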
I decided to go with the following stack, entirely self-hosted:
- Vaultwarden for passwords
- Seafile for data (behind VPN)
- Nextcloud for calendar and contacts (behind VPN)
- Kanboard for task management (behind VPN)
- Linkding for bookmarks (behind VPN)
- WireGuard for VPN
This setup gave me freedom. Now, I can bootstrap into my files and passwords from any computer (a trustworthy USB memory stick is advisable though!).
I can also boot any kind of operating system on any of my computers, it became so easy it's refreshing.
I do not make use of dotfiles or stored configurations because I use vanilla settings for most programs, but a git repository could be used to fetch all settings quickly.
=> https://github.com/dani-garcia/vaultwarden Vaultwarden official project website
=> https://www.seafile.com/en/home/ Seafile official project website
=> https://nextcloud.com/ Nextcloud official project website
=> https://kanboard.org/ Kanboard official project website
=> https://github.com/sissbruecker/linkding Linkding official project website
# Backups
A tricky part with this setup is doing serious backups. The method will depend on the setup you chose.
With my self-hosted stack, restic makes a daily backup to two remote locations, but I must still be able to reach the backups if my services become unavailable due to a server failure.
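As a purely illustrative sketch (not my exact setup), such a daily restic run toward one remote location could be as simple as:
# assumes the repository was created beforehand with "restic init"
restic -r sftp:backup@backup1.example.com:/backups/seafile backup /srv/seafile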
If you use proprietary services, they likely handle backups, but it is better not to trust them blindly and to check out all your data on a regular schedule to make a proper backup.
# Conclusion
This is an interesting approach to workstation management that I needed to try. I really like how it freed me from worrying about each workstation: they are now all disposable.
I made a mind map for this project, you can view it below, it may be useful to better understand how things articulate.
=> static/stateless_computing-fs8.png Stateless computing mind mapping document
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/workstation-going-stateless.gmi</guid>
<link>gemini://perso.pw/blog//articles/workstation-going-stateless.gmi</link>
<pubDate>Tue, 23 Apr 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Lessons learned with XZ vulnerability</title>
<description>
<![CDATA[
<pre># Intro
Yesterday, Red Hat announced that the xz library was badly compromised and could be used as a remote code execution vector. It is still not clear exactly what is going on, but you can learn about it in the following GitHub discussion, which also links to the original posts:
=> https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27 Discussion about xz being compromised
# What's the state?
As far as we currently know, xz 5.6.0 and 5.6.1 contain some heavily obfuscated code that would only trigger in sshd, and only when:
- the system is running systemd
- openssh is compiled with a patch to add a feature related to systemd
- the system is using glibc (this is mandatory for systemd systems afaik anyway)
- xz package was built using release tarballs published on GitHub and not auto-generated tarballs, the malicious code is missing in the git repository
So far, it seems openSUSE Tumbleweed, Fedora 40 and 41, and Debian sid were affected and vulnerable. Nobody knows exactly what the payload does yet; once security researchers get their hands on it, we will know more.
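A quick way to check whether a system ships one of the affected releases is to ask the binary itself; anything reporting version 5.6.0 or 5.6.1 matches the compromised tarballs:
$ xz --version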
OpenBSD, FreeBSD, NixOS and Qubes OS (dom0 + official templates) are unaffected. I didn't check for other but Alpine and Guix shouldn't be vulnerable either.
=> https://security.gentoo.org/glsa/202403-04 Gentoo security advisory (unaffected)
# What lessons could we learn?
It is really unfortunate that a piece of software as important and harmless in appearance got compromised. This made me think about how we could best protect against this kind of issue, and I came to the following conclusions:
- packages should be built from the source code repository instead of tarballs whenever possible (sometimes tarballs contain vendored code which would be cumbersome to pull otherwise), at least we would know what to expect
- public network services that should only be used by known users (like openssh, an imap server in a small company, etc.) should be run behind a VPN
- the OpenBSD style of having a base system developed as a whole by a single team is great, such a vulnerability can barely happen there (in the base system only, ports aren't audited)
- whenever possible, separate each network service within its own operating system instance (using hardware machines, virtual machines or even containers)
- avoid daemons running as root as much as possible
- use opensnitch on workstations (linux only)
- control outgoing traffic whenever you can afford to
I don't have much of an opinion about what could be done to protect the supply chain. As a packager, it is not possible to audit the code of each software we update. My take on this is that we have to deal with it; xz is certainly not the only vulnerable library running in production.
However, the risks could be reduced by:
- using less programs
- using less complex programs
- compiling programs with less options to pull in less dependencies (FreeBSD and Gentoo both provide this feature and it's great)
# Conclusion
I actually had two systems running the vulnerable libs, on openSUSE MicroOS which updates very aggressively (daily update + daily reboot). There is no magic balance between "update as soon as possible" and "wait for some people to take the risks first".
I'm going to rework my infrastructure to expose the bare minimum to the Internet, and use a VPN for all the services meant for known users. The peace of mind obtained will be far greater than the burden of setting up WireGuard VPNs.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/lessons-learned-xz-vuln.gmi</guid>
<link>gemini://perso.pw/blog//articles/lessons-learned-xz-vuln.gmi</link>
<pubDate>Sat, 30 Mar 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Cloud gaming review using Playstation Plus</title>
<description>
<![CDATA[
<pre># Introduction
While testing the cloud gaming service GeForce Now, I learned that PlayStation also has an offer.
Basically, if you use a PlayStation 4 or 5, you can subscribe to the first two tiers to get some services and a games library, but the last tier (Premium) adds more content AND allows you to play video games on a computer with their client, no PlayStation required. I already had the second tier subscription, so I paid the small extra to switch to Premium in order to experiment with the service.
=> https://www.playstation.com/en-us/ps-plus/ PlayStation Plus official website
# Game library
Compared to GeForce Now, while you are subscribed you have a huge game library at hand. This makes the service a lot cheaper if you are happy with the content. The service costs 160$€ / year if you pay for 12 months, which is roughly the price of 2 AAA games nowadays...
# Streaming service
The service is only available through the PlayStation Plus Windows program. It's possible to install it on Linux, but it will use more CPU because hardware decoding doesn't seem to work under Wine (even wine-staging with VA-API compatibility checked).
There is no client for Android, and you can't use it in a web browser. The Xbox Game Pass streaming and GeForce Now services have all of that.
Sadness starts here. The service is super promising, but the application is currently a joke.
If you don't plug in a PS4 controller (named DualShock 4), you can't use the "touchpad" button, which is mandatory to start a game in Tales of Arise and very important in many games. If you have a different controller, on Windows you can use the program "DualShock 4 emulator" to emulate it; on Linux it's impossible to use, even with a genuine controller.
A PS5 controller (DualSense) is NOT compatible with the program, the touchpad won't work.
=> https://github.com/r57zone/DualShock4-emulator DualShock4 emulator GitHub project page
Obviously, you can't play without a controller, except if you use a program to map your keyboard/mouse to a fake controller.
# Gaming quality
There are absolutely no settings in the application, you just run a game by clicking on it. Did I mention there is no way to search for a game?
I guess games are streamed in 720p, but I'm not sure; putting the application full screen didn't degrade the quality, so maybe it's 1080p but doesn't go full screen when you run it...
Frame rate... this sucks. Games seem to run on an original PS4, not a PS4 Pro which would allow 60 fps. In most games you are stuck with 30 fps and an insane input lag. I haven't been able to cope with AAA games like God of War or Watch Dogs Legion, it was horrible.
Independent games like Alex Kidd remaster, Monster Boy or Rain World did feel very smooth though (60fps!), so it's really an issue with the hardware used to run the games.
Don't expect any PS5 games in streaming from Windows, there are none.
The service allows PlayStation users to play all games from the library (including PS5 games) in streaming up to 2160p@120fps, but not the application users. This feature is only useful if you want to try a game before installing it, or if your PlayStation storage is full.
# Cloud saving
This is fun here too. There are game saves in the PlayStation Plus program's cloud, but if you also play on a PlayStation, its saves are sent to a different storage than the PlayStation cloud saves.
There is a horrible menu to copy saves from one pool to the other.
This is not an issue if you only use the streaming application or the PlayStation, but it gets very hard to figure out where your save is if you play on both.
# Conclusion
I have been highly disappointed by the streaming service (outside of PlayStation use). The Windows program required signing in twice before working (I tried on 5 devices!), most interesting games run poorly due to the PS4 hardware, and there is no way to enable the performance mode that was added to many games to support the PS4 Pro. This is pretty curious, as streaming from a PlayStation device is a stellar experience: it's super smooth, high quality, no input lag, no waiting, crystal clear picture.
No Android application? Curious... No support for a genuine PS5 controller? WTF?
The service is still young, I really hope they will work at improving the streaming ecosystem.
At least, it works reliably and pretty well for simpler games.
It could be a fantastic service if the following requirements were met:
- proper hardware to run games at 60fps
- greater controller support
- allow playing in a web browser, or at least allow people to run it on smartphones with a native application
- an open source client while there
- merged cloud saves
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/playstation-plus-streaming-review.gmi</guid>
<link>gemini://perso.pw/blog//articles/playstation-plus-streaming-review.gmi</link>
<pubDate>Sat, 16 Mar 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Cloud gaming review using Geforce Now</title>
<description>
<![CDATA[
<pre># Introduction
I'm finally done with ADSL now as I got access to optical fiber last week! It was time for me to try cloud gaming again and see how it improved since my last use in 2016.
If you are not familiar with cloud gaming, please do not run away, here is a brief description: cloud gaming refers to a service allowing one to play locally a game running on a remote machine (either on the local network or across the Internet).
There are a few commercial services available, mainly: GeForce Now, PlayStation Plus Premium (other tiers don't have streaming), Xbox game pass Ultimate and Amazon Luna. Two major services died in the long run: Google Stadia and Shadow (which is back now with a different formula).
A note on Shadow, they are now offering access to an entire computer running Windows, and you do what you want with it, which is a bit different from other "gaming" services listed above. It's expensive, but not more than renting an AWS system with equivalent specs (I know some people doing that for gaming).
This article is about the service Nvidia GeForce Now (not sponsored, just to be clear).
I tried the free tier, premium tier and ultimate tier (thanks to people supporting me on Patreon, I could afford the price for this review).
=> https://www.nvidia.com/en-us/geforce-now/ Geforce Now official page
=> https://play.geforcenow.com/mall/ Geforce Now page where you play (not easy to figure after a login)
# The service
This is the first service I tried in 2016 when I received an Nvidia Shield HTPC, the experience was quite solid back in the days. But is it good in 2024?
The answer is clear: yes, it's good, but it has limitations you need to be aware of. The free tier allows playing for a maximum of 1 hour per session, with a waiting queue that can be fast (< 1 minute) or long (> 15 minutes); the average waiting time I had was around 9 minutes. The waiting queue also displays ads now.
The premium tier at 11€$/month removes the queue system by giving you priority over free users, always assigns an RTX card and allows playing up to 6 hours in a single session (you just need to start a new session if you want to continue).
Finally, the ultimate tier costs 22€$/month and allows you to play in 4K@120fps on a RTX 4080, up to 8h.
The tiers are quite good in my opinion, you can try and use the service for free to check if it works for you, then the premium tier is affordable to be used regularly. The ultimate tier will only be useful to advanced gamers who need 4K, or higher frame rates.
Nvidia just released a new offer in early March 2024: a premium daily pass for $3.99 or an ultimate daily pass for 8€. This is useful if you want to evaluate a tier before paying for 6 months. You will understand later why this daily pass can be useful compared to buying a full month.
# Operating system support
I tried the service using a Steam Deck, a Linux computer over Wi-Fi and Ethernet, a Windows computer over Ethernet and in a VM on Qubes OS. The latency and quality were very different.
If you play in a web browser (Chrome-based, Edge, Safari), make sure it supports hardware-accelerated video decoding; this is the default on Windows but a huge struggle on Linux. Chrome/Chromium support is recent and can be enabled using `chromium --enable-features=VaapiVideoDecodeLinuxGL --use-gl=angle`. There is a Linux Electron app, but it does nothing more than bundling the web page in Chromium, without acceleration.
In a web browser, the codec is limited to h264, which does not handle dark areas well; it is less effective than advanced codecs like AV1 or HEVC (commonly known as h265). If your web browser can't handle the stream, it will lose packets, and the GeForce Now service will instantly reduce the quality until you stop losing packets, which makes things very ugly until it recovers, and then it drops again. Using hardware acceleration solves the problem almost entirely!
Web browser clients are also limited to 60 fps (so the ultimate tier is useless there), and Windows web browsers can support 1440p but no more.
On Windows and Android you can install a native GeForce Now application, and it has a LOT more features than the browser: you can enable Nvidia Reflex to remove any input lag, HDR for compatible screens, 4K resolution, 120 fps frame rate, etc. There is also a feature to add color filters for whatever reason... The native program uses AV1 (I only tried it with the ultimate tier); games were smooth, with stellar quality, and did not use more bandwidth than h264 at 60 fps.
I took a screenshot while playing Baldur's Gate 3 on different systems, you can compare the quality:
=> static/geforce_now/windows_steam_120fps_natif.png Playing on Steam native program, game set to maximum quality
=> static/geforce_now/windows_av1_120fps_natif_sansupscale_gamma_OK.png Playing on Geforce Now on Windows native app, game set to maximum quality
=> static/geforce_now/linux_60fps_chrome_acceleration_maxquality_gammaok.png Playing on Geforce Now on Linux with hardware acceleration, game set to maximum quality
In my opinion, the best looking one is surprisingly the Geforce Now on Windows, then the native run on Steam and finally on Linux where it's still acceptable. You can see a huge difference in terms of quality in the icons in the bottom bar.
# Tier system
When I upgraded from free to premium tier, I paid for 1 month and was instantly able to use the service as a premium user.
Premium gives you priority in the queues, I saw the queue display a few times for a few seconds, so there is virtually no queue, and you can play for 6 hours in a row.
When I upgraded from premium to ultimate, I was expecting to pay the price difference between my current subscription and the new one, but it worked differently: I had to pay for a whole month of ultimate, and my remaining premium time was converted into ultimate time. As ultimate costs a bit more than twice the premium price, a pro rata was applied, resulting in something like 12 extra days of ultimate for the remaining premium month.
The ultimate tier allows reaching a 4K resolution and a 120 fps refresh rate, allows saving video settings in games so you don't have to tweak them every time you play, and provides an Nvidia 4080 for every session, so you can always set the graphics settings to maximum. You can also play up to 8 hours in a row. Additionally, you can record gaming sessions or the past n minutes through a dedicated panel opened with Ctrl+G. It's possible to reach 240 fps on compatible monitors, but only at 1080p.
Due to the tier upgrade method, the ultimate pass can be interesting, if you had 6 months of premium, you certainly don't want to convert it into 2 months of ultimate + paying 1 month of ultimate just to try.
# Gaming quality
As a gamer, I'm highly sensitive to latency, and local streaming has always felt poor in that regard, so I've been very surprised to see I can play an FPS game with a mouse over cloud gaming. I had a ping of 8-75 ms to the streaming servers, which was really OK. Games featuring "Nvidia Reflex" have no noticeable input lag, it's almost magic.
When using a proper client (native Windows client or a web browser with hardware acceleration), the quality was good, input lag barely noticeable (none in the app), it made me very happy :-)
Using the free tier, I always had a rig good enough to set the graphics quality to High or Ultra, which surprised me for a free service. On premium and above, I always got at least an Nvidia 2080, which is still relevant nowadays.
The service can handle multiple controllers! You can use any kind of controller, and even mix Xbox / PlayStation / Nintendo controllers, no specific hardware required here. This is pretty cool as I can visit my siblings, bring controllers and play together on their computer <3.
Another interesting benefit is that you can switch your gaming session from a device to another by connecting with the other device while already playing, Geforce Now will switch to the new connecting device without interruption.
# Games library
This is where GeForce Now is pretty cool: you don't need to buy games from them. You can import your own libraries like Steam, Ubisoft, Epic store, GOG (only CD Projekt Red games) or Xbox Game Pass games. Not all games from your libraries will be playable though! And for some reason, some games are only available when run from Windows (native app or web browser), like Genshin Impact, which won't appear in the games list when connected from a non-Windows client?!
If you already own games (don't forget to claim weekly free Epic store games), you can play most of them on GeForce Now, and thanks to cloud saves, you can sync progression between sessions or with a local computer.
There are a bunch of free-to-play games that are good (like Warframe, Genshin Impact, some MMOs), so you could enjoy playing video games without having to buy one (until you get bored?).
# Cost efficiency
If you don't currently own a modern gaming computer, and you subscribe to the premium tier (9.17 $€/month when signing for 6 months), this costs you 110 $€ / year.
Given an equivalent GPU costs at least 400€$ and could cope with games in High quality for 3 years (I'm optimistic), the GPU alone costs more than subscribing to the service. Of course, a local GPU can be used for data processing nowadays, or could be sold second hand, or be used for many years on old games.
If you add the whole computer around the GPU, renewed every 5 or 6 years (we are targeting to play modern games in high quality here!), you can add 1200 $€ / 5 years (or 240 $€ / year).
When using the ultimate tier, you instantly get access to the best GPU available (currently a Geforce 4080, retail value of 1300€$). Cost wise, this is impossible to beat with owned hardware.
I did some math to figure out how much money you can save on electricity: the average gaming rig draws approximately 350 watts when playing, while a GeForce Now thin client and a monitor would use 100 watts in the worst case scenario (a laptop alone would be more around 35 watts). So you save 0.25 kWh per hour of gaming; if one plays 100 hours per month (that's 20 days playing 5h, or 3.33 hours / day), they save 25 kWh. At the official French rate of 0.25 € / kWh, that is a 6.25 € saving on electricity. The monthly subscription immediately gets less expensive when taking this into account. Obviously, if you play less, the savings are smaller.
# Bandwidth usage and ecology
Most of the time, the stream was using between 3 and 4 MB/s for 1080p@60fps (full-HD resolution, 1920x1080, at 60 frames per second) in automatic quality mode. Playing at 30 fps or at lower resolutions will use drastically less bandwidth; I was even able to play in 1080p@30 on my old ADSL line! (quality was degraded, but good enough). Playing at 120 fps slightly increased the bandwidth usage, by about 1 MB/s.
I remember a long tech article about ecology and cloud gaming which concluded that cloud gaming is only more "eco-friendly" than running locally if you play less than a dozen hours. However, it always assumed you already had a capable gaming computer locally, whether you use cloud gaming or not, which is a huge bias in my opinion. It also didn't account for the fact that one may install a video game multiple times and that a single game now weighs 100 GB (which is equivalent to 20 hours of cloud gaming, bandwidth-wise!). The biggest cons were the bandwidth requirements and the worldwide maintenance needed to keep high speed lines for everyone. I do think cloud gaming is way more efficient, as it allows pooling gaming devices instead of having everyone own their own hardware.
As a comparison, 4K streaming at Netflix uses 25 Mbps of network (~ 3.1 MB/s).
# Playing on Android
GeForce Now allows you to play any compatible game on Android, but is it worth it? I tried it with a Bluetooth controller on my BQ Aquaris X running LineageOS (a 7-year-old phone with average specs and a 720p screen).
I was able to play over Wi-Fi using the 5 GHz network, and it felt perfect, except that I had to find a comfortable position for the smartphone screen. This drained the battery at a rate of 0.7% per minute, but this is an old phone, I expect newer hardware to do better.
On 4G, the battery usage was lower than on Wi-Fi, at 0.5% per minute. At 720p@60fps, the service used an average of 1.2 MB/s of data during a gaming session of Monster Hunter World. At this rate, you can expect a data usage of 4.3 GB per hour of gameplay, which could be a lot or cheap depending on your usage and mobile subscription.
Globally, playing on Android was very good, but only if you have a controller. There are interesting folding controllers that sandwich the smartphone between two parts, turning it into something looking like a Nintendo Switch, this can be a very interesting device for players.
# Tips
You can use "Ctrl+G" to change settings while in game or also display information about the streaming.
In the GeForce Now settings (not in-game), you can choose the server location if you want to try a different datacenter. I set it to choose the nearest, otherwise I could land on a remote one with a bad ping.
GeForce Now even works on OpenBSD or Qubes OS qubes (more on that later on Qubes OS forum!).
=> https://forum.qubes-os.org/t/cloud-gaming-with-geforce-now/24964 Qubes OS forum discussion
# Conclusion
GeForce Now is a pretty neat service. The free tier is good enough for occasional gamers who play once in a while for a short session, and the paid tiers provide a cheaper alternative to keeping a gaming rig up to date. I really like that they allow me to use my own library instead of having to buy games on their own store.
I'm preparing another blog post about local and self-hosted cloud gaming, and I have to admit I haven't been able to do better than GeForce Now, even on the local network... Engineers at GeForce Now certainly know their stuff!
The experience was solid and enjoyable, even on a 10-year-old laptop. A "cool" feature when playing is the surrounding silence, as no CPU/GPU is crunching for rendering! My GPU is still capable of handling modern games at average quality at 60 FPS, but I may consider using the premium tier in the future instead of replacing my GPU.
</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/geforce-now-review.gmi</guid>
<link>gemini://perso.pw/blog//articles/geforce-now-review.gmi</link>
<pubDate>Sat, 09 Mar 2024 00:00:00 GMT</pubDate>
</item>
<item>
<title>Script NAT on Qubes OS</title>
<description>
<![CDATA[
<pre># Introduction
As a daily Qubes OS user, I often feel the need to expose a port of a given qube to my local network. However, the process is quite painful because it requires adding NAT rules on each layer (usually net-vm => sys-firewall => qube), which is a lot of wasted time.
I wrote a simple script, meant to be run from dom0, that does all the job: opening the ports on the qube, and, for each NetVM in the chain, opening and redirecting the ports.
=> https://git.sr.ht/~solene/qubes-os-nat Qubes OS Nat git repository
# Usage
It's quite simple to use, the hardest part will be to remember how to copy it to dom0 (download it in a qube and use `qvm-run --pass-io` from dom0 to retrieve it).
Make the script executable with `chmod +x nat.sh`, now if you want to redirect the port 443 of a qube, you can run `./nat.sh qube 443 tcp`. That's all.
Be careful, the changes ARE NOT persistent. This is on purpose, if you want to always expose ports of a qube to your network, you should script its netvm accordingly.
# Limitations
The script does not alter the firewall rules handled by `qvm-firewall`, it only opens the ports and redirects them (this happens at a different level). This can be cumbersome for some users, but I decided not to touch rules hard-coded by users in order not to break any expectations.
Running the script should not break anything. It works for me, but it was only lightly tested though.
# Some useful ports
## Avahi daemon port
The avahi daemon uses the UDP port 5353. You need this port to discover devices on a network. This can be particularly useful to find network printers or scanners and use them in a dedicated qube.
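With the script described above, forwarding it for a hypothetical qube named "printing" would look like this:
./nat.sh printing 5353 udp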
# Evolutions
It could be possible to use this script through qubes-rpc, which would allow any qube to ask for a port forwarding. I was going to write it this way at first, but then I thought it might be a bad idea to let a qube run a dom0 script as root that has to read untrusted inputs, but your mileage may vary.</pre>
]]>
</description>
<guid>gemini://perso.pw/blog//articles/qubes-os-nat.gmi</guid>
<link>gemini://perso.pw/blog//articles/qubes-os-nat.gmi</link>
<pubDate>Sat, 09 Mar 2024 00:00:00 GMT</pubDate>
</item>
</channel>
</rss>