💾 Archived View for perso.pw › blog › rss.xml captured on 2021-12-05 at 23:47:19.
-=-=-=-=-=-=-
<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"> <channel> <title>Solene'%</title> <description></description> <link>gemini://perso.pw/blog/</link> <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" /> <item> <title>Nvidia card in eGPU and NixOS</title> <description> <![CDATA[ <pre># Introduction
I previously wrote about using an eGPU on Gentoo Linux. It worked when using the eGPU display, but I never got it to accelerate games shown on the laptop display. Now I'm back on NixOS and I got it to work!
# What is it about?
My laptop has a thunderbolt connector and I'm using a Razer Core X external GPU case connected to the laptop with a thunderbolt cable. This allows using an external "real" GPU with a laptop, but it comes with a performance trade-off and, on Linux, compatibility issues.
There are three ways to use the nvidia eGPU:
- run the nvidia driver and use it as a normal card with its own display connected to the GPU, not always practical with a laptop
- use optirun / primerun to run programs within a virtual X server on that GPU and then display them on the main X server (very clunky, originally created for Nvidia Optimus laptops)
- use the Nvidia offloading module (it seems recent, I only learned about it very recently)
The first case is easy: just install the nvidia driver and use the right card, it should work on any setup. This is the setup giving the best performance.
The most complicated setup is to use the eGPU to render what's displayed on the laptop, meaning the video signal has to come back through the thunderbolt cable, reducing the bandwidth.
# Nvidia offloading
Nvidia did some work in their proprietary driver to allow a program to have its OpenGL/Vulkan calls executed on a GPU that is not the one driving the display. This allows throwing away optirun/primerun for this use case, which is good because they added a performance penalty, a complicated setup and many problems.
=> https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html Official documentation about offloading with the nvidia driver
# NixOS
I really love NixOS, and for writing articles it's awesome: instead of a set of instructions depending on conditions, I only have to share the piece of config required. These are the bits to add to your /etc/nixos/configuration.nix file before rebuilding the system:
hardware.nvidia.modesetting.enable = true;
hardware.nvidia.prime.sync.allowExternalGpu = true;
hardware.nvidia.prime.offload.enable = true;
hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
services.xserver.videoDrivers = ["nvidia" ];
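A quick way to double check the hexadecimal to decimal conversion of the bus IDs from a shell (the values are the ones from my example, see the notes just below):
# 0a:00:0 in lspci becomes PCI:10:0:0 in the NixOS options
printf "PCI:%d:%d:%d\n" 0x0a 0x00 0x0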
A few notes about the previous chunk of config:
- only add nvidia to the list of video drivers; at first I was also adding modesetting but this was causing trouble
- the PCI bus IDs can be found with lspci, but they have to be translated to decimal: here my nvidia card ID is 10:0:0 while lspci shows 0a:00:00, 0a being 10 in hexadecimal
=> https://nixos.wiki/wiki/Nvidia#offload_mode NixOS wiki about nvidia offload mode
# How to use it
The use of offloading is controlled by environment variables. What's pretty cool is that if you didn't connect the eGPU, it will still work (with the integrated GPU).
## Running a command
We can use glxinfo to make sure it's working, add the environment variables as a prefix:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo
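To quickly check which GPU answered, you can filter the output for the renderer string; with the eGPU connected it should mention the Nvidia card:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"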
## In Steam
Modify the command line of each game you want to run with the eGPU (it's tedious) to:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia %command%
## In Lutris
Lutris has a per-game or per-runner setting named "Enable Nvidia offloading", you just have to enable it. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/nixos-egpu.gmi</guid> <link>gemini://perso.pw/blog//articles/nixos-egpu.gmi</link> <pubDate>Sun, 05 Dec 2021 00:00:00 GMT</pubDate> </item> <item> <title>Using awk to pretty-display OpenBSD packages update changes</title> <description> <![CDATA[ <pre># Introduction
You use OpenBSD, and when you upgrade your packages you often wonder which ones are mere rebuilds and which ones are real version updates. Package updates are logged in /var/log/messages, and with awk it's easy to produce some kind of report.
# Command line
The typical update line displays the package name, its version, a "->" and the newer version of the installed package. By checking whether the newer version differs from the original one, we can report the packages that were really updated. awk is already installed on OpenBSD, so you can run this command in your terminal without any other requirement.
awk -F '-' '/Added/ && /->/ { sub(">","",$0) ; if( $(NF-1) != $NF ) { $NF=" => "$NF ; print }}' /var/log/messages
The output should look like this (after a pkg_add -u):
Dec 4 12:27:45 daru pkg_add: Added quirks 4.86 => 4.87
Dec 4 13:01:01 daru pkg_add: Added cataclysm dda 0.F.2v0 => 0.F.3p0v0
Dec 4 13:01:05 daru pkg_add: Added ccache 4.5 => 4.5.1
Dec 4 13:04:47 daru pkg_add: Added nss 3.72 => 3.73
Dec 4 13:07:43 daru pkg_add: Added libexif 0.6.23p0 => 0.6.24
Dec 4 13:40:41 daru pkg_add: Added kakoune 2021.08.28 => 2021.11.08
Dec 4 13:43:27 daru pkg_add: Added kdeconnect kde 1.4.1 => 21.08.3
Dec 4 13:46:16 daru pkg_add: Added libinotify 20180201 => 20211018
Dec 4 13:51:42 daru pkg_add: Added libreoffice 7.2.2.2p0v0 => 7.2.3.2v0
Dec 4 13:52:37 daru pkg_add: Added mousepad 0.5.7 => 0.5.8
Dec 4 13:52:50 daru pkg_add: Added munin node 2.0.68 => 2.0.69
Dec 4 13:53:01 daru pkg_add: Added munin server 2.0.68 => 2.0.69
Dec 4 13:53:14 daru pkg_add: Added neomutt 20211029p0 gpgme sasl 20211029p0 gpgme => sasl
Dec 4 13:53:20 daru pkg_add: Added nethack 3.6.6p0 no_x11 3.6.6p0 => no_x11
Dec 4 13:58:53 daru pkg_add: Added ristretto 0.12.0 => 0.12.1
Dec 4 14:01:07 daru pkg_add: Added rust 1.56.1 => 1.57.0
Dec 4 14:02:33 daru pkg_add: Added sysclean 2.9 => 3.0
Dec 4 14:03:57 daru pkg_add: Added uget 2.0.11p4 => 2.2.2p0
Dec 4 14:04:35 daru pkg_add: Added w3m 0.5.3pl20210102p0 image 0.5.3pl20210102p0 => image
Dec 4 14:05:49 daru pkg_add: Added yt dlp 2021.11.10.1 => 2021.12.01
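For readability, here is the same awk program spread over several lines with comments; it is a sketch that should behave exactly like the one-liner above:
awk -F '-' '
  /Added/ && /->/ {
    # drop the ">" from "->" so the old and new versions
    # become the last two "-" separated fields
    sub(">", "", $0)
    # only print the line when the version actually changed
    if ($(NF-1) != $NF) {
      $NF = " => " $NF
      print
    }
  }' /var/log/messages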
# Limitations
The command mangles the field separators when displaying the result and doesn't work well with flavored packages, which will always be shown as updated. At least it's a good start; it needs a bit more polishing, but it's already useful enough for me. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-package-update-report.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-package-update-report.gmi</link> <pubDate>Sat, 04 Dec 2021 00:00:00 GMT</pubDate> </item> <item> <title>The state of Steam on OpenBSD</title> <description> <![CDATA[ <pre># Introduction
There is a very common question within the OpenBSD community, mostly from newcomers: "How can I install Steam on OpenBSD?". The answer is: you can't, there is no way, this is impossible, period.
# Why?
Steam is a closed source program, and the fact that it's now also available on Linux doesn't mean it runs on OpenBSD. The Linux version of Steam is compiled for Linux, and without the sources we can't port it to OpenBSD. Even if Steam could be installed and launched, the games themselves are not built for OpenBSD and wouldn't work either.
On FreeBSD it may be possible to install the Windows version of Steam using Wine, but Wine is not available on OpenBSD because it requires some specific kernel memory management OpenBSD doesn't want to implement for security reasons (I don't have the whole story). FreeBSD also has a Linux compatibility mode to run Linux binaries, allowing the use of programs compiled for Linux. OpenBSD dropped its own Linux emulation layer a few years ago because it was old and unmaintained, bringing more issues than it solved.
So, you can't install Steam or use it on OpenBSD. If you need Steam, use a supported operating system.
I wanted to write an article about this in the hope that it will be well referenced within search engines, to give people looking for Steam on OpenBSD a reliable answer. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-steam.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-steam.gmi</link> <pubDate>Wed, 01 Dec 2021 00:00:00 GMT</pubDate> </item> <item> <title>Nethack: end of Sery the Tourist</title> <description> <![CDATA[ <pre>Hello, if you remember my previous publications about Nethack and my character "Sery the tourist", I have bad news. On OpenBSD, nethack saves are stored in /usr/local/lib/nethackdir-3.6.0/logfile and obviously I didn't save this when I changed computers a few months ago.
I'm very sad about this data loss because I was really enjoying telling the story of the character while playing. Sery reached the 7th floor as a Tourist, which is incredible given all the nethack runs I've done, and this one was going really well.
I don't know if you readers enjoyed that kind of content; if so, please tell me and I may start a new game and write about it.
As an ending, let's say Sery stayed too long on the 7th floor and the Langoliers came to eat the Time of her reality.
=> https://stephenking.fandom.com/wiki/Langoliers Langoliers on Stephen King wiki fandom </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/nethack-end-of-sery.gmi</guid> <link>gemini://perso.pw/blog//articles/nethack-end-of-sery.gmi</link> <pubDate>Sat, 27 Nov 2021 00:00:00 GMT</pubDate> </item> <item> <title>Simple network dashboard with vnstat</title> <description> <![CDATA[ <pre># Introduction
Hi! If you run a server or a router, you may want a nice view of the bandwidth usage and statistics. This is easy and quick to achieve using the vnstat software.
It gathers data regularly from the network interfaces and stores it in its database; it's very efficient and easy to use, and its companion program vnstati can generate pictures, perfect for easy visualization.
=> static/vnstat-dashboard.png My simple router network dashboard with vnstat
=> https://humdi.net/vnstat/ vnstat project homepage
# Setup (on OpenBSD)
Simply install the vnstat and vnstati packages with pkg_add. All the network interfaces will be added to the vnstatd database and monitored.
Create a script in /var/www/htdocs/dashboard and make it executable:
#!/bin/sh
cd /var/www/htdocs/dashboard/ || exit 1
vnstati --fiveminutes 60 -o 5.png
vnstati -c 60 -vs -o vs.png
vnstati -c 60 --days 14 -o d.png
vnstati -c 300 --months 5 -o m.png
and create a simple index.html file to display pictures:
<html>
<body>
<div style="display: inline-block;">
<img src="vs.png" /><br />
<img src="d.png" /><br />
<img src="m.png" /><br />
</div>
<img src="5.png" /><br />
</body>
</html>
Add a cron job as root to run the script every 10 minutes as the _vnstat user.
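For example, assuming the script above was saved as /var/www/htdocs/dashboard/vnstat.sh (the name is just an example, pick whatever you like) and that the _vnstat user can write to that directory, the crontab installed with "crontab -u _vnstat -e" as root could contain:
*/10 * * * * /bin/sh /var/www/htdocs/dashboard/vnstat.sh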
My personal crontab only runs from 8:00 to 23:00 because I will never look at my dashboard while I'm sleeping, so I don't need to keep it updated then; just replace the * with 8-23 in the hour field.
# Http server
Obviously you need to serve /var/www/htdocs/dashboard/ from your http server; I won't cover this step in the article.
# Conclusion
Vnstat is fast, light and easy to use, and yet it produces nice results. As an extra, you can run the vnstat commands (without the i) and use the raw text output to build a pure text dashboard if you don't want to use pictures (or http). </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/simple-bandwidth-dashboard.gmi</guid> <link>gemini://perso.pw/blog//articles/simple-bandwidth-dashboard.gmi</link> <pubDate>Thu, 25 Nov 2021 00:00:00 GMT</pubDate> </item> <item> <title>OpenBSD and Linux comparison: data transfer benchmark</title> <description> <![CDATA[ <pre># Introduction
I had a strong suspicion about something, and today I finally made measurements. My feeling was that downloading data on OpenBSD uses more "upload data" than on other operating systems.
I originally thought about this issue when I found that using OpenVPN on OpenBSD was limiting my download speed because I was reaching the upload limit of my DSL line, while it was fine on Linux. Since then I've been thinking that OpenBSD was using more outgoing data, but I had never measured anything before.
# Testing protocol
Now that I have an OpenBSD router, it was easy to take measurements with a match rule and a label. I'll be downloading a specific file from a specific server a few times with each OS, so I'm adding a rule matching this connection.
match proto tcp from 10.42.42.32 to 145.238.169.11 label benchmark
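To read the counters attached to this label and reset them between two runs, something like the following should be enough (a sketch: "pfctl -s labels" prints per-label counters and "pfctl -z" clears the per-rule statistics):
# show per-label packet and byte counters
pfctl -s labels
# reset the counters before the next download
pfctl -z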
Then I downloaded this file three times per OS, resetting the counters after each download and saving the output of the "pfctl -s labels" command.
=> http://ftp.fr.openbsd.org/pub/OpenBSD/7.0/amd64/comp70.tgz OpenBSD comp70.tgz file from an OpenBSD mirror
The variance of the results per OS was very low, so I used the average of each column as the final result per OS.
# Raw results
OS        total packets   total bytes   packets OUT   bytes OUT   packets IN   bytes IN
-------   -------------   -----------   -----------   ---------   ----------   ---------
OpenBSD          175348     158731602         72068     3824812       103280   154906790
OpenBSD          175770     158789838         72486     3877048       103284   154912790
OpenBSD          176286     158853778         72994     3928988       103292   154924790
Linux            154382     157607418         51118     2724628       103264   154882790
Linux            154192     157596714         50928     2713924       103264   154882790
Linux            153990     157584882         50728     2705092       103262   154879790
# About the results
A quick look shows that OpenBSD sent +42% OUT packets compared to Linux and also +42% OUT bytes, while the OpenBSD/Linux IN bytes ratio is nearly identical (100.02%).
=> static/network-usage-packets.png Chart showing the IN and OUT packets of Linux and OpenBSD side by side
# Conclusion
I'm not sure what to conclude, except that I'm now sure there is something here requiring investigation. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-network-usage-mystery.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-network-usage-mystery.gmi</link> <pubDate>Sun, 14 Nov 2021 00:00:00 GMT</pubDate> </item> <item> <title>How I ended up liking GNOME</title> <description> <![CDATA[ <pre># Introduction
Hi! It has been a while without much activity on my blog. The reason is that I accidentally stabbed through my right index finger with a knife; the injury was so bad that I could barely use my right hand because I couldn't move my index finger at all without pain. So I've been stuck with only my left hand for a month now. Good news, it's finally getting better :)
Which leads me to the topic of this article: why I ended up liking GNOME!
# Why I didn't use GNOME
I will first start with why I didn't use it before. I like to try everything all the time, I like disruption, I like having a hostile (desktop/shell/computer) environment to stay sharp and not get stuck on ideas. My setup was Fvwm or Stumpwm, mostly keyboard driven, with many virtual desktops to spatially group different activities.
However, with an injured hand I faced a big issue: most of my key bindings were meant for two hands, and it seemed too weird to change them to work with one hand. I tried to adapt using only one hand, but I got poor results, and using the cursor was not very efficient because stumpwm is hostile to the cursor and fvwm is not really great for this either.
# The road to GNOME
With only one hand to use my computer, I found the awesome program ibus-typing-booster to help me type by auto-completing words (a bit like touchscreen phones do); it worked out of the box with GNOME because the ibus integration works well there. I used GNOME to debug the package but ended up liking it in my current condition.
How do I like it now, while I was ranting about it a few months ago because I found it very confusing? Because it's easy to use and it spares me a lot of hand movement, that's all.
function fzf-histo {
RES=$(fzf --tac --no-sort -e < "$HISTFILE")
test -n "$RES" || return
eval "$RES"
}
bind -m ^R=fzf-histo^J
Reload your file or start a new shell: Ctrl+R should now run fzf for a more powerful history search. Don't forget to install the fzf package. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/ksh-fzf.gmi</guid> <link>gemini://perso.pw/blog//articles/ksh-fzf.gmi</link> <pubDate>Sun, 17 Oct 2021 00:00:00 GMT</pubDate> </item> <item> <title>Typing faster with assistive technology</title> <description> <![CDATA[ <pre># Introduction
This article is being written using only my left hand, with the help of the ibus-typing-booster program.
=> https://mike-fabian.github.io/ibus-typing-booster/ ibus-typing-booster project
The purpose of this tool is to assist the user by proposing words while typing, a bit like smartphones do. It can be trained with a dictionary or a text file, and it also learns from user input over time. A package for OpenBSD is on its way.
# Installation
This program requires ibus to work; on Gnome it is already enabled, but in other environments some configuration is required. Because this may change over time and duplicating information is bad, I'll just give the link for configuring ibus-typing-booster.
=> https://mike-fabian.github.io/ibus-typing-booster/docs/user/#1 How to enable ibus-typing-booster
# How to use
Once you have set up ibus and ibus-typing-booster, you should be able to switch from normal input to assisted input using "super"+space. When you type with ibus-typing-booster enabled, with the default settings the input is underlined to show that a suggestion can be triggered using the TAB key. Then, from a popup window, you can pick a word by using TAB to cycle between the suggestions and pressing space to validate, or use the F key matching your choice number (F1 for the first, F2 for the second, etc.), and that's all.
# Configuration
There are many ways to configure it: suggestions can be shown inline while typing, which I think is more helpful when you type slowly and want a quick boost when the suggestion is correct. The suggestion popup can be vertical or horizontal; I personally prefer horizontal, which is not the default. Colors and key bindings can be changed.
# Performance
While I type very fast when I have both hands, using one hand requires me to look at the keyboard and make a lot of moves with that hand. This works fine and I can type reasonably fast, but it is extremely exhausting and painful for my hand. With ibus-typing-booster I can type full sentences with less effort, although a bit more slowly. However, this is a lot more comfortable than typing everything with one hand.
# Conclusion
This is an assistive technology that is easy to set up and can be a life changer for disabled users who can make use of it.
This is not the first time I'm temporarily disabled with regard to using a keyboard: I previously tried a mirrored keyboard layout that reverses the keys when pressing caps lock, and also Dasher, which allows building words from simple movements such as moving the mouse cursor. I find this ibus plugin easier for the brain to integrate because I just type with my keyboard in the programs; with Dasher I had to cut and paste content, and with the mirrored layout I had to focus on the layout change.
I am very happy with it.</pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/ibus-typing-booster.gmi</guid> <link>gemini://perso.pw/blog//articles/ibus-typing-booster.gmi</link> <pubDate>Sat, 16 Oct 2021 00:00:00 GMT</pubDate> </item> <item> <title>Full WireGuard setup with OpenBSD</title> <description> <![CDATA[ <pre># Introduction
We want all our network traffic to go through a WireGuard VPN tunnel automatically, with both the WireGuard client and server running OpenBSD. How do we do that? While I thought it was simple at first, it soon became clear that the "default route" part of the problem was not easy to solve; fortunately there are solutions. This guide should work from OpenBSD 6.9.
=> https://man.openbsd.org/pf.conf#nat-to pf.conf man page about NAT
=> https://man.openbsd.org/wg WireGuard interface man page
=> https://man.openbsd.org/ifconfig#WIREGUARD ifconfig man page, WireGuard section
# Setup
For this setup I assume we have a server running OpenBSD with a public IP address (1.2.3.4 for the example) and an OpenBSD computer with Internet connectivity.
Because we want to use the WireGuard tunnel as the default route, we can't simply define a default route through WireGuard: that would prevent our interface from reaching the WireGuard endpoint and the tunnel would stop working. We could play with the routing table by deleting the default route found on the interface, creating a new route to reach the WireGuard server and then creating a default route through WireGuard, but the whole process is fragile and there is no right place to trigger a script doing this.
Instead, we can assign the network interface used to access the Internet to rdomain 1, configure WireGuard to reach its remote peer through rdomain 1, and create a default route through WireGuard in rdomain 0.
A quick explanation about rdomains: they are separate routing tables. The default is rdomain 0, but we can create new routing tables and run commands using a specific one, for instance "route -T 1 exec ping perso.pw" makes a ping through rdomain 1.
+-------------+
| server | wg0: 192.168.10.1
| |---------------+
+-------------+ |
| public IP |
| 1.2.3.4 |
| |
| |
/\/\/\/\/\/\/\ |WireGuard
| internet | |VPN
\/\/\/\/\/\/\/ |
| |
| |
|rdomain 1 |
+-------------+ |
| computer |---------------+
+-------------+ wg0: 192.168.10.2
rdomain 0 (default)
# Configuration
The configuration process will be done in this order:
1. create the WireGuard interface on your computer to get its public key
2. create the WireGuard interface on the server to get its public key
3. configure PF to enable NAT and enable IP forwarding
4. reconfigure the computer's WireGuard tunnel using the server's public key
5. time to test the tunnel
6. make it the default route
Our WireGuard server will accept connections on address 1.2.3.4 at UDP port 4433, we will use the network 192.168.10.0/24 for the VPN, and the server IP on WireGuard will be 192.168.10.1, which will be our future default route.
## On your computer
We will use a simple script to generate the configuration file, you can easily understand what is being done. Replace "1.2.3.4 4433" with your IP and UDP port to match your setup.
PRIVKEY=$(openssl rand -base64 32)
cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer wgendpoint 1.2.3.4 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
up
EOF
sh /etc/netstart wg0
PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the remote peer"
## On the server
### WireGuard
Like we did on the computer, we will use a script to configure the server. It's important to paste the PUBKEY displayed at the previous step.
PUBKEY=PASTE_PUBKEY_HERE
PRIVKEY=$(openssl rand -base64 32)
cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer $PUBKEY wgaip 192.168.10.0/24
inet 192.168.10.1/24
wgport 4433
up
EOF
sh /etc/netstart wg0
PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the local peer"
Keep the public key for the next step.
## Firewall
We want to enable NAT so we can reach the Internet through the server using WireGuard; edit /etc/pf.conf to add the following line (after the skip lines):
pass out quick on egress from wg0:network to any nat-to (egress)
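If your ruleset blocks incoming traffic by default (see the note below), you will also have to let the WireGuard handshake reach the server; a rule along these lines should do, assuming the UDP port 4433 used in this guide:
pass in on egress proto udp to (egress) port 4433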
Reload with "pfctl -f /etc/pf.conf".
NOTE: if you block all incoming traffic by default, you need to open UDP port 4433 as shown above. You will also need to either skip the firewall on wg0 or configure PF to open what you need; this is beyond the scope of this guide.
## IP forwarding
We need to enable IP forwarding because we will pass packets from one interface to another; this is done with "sysctl net.inet.ip.forwarding=1" as root. To make it persistent across reboots, add "net.inet.ip.forwarding=1" to /etc/sysctl.conf (you may have to create the file).
From now on, the server should be ready.
## On your computer
Edit /etc/hostname.wg0 and paste the server's public key between "wgpeer" and "wgaip" (the public key is wgpeer's parameter). Then run "sh /etc/netstart wg0" to reconfigure your wg0 tunnel.
After this step, you should be able to ping 192.168.10.1 from your computer (and 192.168.10.2 from the server). If not, please double check the WireGuard and PF configurations on both sides.
## Default route
This simple setup will truly make WireGuard your default route. You have to understand that services listening on all interfaces will only attach to the WireGuard interface, because it holds the only address in rdomain 0; if needed, you can run a service in a specific routing table as explained in the rc.d man page.
Replace the line "up" with the following:
wgrtable 1
up
!route add -net default 192.168.10.1
Your configuration file should look like this:
wgkey YOUR_KEY
wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
wgrtable 1
up
!route add -net default 192.168.10.1
Now, add "rdomain 1" to the network interface used to reach the Internet; in my setup it's /etc/hostname.iwn0 and it looks like this:
join network wpakey superprivatekey
join home wpakey notsuperprivatekey
rdomain 1
up
autoconf
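Once the network has been restarted (see the next step), you can check that both routing domains behave as expected; assuming the addresses used in this guide:
# the physical interface in rdomain 1 should still reach the endpoint
route -T 1 exec ping -c 1 1.2.3.4
# the default rdomain should reach the other end of the tunnel
ping -c 1 192.168.10.1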
Now you can restart the network with "sh /etc/netstart" and all your traffic should pass through the WireGuard tunnel.
# Handling DNS
The nameserver listed in /etc/resolv.conf was probably provided by your local network, so it's not reachable anymore. I highly recommend using unwind (in every case anyway) to have a local resolver, or modifying /etc/resolv.conf to use a public resolver.
unwind can be enabled with "rcctl enable unwind" and "rcctl start unwind"; from OpenBSD 7.0 you should have resolvd running by default, which will rewrite /etc/resolv.conf once unwind is started, otherwise you need to write "nameserver 127.0.0.1" in /etc/resolv.conf.
# Bypass VPN
If for some reason you need to run a program without routing its traffic through the VPN, it is possible. The following command will run firefox using routing table 1; however, depending on the content of your /etc/resolv.conf, you may have issues resolving names (because 127.0.0.1 is only reachable in rdomain 0!). A simple fix is to use a public resolver if you really need to do this often.
route -T 1 exec firefox
=> https://man.openbsd.org/route.8#exec route man page about the exec command
# WireGuard behind a NAT
If you are behind a NAT you may need the KeepAlive option on your WireGuard tunnel to keep it working. Just add "wgpka 20" to enable a keepalive packet every 20 seconds in /etc/hostname.wg0, like this:
wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0 wgpka 20
[....]
=> https://man.openbsd.org/ifconfig#wgpka ifconfig man page explaining the wgpka parameter
# Conclusion
WireGuard is easy to deploy, but making it the default network interface adds some complexity. This is usually simpler with protocols like OpenVPN, because the OpenVPN daemon can automatically do the magic of rewriting the routes (and it doesn't do it very well), and it won't prevent non-VPN access until the VPN is connected. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-wireguard-exit.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-wireguard-exit.gmi</link> <pubDate>Sat, 09 Oct 2021 00:00:00 GMT</pubDate> </item> <item> <title>Port of the week: foliate</title> <description> <![CDATA[ <pre># Introduction
Today I want to share with you the program Foliate, a GTK ebook reader with interesting features. To start with, there aren't many epub readers available on OpenBSD (and not that many on Linux either).
=> https://johnfactotum.github.io/foliate/ Foliate project website
# How to install
On OpenBSD, a simple "pkg_add foliate" and you are done.
# Features
Foliate supports multiple features such as:
Policy        Compile time (seconds)   Idle time (seconds)
-----------   ----------------------   -------------------
powersaving                     1123                     0
auto                             871                   252
=> static/freq-time.png Chart showing the difference in time spent for the two policies
## Energy used
We see that the powersaving policy used more energy over the duration of the gzdoom compilation, 5.9 Wh vs 5.6 Wh, but as we don't turn off the computer once the compilation is done, the auto mode also spent a few minutes idling and used 0.74 Wh during that time.
Policy        Compile (Wh)   Idle (Wh)   Total (Wh)
-----------   ------------   ---------   ----------
powersaving           5.90        0.00         5.90
auto                  5.60        0.74         6.34
=> static/freq-power.png Chart showing the difference in energy used for the two policies
# Conclusion
For the same job done, compiling games/gzdoom and staying on for 18 minutes and 43 seconds, the powersaving policy used 5.90 Wh while the auto mode used 6.34 Wh. This is a saving of about 6.9% of the energy ((6.34 - 5.90) / 6.34).
This is an experimental policy I made for testing purposes; it may be too conservative for most people, I don't know. I'm currently playing with this, and with a reproducible benchmark like this one I'm able to compare results between changes in the scheduler. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-power-usage.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-power-usage.gmi</link> <pubDate>Sun, 26 Sep 2021 00:00:00 GMT</pubDate> </item> <item> <title>Reuse of OpenBSD packages for trying runtime</title> <description> <![CDATA[ <pre># Introduction
I'm currently going through the OpenBSD end user packages (those providing binaries), trying each one and checking whether it works when installed alone. I needed a simple way to keep the downloaded packages around, and I didn't want to go the hard way by rsyncing a whole package mirror, which would waste too much bandwidth and take too much time. The most efficient way I found relies on a cache and on the ordering of package sources.
# pkg_add mastery
pkg_add has a special variable named PKG_CACHE: when it is set, downloaded packages are copied into this directory. This is handy because every time I install a package, all the packages downloaded by pkg_add are kept in that directory.
The other variable that interests us here is PKG_PATH, because we want pkg_add to look in $PKG_CACHE first and, if the package is not found there, in the usual mirror. I've set this in my /root/.profile:
export PKG_CACHE=/home/packages/
export PKG_PATH=${PKG_CACHE}:http://ftp.fr.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/
Every time pkg_add has to get a package, it will first look in the cache; if the package is not there, it will download it from the mirror and then store it in the cache.
# Saving time removing packages
Because I try packages one by one, installing and removing dependencies takes a lot of time (I'm using old hardware for the job). Instead of installing a package, deleting it and removing its dependencies, it's easier to work with manually installed packages and, once done, remove the dependencies; this way, the already installed dependencies that are required for the next package are kept in place.
#!/bin/sh
KEEP=$(echo $* | awk '{ gsub(" ","|",$0); printf("(%s)", $0) }')
for pkg in $(pkg_info -mz | grep -vE "$KEEP")
do
# instead of deleting the package
# mark it installed automatically
pkg_add -aa $pkg
done
pkg_add $*
pkg_delete -a
This way, I can use this script (named add.sh) as "./add.sh gnome" and then reuse it with "./add.sh xfce"; the common dependencies between the gnome and xfce packages won't be removed and reinstalled, they will be kept in place.
# Conclusion
There are always tricks to make bandwidth and storage usage more efficient; it's not complicated, and it's always a good opportunity to understand the simple mechanisms available in our daily tools. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-quick-package-work.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-quick-package-work.gmi</link> <pubDate>Sun, 19 Sep 2021 00:00:00 GMT</pubDate> </item> <item> <title>How to use cpan or pip packages on Nix and NixOS</title> <description> <![CDATA[ <pre># Introduction
When using Nix/NixOS and needing development libraries available in pip (for Python) or cpan (for Perl) but not packaged in nixpkgs, it can be extremely complicated to get them on your system because the usual way won't work.
# Nix-shell
The command nix-shell will be our friend here: we will define a new environment in which we declare the packages for the libraries we need. If you really think a library is useful, it may be time to contribute it to nixpkgs so everyone can enjoy it :)
The simple way to invoke nix-shell is to use existing packages, for example the command `nix-shell -p python38Packages.pyyaml` will give you access to the Python library pyyaml for Python 3.8 as long as you run python from this shell. The same goes for Perl: we can start a shell with some packages available for database access, and multiple packages can be passed to "nix-shell -p" like this: `nix-shell -p perl532Packages.DBI perl532Packages.DBDSQLite`.
# Defining a nix-shell
Reading the explanations found on a blog and the help received on Mastodon, I've been able to understand how to use a simple nix-shell definition file to declare new cpan or pip packages.
=> https://ghedam.at/15978/an-introduction-to-nix-shell Mattia Gheda's blog: Introduction to nix-shell
=> https://social.coop/@cryptix/106952010198335578 Mastodon toot from @cryptix@social.coop explaining how to declare a python package on the fly
What we want is to create a file that defines the state of the shell: it will contain the new packages needed but also the list of packages to make available.
# Skeleton
Create a file with the .nix extension (or really, whatever file name you want); the special file name "shell.nix" is picked up automatically when running "nix-shell" without passing a file name as parameter.
with (import <nixpkgs> {});
let
# we will declare new packages here
in
mkShell {
buildInputs = [ ]; # we will declare package list here
}
Now we will see how to declare a python or perl library.
## Python
For python, we need to know the package name on pypi.org and its version. Reusing the previous template, the code would look like this for the package crossplane:
with (import <nixpkgs> {}).pkgs;
let
crossplane = python37.pkgs.buildPythonPackage rec {
pname = "crossplane";
version = "0.5.7";
src = python37.pkgs.fetchPypi {
inherit pname version;
sha256 = "a3d3ee1776bcccebf7a58cefeb365775374ab38bd544408117717ccd9f264f60";
};
meta = { };
};
in
mkShell {
buildInputs = [ crossplane python37 ];
}
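To check that the library is picked up, assuming the definition above was saved as crossplane.nix (the file name is just an example), something like this should succeed:
nix-shell crossplane.nix --run "python3 -c 'import crossplane'"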
If you need another library, replace the crossplane variable name and the pname value with the new name, and don't forget to update that name in buildInputs at the end of the file. Use the correct version value too.
There are two references to python37 here, which implies Python 3.7; adapt them to the version you want.
The only tricky part is the sha256 value; the only way I found to get it easily is the following:
1. declare the package with a random sha256 value (like echo hello | sha256)
2. run nix-shell on the file and see it complain about the wrong checksum
3. get the url of the file, download it and run sha256 on it
4. update the file with the new value
## Perl
For perl, we have to use a script available in the official nixpkgs git repository, the one used when packages are created. We will only download the latest checkout because the repository is quite huge. In this example I will generate a package for Data::Traverse.
$ git clone --depth 1 https://github.com/nixos/nixpkgs
$ cd nixpkgs/maintainers/scripts
$ nix-shell -p perlPackages.{CPANPLUS,perl,GetoptLongDescriptive,LogLog4perl,Readonly}
$ ./nix-generate-from-cpan.pl Data::Traverse
attribute name: DataTraverse
module: Data::Traverse
version: 0.03
package: Data-Traverse-0.03.tar.gz (Data-Traverse-0.03, DataTraverse)
path: authors/id/F/FR/FRIEDO
downloaded to: /home/solene/.cpanplus/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz
sha-256: dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f
unpacked to: /home/solene/.cpanplus/5.34.0/build/EB15LXwI8e/Data-Traverse-0.03
runtime deps:
build deps:
description: Unknown
license: unknown
License 'unknown' is ambiguous, please verify
RSS feed: https://metacpan.org/feed/distribution/Data-Traverse
===
DataTraverse = buildPerlPackage {
pname = "Data-Traverse";
version = "0.03";
src = fetchurl {
url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
};
meta = {
};
};
We will only reuse the part after the "===": it is Nix code defining a package named DataTraverse. The shell definition will look like this:
with (import <nixpkgs> {});
let
DataTraverse = buildPerlPackage {
pname = "Data-Traverse";
version = "0.03";
src = fetchurl {
url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
};
meta = { };
};
in
mkShell {
buildInputs = [ DataTraverse perl ];
# putting perl here is only required when not using NixOS, it tells Nix you want its perl binary
}
Then, run "nix-shell myfile.nix" and run your perl script using Data::Traverse, it should work!
# Conclusion
Using libraries that are not packaged is not that bad once you understand the logic of declaring them properly as new packages that you keep locally and hook into your current shell session.
Finding the syntax, the logic and the method when you are not a Nix guru made me despair. I struggled a lot with this, trying to install directly from cpan or pip (even though it wouldn't have survived the next update of my system), and I didn't even get that to work. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/nix-cpan-pip.gmi</guid> <link>gemini://perso.pw/blog//articles/nix-cpan-pip.gmi</link> <pubDate>Sat, 18 Sep 2021 00:00:00 GMT</pubDate> </item> <item> <title>Benchmarking compilation time with ccache/mfs on OpenBSD</title> <description> <![CDATA[ <pre># Introduction
I have always wondered how to make package building faster. There are at least two easy tricks available: storing temporary data in RAM and caching build objects.
Caching build objects can be done with ccache: it intercepts cc and c++ calls (the programs compiling C/C++ files) and, depending on the inputs, reuses a previously built object if available, or builds normally and stores the result for potential reuse. It is nearly useless when you build software only once, because it requires objects to be cached before being useful. It obviously doesn't work for non-C/C++ programs.
The other trick is using a temporary filesystem stored in memory (RAM); on OpenBSD we will use mfs, but on Linux or FreeBSD you could use tmpfs. The difference between the two is that mfs reserves the given amount of memory, while tmpfs is faster and doesn't reserve the memory for its filesystem (which has pros and cons).
So, I decided to measure the build time of the Gemini browser Lagrange in three cases: without ccache, with ccache but on a first build so there are no cached objects, and with ccache with objects already in the cache. I ran these tests multiple times because I also wanted to measure the impact of using a memory based filesystem versus the old spinning disk drive in my computer; this made for a lot of tests, because I tried ccache on mfs with the package build objects (later referred to as pobj) on mfs, then one on hdd and the other on mfs, and so on.
To proceed, I compiled net/lagrange using dpb, after cleaning the generated lagrange package every time. Using dpb made measurements a lot easier and the setup was reliable. It added some overhead when checking dependencies (which were already installed in the chroot), but the point was to compare the time difference between the various tweaks.
# Results numbers
Here are the results, raw and with a graphical view. I ran the same test multiple times to see if the result dispersion was large, but it was reliable at +/- 1 second.
Type                    Duration for second build (s)   Duration with empty cache (s)
---------------------   -----------------------------   -----------------------------
ccache mfs + pobj mfs                              60                             133
ccache mfs + pobj hdd                              63                             130
ccache hdd + pobj mfs                              61                             127
ccache hdd + pobj hdd                              68                             137
no ccache + pobj mfs                              124
no ccache + pobj hdd                              128
=> static/ccache-hdd-bench.png Diagram with results
# Results analysis
At first glance, we can see that not using ccache gives slightly faster builds than building with an empty cache, so ccache definitely has a very small performance cost when there are no cached objects. Other than that, the results are really close together, except for ccache and pobj both on the hdd, which is by far the slowest combination compared to the small differences between the others.
# Problems encountered
My building system has 16 GB of memory and 4 cores. I want builds to be as fast as possible, so I use all 4 cores, and for some programs using Rust for compilation (like Firefox), more than 8 GB of memory (4x 2 GB) is required because of Rust, so I need to keep a lot of memory available. I tried to build it once with a 10 GB mfs filesystem, but at packaging time it hit the filesystem limit and failed, and it also swapped during the build process.
When using an 8 GB mfs for pobj, I kept hitting the limit, which caused build failures; building four ports in parallel can take a lot of disk space, especially at package time when the result is copied. It's not always easy to store everything in memory. I decided to go with a 3 GB ccache on mfs and keep the pobj on the hdd. I had no spare SSD to add an SSD to the comparison. :(
# Conclusion
Using mfs for at least ccache or pobj, but not necessarily both, is beneficial. I would recommend putting ccache in mfs, because the memory required to store it is only 1 or 2 GB for regular builds, while storing the pobj in mfs can require a few dozen gigabytes of memory (I think chromium required 30 or 40 GB last time I tried). </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-ccache-mfs.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-ccache-mfs.gmi</link> <pubDate>Sat, 18 Sep 2021 00:00:00 GMT</pubDate> </item> <item> <title>Experimenting with a new OpenBSD development lab</title> <description> <![CDATA[ <pre># Experimenting
This article is not a how-to and doesn't explain anything; I just wanted to share how I spend my current free time. It's obviously OpenBSD related.
When updating or making new packages, it's important to get the dependencies right. For the compilation dependencies it's not hard, because you know they are fine once the building process runs entirely, but at run time you may have surprises and discover missing dependencies.
# What's a dependency?
Software is made of written text called source code (or code, to make it simpler), but to avoid wasting time (because writing code is hard enough already) some people write libraries, which are pieces of code meant to be used by other programs (through fellow developers) to save everyone's time and effort. A library can offer graphics manipulation, time and date functions, sound decoding, etc., and the software we use relies on A LOT of extra code that comes from other pieces of code we have to ship separately. Those are dependencies.
There are dependencies required for building a program: they are used to transform the source code into machine readable code, or to organize the building process to ease development, and so on. Then there are library dependencies, which are required for the software to run; the simplest one to understand would be the library used to access the audio system of your operating system, for an audio player. And finally, we have run time dependencies, which show up when loading a software or while using it.
They may not be well documented in the project, so we can't really know they are required until we try to use some feature of the software and it crashes or errors out because of something missing. This could be, for instance, a program calling an external program to delegate the resizing of a picture.
# What's up?
In order to spot these run time dependencies, I've started to use an old laptop (a Thinkpad T400 that I absolutely love) with a clean OpenBSD installation, lots of local packages on my network (more on that later) and a very clean X environment. The point of this computer is to remove every package, install only the one I need to try (pulling the dependencies that come with it) and see if it works under these minimal conditions. It should work with no issue if the package is correctly done. Once I'm satisfied with the test, I remove every package on the system and try another one.
Sometimes, as we have many many packages installed, it happens that a run time dependency is installed by something else while not being declared in the package we are working on, and we don't see the failure because the requirement is provided by some other package. By using a clean environment to check every single program separately, I remove the "other packages" that could provide such a requirement.
# Building
When I work on packages I often need to compile many of them, and it takes time, a lot of time, and my laptop usually makes a lot of noise, gets hot and becomes slow for doing anything else; it's not very practical. I'm going to set up a dedicated building machine that I will power on when I work on ports, and it will be hidden in some isolated corner at home, building packages when I need it. That machine is a bit more powerful and will prevent my laptop from being unusable for a while.
This machine in combination with the laptop makes a great duo for making quick changes and testing how they go. The laptop will pull packages directly from the building machine, and things can be fixed on the building machine quite fast.
# The end
Contributing to packages is endless work; making good packages is hard work and requires tests. I'm not really good at making packages, but I want to improve myself in that field and also improve the way we can test that packages are working.
With these new development environments, I hope I will be able to contribute a bit more to the quality of future OpenBSD releases. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/experiments-openbsd-building.gmi</guid> <link>gemini://perso.pw/blog//articles/experiments-openbsd-building.gmi</link> <pubDate>Thu, 16 Sep 2021 00:00:00 GMT</pubDate> </item> <item> <title>Reviewing some open source distraction free editors</title> <description> <![CDATA[ <pre># Introduction
This article compares "distraction free" editors running on Linux. This category of editors is meant to be used in full screen and shouldn't display much more than text, helping you stay focused on the text.
I've found a few programs that run on Linux and are open source; I deliberately omitted web browser based editors