
<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Solene'%</title>
    <description></description>
    <link>gemini://perso.pw/blog/</link>
    <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" />
    <item>
  <title>Nvidia card in eGPU and NixOS</title>
  <description>
    <![CDATA[
<pre># Introduction

I previously wrote about using an eGPU on Gentoo Linux.  It was working when using the eGPU display but I never got it to work for accelerating games using the laptop display.

Now, I'm back on NixOS and I got it to work!

# What is it about?

My laptop has a thunderbolt connector and I'm using a Razer Core X external GPU case that is connected to the laptop with a thunderbolt cable.  This allows using an external "real" GPU with a laptop, but it comes with a performance trade-off and, on Linux, compatibility issues as well.

There are three ways to use the nvidia eGPU:

- run the nvidia driver and use it as a normal card with its own display connected to the GPU; not always practical with a laptop
- use optirun / primerun to run programs within a virtual X server on that GPU and then display them on the main X server (very clunky, originally created for Nvidia Optimus laptops)
- use the Nvidia offloading module (it seems recent, I learned about it only recently)

The first case is easy: install the nvidia driver and use the right card; it should work on any setup.  This is the setup giving the best performance.

The most complicated setup is to use the eGPU to render what's displayed on the laptop, meaning the video signal has to come back through the thunderbolt cable, reducing the bandwidth.

# Nvidia offloading

Nvidia added support in their proprietary driver for having a program's OpenGL/Vulkan calls executed on a GPU other than the one driving the display.  This allows throwing away optirun/primerun for this use case, which is good because they added a performance penalty, a complicated setup and many problems.

=> https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html Official documentation about offloading with nvidia driver


# NixOS

I really love NixOS, and for writing articles it's awesome: instead of a set of instructions depending on conditions, I only have to share the piece of config required.

These are the bits to add to your /etc/nixos/configuration.nix file before rebuilding the system:

hardware.nvidia.modesetting.enable = true;
hardware.nvidia.prime.sync.allowExternalGpu = true;
hardware.nvidia.prime.offload.enable = true;
hardware.nvidia.prime.nvidiaBusId = "PCI:10:0:0";
hardware.nvidia.prime.intelBusId = "PCI:0:2:0";
services.xserver.videoDrivers = [ "nvidia" ];


A few notes about the previous chunk of config:
- only add nvidia to the list of video drivers; at first I was also adding modesetting, but this was creating troubles
- the PCI bus ID can be found with lspci, but it has to be translated into decimal: here my nvidia ID is 10:0:0, while lspci shows 0a:00.0, 0a being 10 in hexadecimal (see the example below)
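
For example, a sketch of the conversion; the lspci line is illustrative, not taken from my machine:

# lspci says: 0a:00.0 VGA compatible controller: NVIDIA Corporation [...]
printf 'PCI:%d:%d:%d\n' 0x0a 0x00 0x0
# prints PCI:10:0:0, the value for hardware.nvidia.prime.nvidiaBusId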

=> https://nixos.wiki/wiki/Nvidia#offload_mode NixOS wiki about nvidia offload mode

# How to use it

The use of offloading is controlled by environment variables.  What's pretty cool is that if you didn't connect the eGPU, it will still work (using the integrated GPU).

## Running a command

We can use glxinfo to make sure it's working, adding the environment variables as a prefix:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo
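
To avoid retyping the variables, a small wrapper script helps.  This is just a sketch: save it for example as "nvidia-offload" somewhere in your PATH and make it executable:

#!/bin/sh
# run the given command with rendering offloaded to the nvidia GPU
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
exec "$@"

Then "nvidia-offload glxinfo" is equivalent to the command above.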


## In Steam

Modify the command line of each game you want to run with the eGPU (it's tedious), using:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia %command%


## In Lutris

Lutris has a per-game or per-runner setting named "Enable Nvidia offloading", you just have to enable it.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/nixos-egpu.gmi</guid>
  <link>gemini://perso.pw/blog//articles/nixos-egpu.gmi</link>
  <pubDate>Sun, 05 Dec 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Using awk to pretty-display OpenBSD packages update changes</title>
  <description>
    <![CDATA[
<pre># Introduction

If you use OpenBSD, when you upgrade your packages you may wonder which ones are mere rebuilds and which ones are real version updates.  Package updates are logged in /var/log/messages, and with awk it's easy to produce some kind of report.

# Command line

The typical update line displays the package name, its version, a "->", and the newer version of the installed package.  By checking whether the newer version differs from the original one, we can report updated packages.

awk is already installed on OpenBSD, so you can run this command in your terminal without any other requirement.

awk -F '-' '/Added/ && /->/ { sub(">","",$0) ; if( $(NF-1) != $NF ) { $NF=" => "$NF ; print }}' /var/log/messages


The output should look like this (after a pkg_add -u):

Dec 4 12:27:45 daru pkg_add: Added quirks 4.86 => 4.87
Dec 4 13:01:01 daru pkg_add: Added cataclysm dda 0.F.2v0 => 0.F.3p0v0
Dec 4 13:01:05 daru pkg_add: Added ccache 4.5 => 4.5.1
Dec 4 13:04:47 daru pkg_add: Added nss 3.72 => 3.73
Dec 4 13:07:43 daru pkg_add: Added libexif 0.6.23p0 => 0.6.24
Dec 4 13:40:41 daru pkg_add: Added kakoune 2021.08.28 => 2021.11.08
Dec 4 13:43:27 daru pkg_add: Added kdeconnect kde 1.4.1 => 21.08.3
Dec 4 13:46:16 daru pkg_add: Added libinotify 20180201 => 20211018
Dec 4 13:51:42 daru pkg_add: Added libreoffice 7.2.2.2p0v0 => 7.2.3.2v0
Dec 4 13:52:37 daru pkg_add: Added mousepad 0.5.7 => 0.5.8
Dec 4 13:52:50 daru pkg_add: Added munin node 2.0.68 => 2.0.69
Dec 4 13:53:01 daru pkg_add: Added munin server 2.0.68 => 2.0.69
Dec 4 13:53:14 daru pkg_add: Added neomutt 20211029p0 gpgme sasl 20211029p0 gpgme => sasl
Dec 4 13:53:20 daru pkg_add: Added nethack 3.6.6p0 no_x11 3.6.6p0 => no_x11
Dec 4 13:58:53 daru pkg_add: Added ristretto 0.12.0 => 0.12.1
Dec 4 14:01:07 daru pkg_add: Added rust 1.56.1 => 1.57.0
Dec 4 14:02:33 daru pkg_add: Added sysclean 2.9 => 3.0
Dec 4 14:03:57 daru pkg_add: Added uget 2.0.11p4 => 2.2.2p0
Dec 4 14:04:35 daru pkg_add: Added w3m 0.5.3pl20210102p0 image 0.5.3pl20210102p0 => image
Dec 4 14:05:49 daru pkg_add: Added yt dlp 2021.11.10.1 => 2021.12.01


# Limitations

The command seems to mangle the separators when displaying the result, and it doesn't work well with flavored packages, which will always be shown as updated.

At least it's a good start; it requires a bit more polishing, but it's already useful enough for me.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-package-update-report.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-package-update-report.gmi</link>
  <pubDate>Sat, 04 Dec 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>The state of Steam on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

There is a very common question within the OpenBSD community, mostly from newcomers: "How can I install Steam on OpenBSD?".

The answer is: You can't, there is no way, this is impossible, period.


# Why?

Steam is a closed-source program; the fact that it's now also available on Linux doesn't mean it runs on OpenBSD.  The Linux Steam version is compiled for Linux, and without the sources we can't port it to OpenBSD.

Even if Steam was able to be installed and could be launched, games are not made for OpenBSD and wouldn't work either.

On FreeBSD it may be possible to install the Windows Steam using Wine, but Wine is not available on OpenBSD because it requires some specific kernel memory management we don't want to implement for security reasons (I don't have the whole story).  FreeBSD also has a Linux compatibility mode to run Linux binaries, allowing the use of programs compiled for Linux; this Linux emulation layer was dropped from OpenBSD a few years ago because it was old and unmaintained, bringing more issues than it helped.

So, you can't install Steam or use it on OpenBSD.  If you need Steam, use a supported operating system.

I wanted to write an article about this in the hope that my text will be well referenced by search engines, to help people looking for Steam on OpenBSD by giving them a reliable answer.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-steam.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-steam.gmi</link>
  <pubDate>Wed, 01 Dec 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Nethack: end of Sery the Tourist</title>
  <description>
    <![CDATA[
<pre>Hello, if you remember my previous publications about Nethack and my character "Sery the tourist", I have bad news.  On OpenBSD, nethack saves are stored in /usr/local/lib/nethackdir-3.6.0/logfile and obviously I didn't save this file when changing computers a few months ago.

I'm very sad about this data loss because I really enjoyed telling the story of the character while playing.  Sery reached the 7th floor as a Tourist, which is incredible given all the nethack games I've played, and this one was going really well.

I don't know if you readers enjoyed that kind of content; if so, please tell me and I may start a new game and write about it.

As an ending, let's say Sery stayed too long on the 7th floor and the Langoliers came to eat the Time of her reality.

=> https://stephenking.fandom.com/wiki/Langoliers Langoliers on Stephen King wiki fandom
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/nethack-end-of-sery.gmi</guid>
  <link>gemini://perso.pw/blog//articles/nethack-end-of-sery.gmi</link>
  <pubDate>Sat, 27 Nov 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Simple network dashboard with vnstat</title>
  <description>
    <![CDATA[
<pre># Introduction

Hi!  If you run a server or a router, you may want a nice view of bandwidth usage and statistics.  This is easy and quick to achieve using the vnstat software.  It gathers data regularly from network interfaces and stores it in rrd files; it's very efficient and easy to use, and its companion program vnstati can generate pictures, perfect for easy visualization.

=> static/vnstat-dashboard.png My simple router network dashboard with vnstat
=> https://humdi.net/vnstat/ vnstat project homepage

# Setup (on OpenBSD)

Simply install vnstat and vnstati packages with pkg_add.  All the network interfaces will be added to vnstatd databases to be monitored.

pkg_add vnstat vnstati
rcctl enable vnstatd
rcctl start vnstatd
install -d -o _vnstat /var/www/htdocs/dashboard


Create a script in /var/www/htdocs/dashboard and make it executable:

#!/bin/sh
cd /var/www/htdocs/dashboard/ || exit 1

# last 60 entries of 5 minutes stats
vnstati --fiveminutes 60 -o 5.png

# vertical summary of last two days
# refresh only after 60 minutes
vnstati -c 60 -vs -o vs.png

# daily stats for the 14 last days
# refresh only after 60 minutes
vnstati -c 60 --days 14 -o d.png

# monthly stats for the last 5 months
# refresh only after 300 minutes
vnstati -c 300 --months 5 -o m.png


and create a simple index.html file to display pictures:

<html>
<body>
<div style="display: inline-block;">
<img src="vs.png" /><br />
<img src="d.png" /><br />
<img src="m.png" /><br />
</div>
<img src="5.png" /><br />
</body>
</html>


Add a cron job as root to run the script every 10 minutes as the _vnstat user, as shown below.
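
A sketch of the crontab entry, assuming the script was saved as /var/www/htdocs/dashboard/update.sh (the path and name are hypothetical):

*/10 * * * * su -m _vnstat -c "/var/www/htdocs/dashboard/update.sh"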


My personal crontab runs it only from 8h to 23h because I will never look at my dashboard while I'm sleeping, so I don't need to keep it updated; just replace * with 8-23 in the hour field.

# Http server

Obviously you need to serve /var/www/htdocs/dashboard/ from your http server; I won't cover this step in this article.

# Conclusion

Vnstat is fast, light and easy to use, and yet it produces nice results.

As an extra, you can run the vnstat commands (without the i) and use the raw text output to build a pure text dashboard if you don't want to use pictures (or http).
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/simple-bandwidth-dashboard.gmi</guid>
  <link>gemini://perso.pw/blog//articles/simple-bandwidth-dashboard.gmi</link>
  <pubDate>Thu, 25 Nov 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>OpenBSD and Linux comparison: data transfer benchmark</title>
  <description>
    <![CDATA[
<pre># Introduction

I had a high suspicion about something, and today I made measurements.  My feeling was that downloading data on OpenBSD uses more "upload data" than on other operating systems.

I originally thought about this issue when I found that using OpenVPN on OpenBSD was limiting my download speed because I was reaching the upload limit of my DSL line, while it was fine on Linux.  Since then, I've been thinking that OpenBSD was using more outgoing data, but I had never measured anything.

# Testing protocol

Now that I have an OpenBSD router, it was easy to take measurements with a match rule and a label.  I'll be downloading a specific file from a specific server a few times with each OS, so I'm adding a rule matching this connection.

match proto tcp from 10.42.42.32 to 145.238.169.11 label benchmark


Then, I downloaded this file three times per OS, resetting the counters after each download, and saved the results of the "pfctl -s labels" command.
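
For reference, these are the two pfctl commands involved; as far as I know, -z clears the per-rule statistics, which includes the label counters:

pfctl -s labels    # show per-label packet and byte counters
pfctl -z           # reset the counters before the next download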

=> http://ftp.fr.openbsd.org/pub/OpenBSD/7.0/amd64/comp70.tgz OpenBSD comp70.tgz file from an OpenBSD mirror

The variance of the results per OS was very low; I used the average of each column as the final result per OS.

# Raw results

OS        total packets   total bytes   packets OUT   bytes OUT   packets IN   bytes IN
-------   -------------   -----------   -----------   ---------   ----------   ---------
OpenBSD   175348          158731602     72068         3824812     10328        154906790
OpenBSD   175770          158789838     72486         3877048     10328        154912790
OpenBSD   176286          158853778     72994         3928988     10329        154924790
Linux     154382          157607418     51118         2724628     10326        154882790
Linux     154192          157596714     50928         2713924     10326        154882790
Linux     153990          157584882     50728         2705092     10326        154879790


# About the results

A quick look shows that OpenBSD sent +42% OUT packets compared to Linux and also +42% OUT bytes, while the OpenBSD/Linux IN bytes ratio is nearly identical (100.02%).

=> static/network-usage-packets.png Chart showing the IN and OUT packets of Linux and OpenBSD side by side

# Conclusion

I'm not sure what to conclude, except that now I'm sure there is something here that requires investigation.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-network-usage-mystery.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-network-usage-mystery.gmi</link>
  <pubDate>Sun, 14 Nov 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How I ended up liking GNOME</title>
  <description>
    <![CDATA[
<pre># Introduction

Hi!  It's been a while without much activity on my blog; the reason is that I accidentally stabbed through my right index finger with a knife.  The injury was so bad that I could barely use my right hand, because I couldn't move my index finger at all without pain.  So I've been stuck with only my left hand for a month now.  Good news, it's finally getting better :)

Which leads me to the topic of this article: why I ended up liking GNOME!

# Why I didn't use GNOME

I will start with why I didn't use it before.  I like to try everything all the time, I like disruption, I like having a hostile (desktop/shell/computer) environment to stay sharp and not get stuck on ideas.

My previous setup was using Fvwm or Stumpwm, mostly keyboard driven, with many virtual desktops to spatially group different activities.  However, with an injured hand I faced a big issue: most of my key bindings were made for two hands, and it seemed too weird to change the bindings to work with one hand.

I tried to adapt using only one hand, but I got poor results, and using the cursor was not very efficient because stumpwm is hostile to the cursor and fvwm is not really great for this either.

# The road to GNOME

With only one hand to use my computer, I found the awesome program ibus-typing-booster to help me type by auto-completing words (a bit like touchscreen phones do); it worked out of the box with GNOME thanks to the good ibus integration.  I used GNOME to debug the package but ended up liking it in my current condition.

How can I like it now, when I was grumbling about it a few months ago and finding it very confusing?  Simply because it's easy to use and spares me hand movements, absolutely.

This is certainly doable in MATE or Xfce too without much work, but it works out of the box with GNOME.  It's perfectly usable without knowing any keyboard shortcut.

# Mixed feelings

I'm pretty sure I'll return to my previous environment once my finger/hand recovers, because I have a better feeling with it and I find it more usable.  But I have to thank the GNOME project for working on this desktop environment that is easy to use and quite accessible.

It's important to put things into perspective when dealing with desktop environments.  GNOME may not be the most performant and ergonomic desktop, but it's accessible, easy to use and forgiving for people who don't want to learn tons of key bindings or can't use them.

# Conclusion

There is a very recurrent question I see on IRC or forums: what's the best desktop environment/window manager?  What are YOU using?  I stopped having a bold opinion about this topic; I simply reply that there are many desktop environments because there are many kinds of people, and the person asking needs to find the right one to suit them.

# Update (2021-11-11)

Using the xfdashboard program and assigning it to the Super key allows mimicking the GNOME "activity" view in your favorite window manager: choosing windows, moving them between desktops, running applications.  I think this can easily turn any window manager into something more accessible, or at least more "GNOME like".
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/how-I-ended-liking-gnome.gmi</guid>
  <link>gemini://perso.pw/blog//articles/how-I-ended-liking-gnome.gmi</link>
  <pubDate>Wed, 10 Nov 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>What if Internet stops? How to rebuild an offline federated infrastructure using OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

What if we lost the Internet tomorrow and stopped building computers?  What would you want on your computer, in the eventuality that we would still have *some* power available to run it?

I find it to be an interesting exercise in the continuity of my old laptop challenge.

# Bootstrapping

My biggest point would be that my computer could be used to replicate itself to other computer owners, giving them the data so they can spread it again.  Data copied over and over will be a lot more resilient than a single copy with a few local backups (local as in same city at best, because there is no Internet).

Because most people's computers rely on the Internet for their data and would turn into useless bricks, I think everyone would be glad to be part of a useful infrastructure that can replicate and extend.

# Essentials

I would argue that it is very useful to have computers and the knowledge they can carry, even if we are short on electricity to run them.  We would want scientific knowledge (medicine, chemistry, physics, mathematics) but also history and other topics in the long run.  We would also require maps of the local region/country to make long-term plans, and to help decisions and planning when building infrastructure (pipes, roads, lines).  We would require software to display but also edit this data.

Here is a list of sources I would keep synced on my computer.



The wikipedia dumps in zim format are very practical to run an offline wikipedia; we would require some OpenBSD programs to make them work, but we would want more people to have them.  Android tablets and phones are everywhere, small, and don't draw much battery, so I'd distribute the wikipedia dumps along with a kiwix APK file to view them without requiring a computer.  Keeping the sources of the Android programs would be a wise decision too.

As for maps, we can download areas from openstreetmap, rework them with Qgis on OpenBSD, and redistribute maps along with a compatible viewer for Android devices, the OSMand~ free software app.

It would be important to keep the data set rather small, I think under 100 GB, because a 500 GB requirement for setting up a new machine that can re-propagate the data set would be complicated to meet.

If I ever needed to do that, the first step would be to make serious backups of the data set using multiple copies on hard drives that I would hand to different people.  Once the propagation process is done it matters less, because I could still gather the data somewhere.

=> https://wiki.kiwix.org/wiki/Content_in_all_languages Kiwix compatible data sets (including Wikipedia)
=> https://f-droid.org/packages/org.kiwix.kiwixmobile/ Android Kiwix app on F-droid
=> https://f-droid.org/en/packages/net.osmand.plus/ Android OSMand~ app for OSM maps on F-droid

# Why OpenBSD?

I'd choose OpenBSD because it's a system I know well, but also because it's easy to hack on to make changes to the kernel.  If we ever needed to connect a computer to an industrial machine, I'd rather try to port it on OpenBSD.

This is also true for the ports tree: with all the distfiles it's possible to rebuild packages for multiple architectures, allowing the use of older computers that are not amd64, but also easily patching distfiles to fix issues or add new features.  Carrying packages without their sources would be a huge mistake; you would have a set of binary blobs that can't evolve.

OpenBSD is also easy to install and it works fine most of the time.  I'd imagine an automatic installation process from USB or even from PXE, and then sharing all the data so other people can propagate the installation and data again.

This would also work with another system of course; the point is to keep the sources of the system and of its packages, to be able to rebuild the system for older supported architectures, but also to be able to enhance and work on the sources for bug fixes and new features.

# Distributing

I think a very nice solution would be to use Git; there are plugins to handle binary data so the repository doesn't grow too much over time.  Git is decentralized: you can get updates from someone who received an update from someone else, and git can also report if someone messed with the history.
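
As an illustration, this is how it could look with git-lfs, one of those plugins (the file name is hypothetical):

git lfs install
git lfs track "*.zim"
git add .gitattributes wikipedia_en_all.zim
git commit -m "add wikipedia dump"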

We could imagine some well-known places running a local server with a WiFi hotspot; someone allowed to (using ssh+git) could push updates to a git repository there.  There could be repositories for various topics like: news, system updates, culture (music, videos, readings), maybe some kind of social network like twtxt.  Anyone could come and sync their local git repository to get the news and updates, and be able to spread them again.

=> https://github.com/buckket/twtxt twtxt project github page

# Conclusion

This is a topic I often have in mind when I think about why we use computers and what makes them useful.  In this theoretical future, which is not "post-apocalyptic" but simply one where something went wrong and we have a LOT of computers that became useless, I just want to show that computers can still be useful without the Internet; you just need to understand their genuine purpose.

I'd be interested in what others would do; please let me know if you want to write on that topic :)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/huge-disaster-recovery-plan.gmi</guid>
  <link>gemini://perso.pw/blog//articles/huge-disaster-recovery-plan.gmi</link>
  <pubDate>Thu, 21 Oct 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Use fzf for ksh history search</title>
  <description>
    <![CDATA[
<pre># Introduction

fzf is a powerful tool to interactively select a line among data piped to stdin; a simple example is to pick a line in your shell history, and that's my main fzf use.

fzf ships with bindings for bash, zsh or fish but doesn't provide anything for ksh, OpenBSD's default shell.  I found a way to run it with Ctrl+R, but it comes with a limitation!

This setup will run fzf to look for a history line on Ctrl+R, but it will run the selected line without allowing you to edit it first! /!\

# Configuration

In your interactive shell configuration file (it should be the one set in $ENV), add the following function and binding; it will rebind Ctrl+R to the fzf-histo function that searches your shell history.

function fzf-histo {
    RES=$(fzf --tac --no-sort -e < $HISTFILE)
    # return instead of exit: the function runs in the interactive
    # shell, so exiting would close it when nothing is selected
    test -n "$RES" || return 0
    eval "$RES"
}
bind -m ^R=fzf-histo^J


Reload the file or start a new shell: Ctrl+R should now run fzf for a more powerful history search.  Don't forget to install the fzf package.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/ksh-fzf.gmi</guid>
  <link>gemini://perso.pw/blog//articles/ksh-fzf.gmi</link>
  <pubDate>Sun, 17 Oct 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Typing faster with assistive technology</title>
  <description>
    <![CDATA[
<pre># Introduction 

This article is being written only using my left hand with the help of ibus-typing-booster program.

=> https://mike-fabian.github.io/ibus-typing-booster/ ibus-typing-booster project

The purpose of this tool is to assist the user by proposing words while typing, a bit like smartphones do.  It can be trained with a dictionary or a text file, and it also learns from user inputs over time.

An OpenBSD package is in the works.

# Installation 

This program requires ibus to work.  On Gnome it is already enabled, but in other environments some configuration is required.  Because this may change over time and duplicating information is bad, I'll give the links for configuring ibus-typing-booster.

=> https://mike-fabian.github.io/ibus-typing-booster/docs/user/#1 How to enable ibus-typing-booster

# How to use

Once you have set up ibus and ibus-typing-booster, you should be able to switch from normal input to assisted input using "super"+space.

When you type with ibus-typing-booster enabled, with default settings, the input is underlined to show that a suggestion can be triggered with the TAB key.  Then, from a popup window, you can pick a word by using TAB to cycle between the suggestions and pressing space to validate, or use the F key matching your choice number (F1 for the first, F2 for the second, etc.), and that's all.

# Configuration

There are many ways to configure it: suggestions can be shown inline while typing, which I think is more helpful when you type slowly and want a quick boost when the suggestion is correct.  The suggestions popup can be vertical or horizontal; I personally prefer horizontal, which is not the default.  Colors and key bindings can be changed.

# Performance

While I type very fast when I have both my hands, using one hand requires me to look at the keyboard and make a lot of moves with my hand.  This works fine and I can type reasonably fast, but it is extremely exhausting and painful for my hand.  With ibus-typing-booster I can type full sentences with less effort, though a bit slower.  However, it is a lot more comfortable than typing everything with one hand.

# Conclusion

This is an assistive technology that is easy to set up and can be a life changer for disabled users who can make use of it.

This is not the first time I've been temporarily disabled in regards to using a keyboard; I previously tried a mirrored keyboard layout reversing keys when pressing caps lock, and also Dasher, which allows making words from simple movements such as moving the mouse cursor.  I find this ibus plugin easier for the brain to integrate, because I just type with my keyboard in the programs; with Dasher I need to cut and paste content, and with the mirrored layout I need to focus on the layout change.

I am very happy with it.</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/ibus-typing-booster.gmi</guid>
  <link>gemini://perso.pw/blog//articles/ibus-typing-booster.gmi</link>
  <pubDate>Sat, 16 Oct 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Full WireGuard setup with OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

We want all our network traffic to go through a WireGuard VPN tunnel automatically, with both the WireGuard client and server running OpenBSD; how do we do that?  While I thought it was simple at first, it soon became clear that the "default route" part of the problem was not easy to solve; fortunately there are solutions.

This guide should work from OpenBSD 6.9.

=> https://man.openbsd.org/pf.conf#nat-to pf.conf man page about NAT
=> https://man.openbsd.org/wg WireGuard interface man page
=> https://man.openbsd.org/ifconfig#WIREGUARD ifconfig man page, WireGuard section

# Setup

For this setup I assume we have a server running OpenBSD with a public IP address (1.2.3.4 for the example) and an OpenBSD computer with Internet connectivity.

Because we want to use the WireGuard tunnel as the default route, we can't simply define a default route through WireGuard: that would prevent our interface from reaching the WireGuard endpoint to keep the tunnel working.  We could play with the routing table by deleting the default route found on the interface, creating a new route to reach the WireGuard server, and then creating a default route through WireGuard, but the whole process is fragile and there is no right place to trigger a script doing this.

Instead, we can assign the network interface used to access the Internet to rdomain 1, configure WireGuard to reach its remote peer through rdomain 1, and create a default route through WireGuard in rdomain 0.  Quick explanation about rdomains: they are separate routing tables; the default is rdomain 0, but we can create new routing tables and run commands using a specific one, e.g. "route -T 1 exec ping perso.pw" to make a ping through rdomain 1.


+-------------+
|   server    | wg0: 192.168.10.1
|             |---------------+
+-------------+               |
  | public IP                 |
  | 1.2.3.4                   |
  |                           |
/\/\/\/\/\/\/\                | WireGuard
|  internet  |                | VPN
\/\/\/\/\/\/\/                |
  |                           |
  | rdomain 1                 |
+-------------+               |
|  computer   |---------------+
+-------------+  wg0: 192.168.10.2
                 rdomain 0 (default)


# Configuration

The configuration process will be done in this order:

1. create the WireGuard interface on your computer to get its public key
2. create the WireGuard interface on the server to get its public key
3. configure PF to enable NAT and enable IP forwarding
4. reconfigure computer's WireGuard tunnel using server's public key
5. time to test the tunnel
6. make it default route

Our WireGuard server will accept connections on address 1.2.3.4, UDP port 4433.  We will use the network 192.168.10.0/24 for the VPN; the server IP on WireGuard will be 192.168.10.1 and this will be our future default route.

## On your computer

We will make a simple script to generate the configuration file; you can easily understand what is being done.  Replace "1.2.3.4 4433" with your IP and UDP port to match your setup.

PRIVKEY=$(openssl rand -base64 32)
cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer wgendpoint 1.2.3.4 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
up
EOF

# start the interface so we can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0
PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the remote peer"


## On the server

### WireGuard

As we did on the computer, we will use a script to configure the server.  It's important to paste the PUBKEY displayed in the previous step.

PUBKEY=PASTE_PUBKEY_HERE
PRIVKEY=$(openssl rand -base64 32)

cat <<EOF > /etc/hostname.wg0
wgkey $PRIVKEY
wgpeer $PUBKEY wgaip 192.168.10.0/24
inet 192.168.10.1/24
wgport 4433
up
EOF

# start the interface so we can get the public key
# we should have an error here, this is normal
sh /etc/netstart wg0
PUBKEY=$(ifconfig wg0 | grep 'wgpubkey' | cut -d ' ' -f 2)
echo "You need $PUBKEY to setup the local peer"


Keep the public key for next step.

## Firewall

We want to enable NAT so we can reach the Internet through the server using WireGuard; edit /etc/pf.conf to add the following line (after the skip lines):

pass out quick on egress from wg0:network to any nat-to (egress)


Reload with "pfctl -f /etc/pf.conf".

NOTE: if you block all incoming traffic by default, you need to open UDP port 4433.  You will also need to either skip the firewall on wg0 or configure PF to open what you need.  This is beyond the scope of this guide.

## IP forwarding

We need to enable IP forwarding because we will pass packets from one interface to another; this is done with "sysctl net.inet.ip.forwarding=1" as root.  To make it persistent across reboots, add "net.inet.ip.forwarding=1" to /etc/sysctl.conf (you may have to create the file), as shown below.
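
A minimal recap of those two steps:

sysctl net.inet.ip.forwarding=1
echo "net.inet.ip.forwarding=1" >> /etc/sysctl.conf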

From now, the server should be ready.

## On your computer

Edit /etc/hostname.wg0 and paste the server's public key between "wgpeer" and "wgaip"; the public key is wgpeer's parameter.  Then run "sh /etc/netstart wg0" to reconfigure your wg0 tunnel.

After this step, you should be able to ping 192.168.10.1 from your computer (and 192.168.10.2 from the server).  If not, please double check the WireGuard and PF configurations on both sides.

## Default route

This simple setup will truly make WireGuard your default route.  You have to understand that services listening on all interfaces will only attach to the WireGuard interface, because it's the only address in rdomain 0; if needed you can use a specific routing table for a service, as explained in the rc.d man page.

Replace the line "up" with the following:

wgrtable 1
up
!route add -net default 192.168.10.1


Your configuration file should look like this:

wgkey YOUR_KEY
wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0
inet 192.168.10.2/24
wgrtable 1
up
!route add -net default 192.168.10.1


Now, add "rdomain 1" to your network interface used to reach the Internet, in my setup it's /etc/hostname.iwn0 and it looks like this.

join network wpakey superprivatekey
join home wpakey notsuperprivatekey
rdomain 1
up
autoconf


Now you can restart networking with "sh /etc/netstart" and all the traffic should pass through the WireGuard tunnel.

# Handling DNS

You may have a nameserver in /etc/resolv.conf that was provided by your local network; it's not reachable anymore.  I highly recommend using unwind (in any case, really) to have a local resolver, or modifying /etc/resolv.conf to use a public resolver.

unwind can be enabled with "rcctl enable unwind" and "rcctl start unwind".  From OpenBSD 7.0 you should have resolvd running by default, which rewrites /etc/resolv.conf once unwind is started; otherwise you need to write "nameserver 127.0.0.1" in /etc/resolv.conf yourself.
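
A short recap of those steps (the resolv.conf line is only needed without resolvd):

rcctl enable unwind
rcctl start unwind
echo "nameserver 127.0.0.1" > /etc/resolv.conf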

# Bypass VPN

If for some reason you need to run a program without routing its traffic through the VPN, it is possible.  The following command will run firefox using routing table 1; however, depending on the content of your /etc/resolv.conf you may have issues resolving names (because 127.0.0.1 is only reachable in rdomain 0!).  A simple fix would be to use a public resolver if you really need to do this often.

route -T 1 exec firefox


=> https://man.openbsd.org/route.8#exec route man page about exec command

# WireGuard behind a NAT

If you are behind a NAT you may need to use the keepalive option on your WireGuard tunnel to keep it working.  Just add "wgpka 20" to enable a keepalive packet every 20 seconds in /etc/hostname.wg0, like this:

wgpeer YOUR_PUBKEY wgendpoint REMOTE_IP 4433 wgaip 0.0.0.0/0 wgpka 20
[....]


=> https://man.openbsd.org/ifconfig#wgpka ifconfig man page explaining wgpka parameter


# Conclusion

WireGuard is easy to deploy, but making it the default network interface adds some complexity.  This is usually simpler with protocols like OpenVPN, because the OpenVPN daemon can automatically do the magic to rewrite the routes (though it doesn't do it very well) and won't prevent non-VPN access until the VPN is connected.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-wireguard-exit.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-wireguard-exit.gmi</link>
  <pubDate>Sat, 09 Oct 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Port of the week: foliate</title>
  <description>
    <![CDATA[
<pre># Introduction

Today I wanted to tell you about the program Foliate, a GTK ebook reader with interesting features.  First of all, there aren't many epub readers available on OpenBSD (or even on Linux).

=> https://johnfactotum.github.io/foliate/ Foliate project website

# How to install

On OpenBSD, a simple "pkg_add foliate" and you are done.

# Features

Foliate supports multiple features such as:



# Port of the week

Because it's easy to use, for its features, and because it works very well compared to the alternatives, this port is nominated for the port of the week!
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/potw-foliate.gmi</guid>
  <link>gemini://perso.pw/blog//articles/potw-foliate.gmi</link>
  <pubDate>Mon, 04 Oct 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Story of making the OpenBSD Webzine</title>
  <description>
    <![CDATA[
<pre># Introduction

Hello readers!  I just started a Webzine dedicated to the OpenBSD project and community.  I'd like to tell you about the process of its creation.

=> https://webzine.puffy.cafe/ The OpenBSD Webzine

# Idea

A week ago I joked on a French OpenBSD IRC channel that it would be nice to make a webzine gathering quotes and links about OpenBSD; I didn't think it would become real a few days later.  OpenBSD has a small community, and even if we can get some news from Mastodon, Twitter, watching new commits, or blog articles, we had nothing gathering all of that.  I can't imagine most OpenBSD users being able or willing to follow everything happening in the project, so I thought a webzine targeting average OpenBSD users would be fine.  The ultimate accomplishment would be that when we release a new issue, readers would enjoy reading it with a nice cup of their favorite drink, as if it were their favorite hobby 'zine.

# Technology doesn't matter

At first I wanted the Webzine to look like a newspaper, so I tried to use Scribus (used to make magazines and other serious stuff) and made a mockup to see what it would look like.  Then I shared it with a small French community, and some people suggested I use LaTeX for the job.  I replied that it was not great for handling the layout exactly as I wanted, but I challenged that person to show me something done with LaTeX that looked better than my Scribus mockup.

One hour later, that person came back with a PDF generated from LaTeX with the same content, and it looked great!  I like LaTeX, but I couldn't believe it could be used efficiently for this job.  I immediately made changes to my Scribus version to improve it, taking the LaTeX PDF as a model, and released a new version.  At that point, I had two PDFs generated from two different tools.

A few people suggested I make a version using mdoc.  I took it as a joke because it wasn't serious, but because boredom is a powerful driving force I decided to reuse the content of my mockup to make another one with mdoc.  I chose to export it to html and had to write a simple CSS style sheet to make it look nice, but ultimately the mdoc export had some issues and required applying changes to the output with sed so the HTML rendering would not look like a man page misused for something else.

Anyway, I had three mockups of the same Webzine example, and decided to use Scribus to export its version as an SVG file and embed it in an html file, allowing web browsers to display it natively.

I asked the Mastodon community (thank you very much to everyone who participated!) which version they liked most, and I got many replies: the mdoc html version was the most preferred with 41%, while 32% liked the SVG-in-html version and 27% the PDF.  The results were very surprising!  The version I liked least was the most preferred, but there were reasons underneath.

The PDF version was not available in web browsers (or at least didn't display natively) and some readers didn't enjoy that.  As for the SVG version, it didn't work well on mobile phones, and both versions didn't work at all in console web clients (links, lynx, w3m).  There were also accessibility concerns with PDF or SVG for screen reader / text-to-speech users, and I wanted the Webzine to be available to everyone, so both formats were a no-go.

Ultimately, I decided the best way would be to publish the Webzine as HTML if I wanted it to look nice and be accessible on any device for any user.  I'm not a huge fan of the web and html, but it was the best choice for the readers.  From this point, I started working with a few people (still from the same French OpenBSD community) to decide how to make it as HTML; from this moment I wasn't alone anymore in the project.

In the end, each issue is written in html "by hand" because it just works and doesn't require an extra complexity layer.  Simple html is not harder than markdown, LaTeX or some weird format, because it doesn't require extra tweaks after conversion.

# Community

I created a git repository on tildegit.org, where I already host some projects, so we could work on this as a team.  The requirements and what we wanted to do were refined a bit more every day.  I designed a simplistic framework in shell that would suit our needs.  It wasn't long before the framework could generate html pages; some style changes happened all along the development and I think this will keep happening regularly in the near future.  We had a nice base to start writing content.

We had to choose licensing, contribution processes, who does what, etc...  Fun times, I enjoyed this a lot.  Our goal was to make a Webzine that would work everywhere, without JS, with a dark mode, and still usable on phones or console clients, so we regularly checked all of that and reported issues that were getting fixed really quickly.

# Simple framework

Let's talk a bit about the website framework.  There is a simple hierarchy of directories: one directory dedicated to each issue, a Makefile to build everything, and parts that are common to all generated pages (containing the style, html header and footer).  Each issue is made from a lot of files whose names start with a number, so when a page is generated by concatenating all the parts we can keep the numbering order (see the sketch below).

It may not be optimized CPU wise, but concatenating parts allows reusing common parts (mainly header and footer) and also working on smaller files: each file of an issue represents one of its sections (Quote, Going further, Headlines, etc...).
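
As an illustration, the build boils down to something like this sketch; the file names are invented here, the real framework lives in the git repository linked below:

cat parts/header.html issues/1/[0-9]*-*.html parts/footer.html > output/issue-1.html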

# Conclusion

This is a fantastic journey; we are starting to build a solid team for the webzine.  Everyone is allowed to contribute.  My idea was to give every reader a small slice of the OpenBSD project's life every so often, and I think we are on the right track now.  I'd like to thank all the people from the https://openbsd.fr.eu.org/ community who joined me at the early stages to make this project great.

=> https://tildegit.org/solene/openbsd-webzine/ Git repository of the OpenBSD Webzine (if you want to contribute)
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/webzine-development.gmi</guid>
  <link>gemini://perso.pw/blog//articles/webzine-development.gmi</link>
  <pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Measuring power efficiency of a CPU frequency scheduler on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

I started to work on the OpenBSD code dealing with CPU frequency scaling.  The current automatic logic is a trade-off between okay performance and okay battery life.  I'd like the auto policy to behave differently on battery and on AC power (for laptops), to improve battery life for nomad users and performance for people connected to the grid.

I've been able to make rough changes to produce this effect, but before going further I wanted to see whether I got any improvement in battery life, and if so, to what extent.

In the coming sections of the article I will refer to the Wh unit, meaning watt-hour.  It's a unit measuring a quantity of energy used; because energy usage is absolutely not linear, we can average the usage and scale it to one hour so it's easy to compare.  An oven drawing 1 kW while on and staying on for an hour will use 1 kWh (one kilowatt-hour), while an electric heater drawing 2 kW and turned on for 30 minutes will use 1 kWh too.

=> https://en.wikipedia.org/wiki/Kilowatt-hour Kilowatt Hour explanation from Wikipedia

# How to understand power usage for nomad users

While one may think that the faster we complete a task, the less time the system stays up and the less battery we use, it's not entirely true for laptops or computers.

There are two kinds of load on a system: interactive and non-interactive.  In non-interactive mode, let's imagine the user powers on the computer, runs a job, expects it to be finished as soon as possible and then shuts down the computer.  This is (I think) highly unusual for people using a laptop on battery.  Most of the time, users with a laptop will want their computer to stay up as long as possible without having to charge.

In the scenario I will call interactive, the computer may be up with a lot of idle time while the human operator is slowly typing, thinking or reading.  Usually one doesn't power off a computer and power it on again while sitting in front of the laptop.  So, for a given task, finishing it faster may not be more battery efficient, because whatever time it takes to do X(), the system will stay up afterwards.

# Testing protocol

Here is the protocol I followed to test the "powersaving" frequency policy and then the regular auto policy; a scripted sketch of the measurement follows the list.

1. Clean package of games/gzdoom
2. Unplug charger
3. Dump hw.sensors.acpibat1.watthour3 value in a file (it's the remaining battery in Wh)
4. Run compilation of the port games/gzdoom with dpb set to use all cores
5. Dump watthour3 value again
6. Wait until 18 minutes and 43 seconds
7. Dump watthour3 value again
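
A sketch of how steps 3 to 7 could be scripted; the sensor name comes from my laptop and varies between machines, and the two placeholders stand for the dpb run and the remaining wait time:

#!/bin/sh
# append the remaining battery energy (in Wh) with a timestamp
mark() { echo "$(date +%s) $(sysctl -n hw.sensors.acpibat1.watthour3)" >> measures.txt; }
mark                          # step 3: before the build
your_dpb_build_command_here   # step 4: compile games/gzdoom
mark                          # step 5: after the build
sleep "$REMAINING_SECONDS"    # step 6: wait until 18 minutes and 43 seconds elapsed
mark                          # step 7: end of the measurement window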

Why games/gzdoom?  It's a port I know can be compiled with a parallel build, allowing the use of all CPUs, and I know it takes some time without being too short either.

Why 18 minutes and 43 seconds?  It's the time it takes the powersaving policy to compile games/gzdoom.  I needed to compare the amount of energy used by both policies over the exact same time with the exact same job done (remember the laptop must stay up as long as possible, so we don't shut it down after compiling gzdoom).

I could have extended the duration of the test so the powersaving run would have had some idle time too, but given that idle time draws the exact same power with both policies, that would have been meaningless.

# Results

I'm planning to add results for the lowest and highest modes (apm -L and apm -H) to see the extremes.

## Compilation time

As expected, powersaving was slower than the auto mode, 18 minutes and 43 seconds versus 14 minutes and 31 seconds for the auto policy.

Policy        Compile time (s)   Idle time (s)
-----------   ----------------   -------------
powersaving   1123               0
auto          871                252


=> static/freq-time.png Chart showing the difference in time spent for the two policies


## Energy used

We see that powersaving used more energy during the compilation of gzdoom itself, 5.9 Wh vs 5.6 Wh, but as we don't turn off the computer after the compilation is done, the auto mode also spent a few minutes idling and used another 0.74 Wh in that time.

Policy        Compile energy (Wh)   Idle energy (Wh)   Total (Wh)
-----------   -------------------   ----------------   ----------
powersaving   5.90                  0.00               5.90
auto          5.60                  0.74               6.34


=> static/freq-power.png Chart showing the difference in energy used for the two policies


# Conclusion

For the same job done (compiling games/gzdoom and staying on for 18 minutes and 43 seconds), the powersaving policy used 5.90 Wh while the auto mode used 6.34 Wh.  This is a saving of 6.90% of energy.

This is a policy I made for testing purposes; it may be too conservative for most people, I don't know.  I'm currently playing with this, and with a reproducible benchmark like this one I'm able to compare results between changes in the scheduler.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-power-usage.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-power-usage.gmi</link>
  <pubDate>Sun, 26 Sep 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Reuse of OpenBSD packages for trying runtime</title>
  <description>
    <![CDATA[
<pre># Introduction

So, I'm currently going through OpenBSD end user packages (those providing binaries) one by one, to see if they work when installed alone.  I needed a simple way to keep downloaded packages, and I didn't want to go the hard way by using rsync on a package mirror, because it would waste too much bandwidth and take too much time.

The most efficient way I found relies on a cache and on ordering the package sources.

# pkg_add mastery

pkg_add has a special variable named PKG_CACHE; when it's set, downloaded packages are copied into this directory.  This is handy because every time I install a package, all the packages downloaded by pkg_add are kept in that directory.

The other variable of interest for the job is PKG_PATH, because we want pkg_add to first look in $PKG_CACHE and, if a package is not found there, in the usual mirror.

I've set this in my /root/.profile:

export PKG_CACHE=/home/packages/
export PKG_PATH=${PKG_CACHE}:http://ftp.fr.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/


Every time pkg_add has to get a package, it will first look in the cache; if it's not there, it will download it from the mirror and then store it in the cache.

# Saving time removing packages

Because I try packages one by one, installing and removing dependencies takes a lot of time (I'm using old hardware for the job).  Instead of installing a package, deleting it and removing its dependencies, it's easier to work with manually installed packages and remove dependencies only once done; this way you keep already installed dependencies that will be required for the next package.

#!/bin/sh

# prepare the packages passed as parameters as a regex for grep
KEEP=$(echo $* | awk '{ gsub(" ","|",$0); printf("(%s)", $0) }')

# iterate over the manually installed packages
# but skip the packages passed as parameters
for pkg in $(pkg_info -mz | grep -vE "$KEEP")
do
    # instead of deleting the package
    # mark it as installed automatically
    pkg_add -aa $pkg
done

# install the packages given as parameters
pkg_add $*

# remove packages not required anymore
pkg_delete -a


This way, I can use this script (named add.sh) as "./add.sh gnome" and then reuse it with "./add.sh xfce"; the common dependencies between the gnome and xfce packages won't be removed and reinstalled, they will be kept in place.

# Conclusion

There are always tricks to make bandwidth and storage usage more efficient; it's not complicated, and it's always a good opportunity to understand the simple mechanisms available in our daily tools.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-quick-package-work.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-quick-package-work.gmi</link>
  <pubDate>Sun, 19 Sep 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>How to use cpan or pip packages on Nix and NixOS</title>
  <description>
    <![CDATA[
<pre># Introduction

When using Nix/NixOS and requiring some development libraries available in pip (for python) or cpan (for perl) but not available as packages, it can be extremely complicated to get those on your system, because the usual way won't work.

# Nix-shell

The command nix-shell will be our friend here: we will define a new environment in which we create the packages for the libraries we need.  If you really think a library is useful, it may be time to contribute to nixpkgs so everyone can enjoy it :)

The simple way to invoke nix-shell is to use packages; for example the command `nix-shell -p python38Packages.pyyaml` will give you access to the python library pyyaml for Python 3.8 as long as you run python from this shell.

The same goes for Perl: we can start a shell with some packages available for database access; multiple packages can be passed to "nix-shell -p", like this: `nix-shell -p perl532Packages.DBI perl532Packages.DBDSQLite`.

# Defining a nix-shell

Reading the explanations found on a blog post and help received on Mastodon, I've been able to understand how to use a simple nix-shell definition file to declare new cpan or pip packages.

=> https://ghedam.at/15978/an-introduction-to-nix-shell Mattia Gheda's blog: Introduction to nix-shell
=> https://social.coop/@cryptix/106952010198335578 Mastodon toot from @cryptix@social.coop how to declare a python package on the fly

What we want is to create a file that will define the state of the shell; it will contain the new packages needed, but also the list of packages to make available.

# Skeleton

Create a file with the nix extension (or really, whatever file name you want); the special file name "shell.nix" will be automatically picked up when running "nix-shell" instead of passing the file name as a parameter.

with (import <nixpkgs> {});
let
  # we will declare new packages here
in
mkShell {
  buildInputs = [ ]; # we will declare the package list here
}


Now we will see how to declare a python or perl library.

## Python

For python, we need to know the package name on pypi.org and its version.  Reusing the previous template, the code would look like this for the package crossplane:

with (import <nixpkgs> {}).pkgs;
let
  crossplane = python37.pkgs.buildPythonPackage rec {
    pname = "crossplane";
    version = "0.5.7";
    src = python37.pkgs.fetchPypi {
      inherit pname version;
      sha256 = "a3d3ee1776bcccebf7a58cefeb365775374ab38bd544408117717ccd9f264f60";
    };
    meta = { };
  };
in
mkShell {
  buildInputs = [ crossplane python37 ];
}


If you need another library, replace the crossplane variable name but also the pname value with the new name; don't forget to update that name in buildInputs at the end of the file.  Use the correct version value too.

There are two references to python37 here; this implies we want Python 3.7, adapt to the version you want.

The only tricky part is the sha256 value; the only easy way I found to obtain it is the following (an alternative follows the list):

1. declare the package with a random sha256 value (like echo hello | sha256)
2. run nix-shell on the file, see it complaining about the wrong checksum
3. get the url of the file, download it and run sha256 on it
4. update the file with the new value
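
Alternatively, if you have the nix-prefetch-url tool available, it can download the file and print its sha256 in one step; replace the URL below (a placeholder) with the tarball URL from the error message:

nix-prefetch-url https://example.org/crossplane-0.5.7.tar.gz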

## Perl

For perl, it is required to use a script available in the official git repository, the one used when packages are made.  We will only download the latest checkout because the repository is quite huge.

In this example I will generate a package for Data::Traverse.

$ git clone --depth 1 https://github.com/nixos/nixpkgs
$ cd nixpkgs/maintainers/scripts
$ nix-shell -p perlPackages.{CPANPLUS,perl,GetoptLongDescriptive,LogLog4perl,Readonly}
$ ./nix-generate-from-cpan.pl Data::Traverse
attribute name: DataTraverse
module: Data::Traverse
version: 0.03
package: Data-Traverse-0.03.tar.gz (Data-Traverse-0.03, DataTraverse)
path: authors/id/F/FR/FRIEDO
downloaded to: /home/solene/.cpanplus/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz
sha-256: dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f
unpacked to: /home/solene/.cpanplus/5.34.0/build/EB15LXwI8e/Data-Traverse-0.03
runtime deps:
build deps:
description: Unknown
license: unknown
License 'unknown' is ambiguous, please verify
RSS feed: https://metacpan.org/feed/distribution/Data-Traverse
===
DataTraverse = buildPerlPackage {
  pname = "Data-Traverse";
  version = "0.03";
  src = fetchurl {
    url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";
    sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";
  };
  meta = {
  };
};


We will only reuse the part after the ===; this is nix code that defines a package named DataTraverse.

The shell definition will look like this:

with (import <nixpkgs> {});

let

DataTraverse = buildPerlPackage {

pname = "Data-Traverse";

version = "0.03";

src = fetchurl {

url = "mirror://cpan/authors/id/F/FR/FRIEDO/Data-Traverse-0.03.tar.gz";

sha256 = "dd992ad968bcf698acf9fd397601ef23d73c59068a6227ba5d3055fd186af16f";

};

meta = { };

};

in

mkShell {

buildInputs = [ DataTraverse perl ];

# putting perl here is only required when not using NixOS, it tells Nix you want its perl binary

}


Then, run "nix-shell myfile.nix" and run your perl script using Data::Traverse, it should work!
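
For instance, a one-liner check from within the shell (any script using the module would do; if the module were missing, perl would die with a "Can't locate" error):

$ nix-shell myfile.nix

$ perl -MData::Traverse -e 'print "Data::Traverse loaded\n"'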

# Conclusion

Using libraries that are not packaged is not that bad once you understand the logic: declare each one properly as a new package that you keep locally, then hook it into your current shell session.

Finding the syntax, the logic and the method when you are not a Nix guru made me despair.  I struggled a lot with this, and had first tried to install from cpan or pip directly (which wouldn't have survived the next update of my system anyway), without even getting that to work.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/nix-cpan-pip.gmi</guid>
  <link>gemini://perso.pw/blog//articles/nix-cpan-pip.gmi</link>
  <pubDate>Sat, 18 Sep 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Benchmarking compilation time with ccache/mfs on OpenBSD</title>
  <description>
    <![CDATA[
<pre># Introduction

I always wondered how to make packages building faster.  There are at least two easy tricks available: storing temporary data into RAM and caching build objects.

Caching build objects can be done with ccache: it intercepts cc and c++ calls (the programs compiling C/C++ files) and, depending on the inputs, reuses a previously built object if available, or builds normally and stores the result for potential reuse.  It is nearly useless when you build software only once, because objects must be cached before being useful, and it obviously doesn't help for non C/C++ programs.
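
As an aside, the generic way to put ccache in front of a compiler for a single build looks like this (a rough sketch, not the dpb/ports integration used in this article):

$ CC="ccache cc" CXX="ccache c++" make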

The other trick is using a temporary filesystem stored in memory (RAM): on OpenBSD we will use mfs, but on Linux or FreeBSD you could use tmpfs.  The difference between the two is that mfs reserves the given amount of memory, while tmpfs is faster and doesn't reserve the memory for its filesystem (which has pros and cons).
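
For reference, mounting such a filesystem by hand could look like the following; mount points and sizes are arbitrary examples, see mount_mfs(8) and mount(8) for details:

# OpenBSD: a 1 GB memory filesystem backed by swap

$ doas mount_mfs -s 1G swap /mnt/mfs

# Linux equivalent with tmpfs

$ sudo mount -t tmpfs -o size=1g tmpfs /mnt/tmp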

So, I decided to measure the build time of the Gemini browser Lagrange in three cases: without ccache, with ccache but an empty cache (first build, so no cached objects), and with ccache with objects in it.  I did these three tests multiple times because I also wanted to measure the impact of using a memory-based filesystem versus the old spinning disk drive in my computer.  This made for a lot of tests, because I tried ccache on mfs with package build objects (later referenced as pobj) on mfs, then one on hdd and the other on mfs, and so on.

To proceed, I compiled net/lagrange using dpb, cleaning the generated lagrange package every time.  Using dpb made measurement a lot easier and the setup reliable.  It added some overhead when checking dependencies (which were already installed in the chroot), but the point was to compare the time difference between the various tweaks.

# Results numbers

Here are the results, raw and with a graphical view.  I ran the same test multiple times in some cases to see if the result dispersion was big, but results were reliable at +/- 1 second.

Type                  | Second build (s) | Empty cache (s)
ccache mfs + pobj mfs | 60               | 133
ccache mfs + pobj hdd | 63               | 130
ccache hdd + pobj mfs | 61               | 127
ccache hdd + pobj hdd | 68               | 137
no ccache + pobj mfs  | 124              | n/a
no ccache + pobj hdd  | 128              | n/a

(without ccache there is no cache to warm up, so the single duration applies to every build)


=> static/ccache-hdd-bench.png Diagram with results

# Results analysis

At first glance, we can see that not using ccache results in slightly faster builds, so ccache definitely has a small performance cost when there are no cached objects.

Then, we can see the results are really close together, except for ccache and pobj both on the hdd, which is by far the slowest combination compared to the time differences between the others.


# Problems encountered

My building system has 16 GB of memory and 4 cores.  I want builds to be as fast as possible so I use the 4 cores, but for some programs built with Rust (like Firefox), more than 8 GB of memory (4x 2GB) is required because of Rust, so I need to keep a lot of memory available.  I tried building it once with a 10 GB mfs filesystem, but at packaging time it reached the filesystem limit and failed; it also swapped during the build process.

When using an 8 GB mfs for pobj, I've been hitting the limit, which induced build failures: building four ports in parallel can take a lot of disk space, especially at package time when the result gets copied.  It's not always easy to store everything in memory.

I decided to go with a 3 GB ccache over MFS and keep the pobj on the hdd.
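
In practice this roughly translates to the following configuration; the paths are examples, the fstab line reserves the mfs at boot and the mk.conf variables are the usual ports knobs:

# /etc/fstab: a 3 GB mfs for the ccache directory

swap /build/ccache mfs rw,nodev,nosuid,-s=3G 0 0

# /etc/mk.conf: enable ccache for ports, keep build objects on the hdd

USE_CCACHE=Yes

CCACHE_DIR=/build/ccache

WRKOBJDIR=/build/pobj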

I had no spare SSD to add to the comparison. :(

# Conclusion

Using mfs for at least ccache or pobj, but not necessarily both, is beneficial.  I would recommend putting ccache in mfs, because it only requires 1 or 2 GB of memory for regular builds, while storing the pobj in mfs can require a few dozen gigabytes of memory (I think chromium required 30 or 40 GB last time I tried).
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/openbsd-ccache-mfs.gmi</guid>
  <link>gemini://perso.pw/blog//articles/openbsd-ccache-mfs.gmi</link>
  <pubDate>Sat, 18 Sep 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Experimenting with a new OpenBSD development lab</title>
  <description>
    <![CDATA[
<pre># Experimenting

This article is not a how-to and doesn't explain anything; I just wanted to share how I spend my current free time.  It's obviously OpenBSD related.

When updating or making new packages, it's important to get the dependencies right.  For the compilation dependencies at least it's not too hard, because you know they are fine once the building process runs to completion, but at run time you may have surprises and discover missing dependencies.

# What's a dependency?

Software is made of written text called source code (or code, to make it simpler), but to avoid wasting time (because writing code is hard enough already), some people write libraries, which are pieces of code meant to be used by other programs (through fellow developers) to save everyone's time and effort.

A library can provide graphics manipulation, time and date functions, sound decoding etc... and the software we use relies on A LOT of extra code that comes from other pieces of code we have to ship separately.  Those are dependencies.

There are dependencies required for building a program: they are used to manipulate the source code to transform it into machine readable code, or to organize the building process to ease development, and so on.  Then there are library dependencies, which are required for the software to run; the simplest one to understand would be the library an audio player uses to access the audio system of your operating system.
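
The library dependencies of an installed binary can be listed with the ldd command, for example (mpv is just an arbitrary illustration, any dynamically linked program works):

$ ldd /usr/local/bin/mpv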

And finally, we have run time dependencies, which show up when loading a software or during its use.  They may not be well documented in the project, so we can't really know they are required until we use some feature of the software and it crashes or errors out because of something missing.  This could be a program calling an external program to delegate the resizing of a picture.

# What's up?

In order to spot these run time dependencies, I've started to use an old laptop (a Thinkpad T400 that I absolutely love) with a clean OpenBSD installation, lots of local packages on my network (more on that later) and a very clean X environment.

The point of this computer is to remove every package, install only the one I need to try (pulling the dependencies that come with it) and see if it works under these minimal conditions.  It should work with no issue if the package is correctly done.

Once I'm satisfied with the test process, I will remove every package from the system and try another one.
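
The cycle could look like this, run as root (the host name is an example, PKG_PATH pointing at the building machine described below; pkg_delete -X removes every installed package):

# export PKG_PATH=http://buildmachine.example.com/packages/amd64/all/

# pkg_add some-package-to-test

# pkg_delete -X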

Sometimes, as we may have many many packages installed, it happens that a run time dependency is present on the system without being declared in the software package we are working on, and we don't see the failure because the requirement is provided by some other package.  By using a clean environment to check every single program separately, I remove the "other packages" that could provide an undeclared requirement.

# Building

When I work on packages I often need to compile many of them, and it takes time, a lot of time; my laptop usually makes a lot of noise, gets hot and becomes slow for anything else, which is not very practical.  I'm going to set up a dedicated building machine that I will power on when I work on ports, hidden in some isolated corner at home, building packages when I need it.  That machine is a bit more powerful and will keep my laptop usable in the meantime.

This machine and the laptop make a great combination for making quick changes and testing how they go.  The laptop pulls packages directly from the building machine, and things can be fixed on the building machine quite fast.

# The end

Contributing to packages is endless work: making good packages is hard and requires testing.  I'm not really good at making packages, but I want to improve myself in that field and also improve the way we can test that packages work.  With these new development environments I hope I will be able to contribute a bit more to the quality of future OpenBSD releases.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/experiments-openbsd-building.gmi</guid>
  <link>gemini://perso.pw/blog//articles/experiments-openbsd-building.gmi</link>
  <pubDate>Thu, 16 Sep 2021 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Reviewing some open source distraction free editors</title>
  <description>
    <![CDATA[
<pre># Introduction

This article is about comparing "distraction free" editors running on Linux.  This category of editors is supposed to be used in full screen and shouldn't display much more than text, allowing you to stay focused on the text.

I've found a few programs that run on Linux and are open source; I deliberately omitted web browser based editors.  Here is the list:

- Apostrophe
- Focuswriter
- Ghostwriter
- Quilter

I used them on Alpine, three of them installed from Flatpak and Apostrophe installed from the Alpine packages repositories.

I'm writing this on my netbook and wanted to see if a "distraction free" editor could be valuable for me: the laptop screen and resolution are small, and using it for writing seems a fun idea, although I'm not really convinced of the use (for me!) of such editors.

# Resource usage and performance

Quick tour of the memory usage (reported in top in the SHR column)



As for the perceived performance when typing I've had mixed results.



# Features

I didn't know much what to expect from these editors; I've seen some common features and some others that I discovered along the way.



# Personal experience and feelings

It would be long and not really interesting to list which program has which feature, so here are my feelings about these programs.

## Apostrophe

It's the one I used for writing this article.  It feels very nice, it proposes only three themes that you can't customize, and the font can't be changed.  Although you can't customize much, it's the one that looks best out of the box, is the easiest to use, and just works.  As a distraction free editor, it seems the best approach.

This is the one I would recommend to anyone wanting a distraction free editor.

=> https://gitlab.gnome.org/World/apostrophe Apostrophe project website

## Quilter

Because of the input lag when typing text, this was the worst experience for me; maybe it's platform specific?  The user interface looks a LOT like Apostrophe, to the point I'd think one is a fork of the other, but in regards to performance it's drastically different.  It offers three themes and also allows choosing the font, but only from three named "Quilt something", which is disappointing.

=> https://github.com/lainsce/quilter Quilter project website

## Focuswriter

This one has potential; it has a lot of things you can tweak in the preferences menu, from which characters should be doubled (like quotes) when typed, to daily goals, statistics, configurable shortcuts for everything, and writing from right to left.

It also relies a lot on its theming features to choose the background (picture or color), the text spacing, the font, its size and the opacity of the typing area.  It requires too many tweaks to be usable for me: the default themes looked nice but the text was small and ugly, and it was absolutely not enjoyable to type and watch the text appear.  I tried to duplicate a theme (from the user interface) and change the font and size, but I didn't get something I enjoyed.  Maybe with some time spent it could look good, but the other tools provide something that just works and looks good out of the box.

=> https://gottcode.org/focuswriter/ Focuswriter project website

## Ghostwriter

I tried ghostwriter 1.x at first, then I saw there was a 2.x version with a lot more features, so I used both for this review.  I'll only cover the 2.x version, but looking at the repositories information, many distributions provide the old version, including flatpak.

Ghostwriter seems to be the king of the arena.  It has all the features you would expect from a distraction free editor, has sane defaults, and is customizable and enjoyable out of the box.  For writing long documents, the markdown outlining panel showing the structure of the document is very useful, and there are features for writing goals and statistics; these will certainly be useful for some users.

=> https://wereturtle.github.io/ghostwriter/ Ghostwriter project website

## vi

I couldn't review text editors without including a terminal based one.  I chose vi because it seemed the most distraction free to me: emacs has too many features and nano displays too many things at the bottom of the screen.  I chose vi over ed because it's more beginner friendly, but ed would work as well.  Note that I am using vi (from busybox on Alpine Linux) and not Vim or nvi.

vi doesn't have many features; it can save text to a file.  The display can be customized in the terminal emulator, which allows a great choice of font / theme / style / coloring after decades of refinement in this field.  It has no focus mode or markdown coloration/integration, which I admit can be confusing for big texts with some markup involved, at least for bullet lists and headers.  I always welcome a bit of syntactic coloration, and vi lacks this (a more advanced text editor solves that).  vi won't let you export into any kind of file except plain text, so you need to know how to convert the text file into the output format you are looking for.

=> https://busybox.net/ busybox project website

# Conclusion

It's hard for me to tell whether typing this article in Apostrophe was better or more efficient than using my regular kakoune terminal text editor.  The font looks absolutely better in Apostrophe, but I never gave much attention to the look and feel of my terminal emulator.

I'll try using Apostrophe or Ghostwriter for further articles, at least by using my netbook as a typing machine.
</pre>
    ]]>
  </description>
  <guid>gemini://perso.pw/blog//articles/review-distraction-free-editors.gmi</guid>
  <link>gemini://perso.pw/blog//articles/review-distraction-free-editors.gmi</link>
  <pubDate>Wed, 15 Sep 2021 00:00:00 GMT</pubDate>
</item>

  </channel>
</rss>