<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"> <channel> <title>Solene'%</title> <description></description> <link>gemini://perso.pw/blog/</link> <atom:link href="gemini://perso.pw/blog/rss.xml" rel="self" type="application/rss+xml" /> <item> <title>A few haikus for early 2023</title> <description> <![CDATA[ <pre>A small selection of haikus that were published on Mastodon; they are not always well crafted, but these are my first ones, and hopefully experience will help me do better later on.
A blackbird hunting
A blue sky tinted with white
The thyme in bloom
Snow-covered plateaus
Warm and sheltered inside -
A violent storm
Antarctica -
Cyclopean monuments
A shadowy winter
A little grey pond -
Carpeted with leaves
All in silence
A beach in the sun
The bird on a leash in the sky -
Its string, a kite
Ideas and thoughts -
Like a summer thunderstorm
Falling from the sky
Sleeping in late
Sunday, the clocks changing -
The song of the birds
Sickness and pain
Mild weather, buds in bloom -
Time, the healer
The wind in the leaves -
The trickling of water
A forest waking
The silent streets
Dawn struggling to rise -
A frost-covered garden
A night of full moon
Barbecue with friends -
Summer holidays
Some potatoes
A charcuterie board -
A hearty raclette
A spring-blue sky
Flowers, bees, everything waking -
A walk in the forest
</pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/haiku-2023Q1.gmi</guid> <link>gemini://perso.pw/blog//articles/haiku-2023Q1.gmi</link> <pubDate>Sun, 09 Apr 2023 00:00:00 GMT</pubDate> </item> <item> <title>How to set up a local network cache for Flatpak</title> <description> <![CDATA[ <pre># Introduction As you may have understood by now, I like efficiency on my systems, especially when it comes to network usage, due to my poor, slow ADSL internet connection. Flatpak is nice, I like it for many reasons, and what's cool is that it can download only updated files instead of the whole package again. Unfortunately, when you start using more and more packages that are updated daily, and which require subsystems like NVIDIA drivers, MESA, etc., this adds up to quite a lot of daily downloads; multiply that by a few computers and you get a lot of network traffic. But don't worry, you can cache it on your LAN to download updates only once. # Setup As usual for this kind of job, we will use Nginx on a local server on the network, and configure it to act as a reverse proxy to the flatpak repositories. This requires modifying the URL of each flatpak repository on the machines, which is a one-time operation. Here is the configuration you need on your Nginx to proxy Flathub:
map $status $cache_header {
200 "public";
302 "public";
default "no-cache";
}
server {
listen 0.0.0.0:8080; # you may want to listen on port 80, or add TLS
server_name my-cache.local; # replace this with your hostname, or system IP
# flathub cache
set $flathub_cache https://dl.flathub.org;
location /flathub/ {
rewrite ^/flathub/(.*) /$1 break;
proxy_cache flathub;
proxy_cache_key "$request_filename";
add_header Cache-Control $cache_header always;
proxy_cache_valid 200 302 300d;
expires max;
proxy_pass $flathub_cache;
}
}
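# note: the proxy_cache_path directive below must sit in the http context, as a sibling of the server block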
proxy_cache_path /var/cache/nginx/flathub/cache levels=1:2
keys_zone=flathub:5m
max_size=20g
inactive=60d
use_temp_path=off;
This will cause nginx to proxy requests to the Flathub server, but keep files in a 20 GB cache. You will certainly need to create the `/var/cache/nginx/flathub` directory, and make sure it has the correct ownership for your system configuration. If you want to support another flatpak repository (like Fedora's), you need to create a new location and a new cache zone in your nginx config; a sketch is shown after the client command below. # Client configuration On each client, you need to change the URL used to reach Flathub; in the example above, the URL is `http://my-cache.local:8080/flathub/repo/`. You can change the URL with the following command:
flatpak remote-modify flathub --url=http://my-cache.local:8080/flathub/repo/
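Back on the server side, here is a minimal sketch of what a second repository could look like; `https://flatpak.example.org` is a placeholder for the real repository URL, and the `example` cache zone would need its own `proxy_cache_path` line similar to the Flathub one:
location /example/ {
rewrite ^/example/(.*) /$1 break;
proxy_cache example;
proxy_cache_key "$request_filename";
add_header Cache-Control $cache_header always;
proxy_cache_valid 200 302 300d;
expires max;
proxy_pass https://flatpak.example.org;
}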
Please note that when you first add the flathub repository, you must use the official URL so you get the correct configuration; you can then change its URL with the above command. # Conclusion Our dear nginx is still super useful as a local caching server; it's super fun to see some updates arriving at 100 MB/s from my NAS now. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/lan-cache-flatpak.gmi</guid> <link>gemini://perso.pw/blog//articles/lan-cache-flatpak.gmi</link> <pubDate>Wed, 05 Apr 2023 00:00:00 GMT</pubDate> </item> <item> <title>Detect left over users and groups on OpenBSD</title> <description> <![CDATA[ <pre># Introduction If you use OpenBSD and administer machines, you may be aware that packages can install new dedicated users and groups, and that if you remove such a package, the users/groups won't be deleted; instead, `pkg_delete` displays instructions for deleting them. In order to keep my OpenBSD systems clean, I wrote a script that looks for users and groups installed by packages (their names start with the character `_`) and checks if the related package is still installed; if not, it outputs instructions that can be run in a shell to clean up your system. # The code
#!/bin/sh
# list users and groups installed by packages but left over after package removal

SYS_USERS=$(mktemp /tmp/system_users.txt.XXXXXXXXXXXXXXX)
PKG_USERS=$(mktemp /tmp/packages_users.txt.XXXXXXXXXXXXXXX)

# users in /etc/passwd created for packages: name starts with "_" and UID > 500
awk -F ':' '/^_/ && $3 > 500 { print $1 }' /etc/passwd | sort > "$SYS_USERS"

# users declared by the currently installed packages (@newuser lines in +CONTENTS files)
find /var/db/pkg/ -name '+CONTENTS' -exec grep -h ^@newuser {} + | sed 's/^@newuser //' | awk -F ':' '{ print $1 }' | sort > "$PKG_USERS"

# users required by an installed package but missing from /etc/passwd
BOGUS=$(comm -1 -3 "$SYS_USERS" "$PKG_USERS")
if [ -n "$BOGUS" ]
then
    echo "Bogus users/groups (missing in /etc/passwd, but a package needs them)" >&2
    echo "$BOGUS" >&2
fi

# users present in /etc/passwd but not required by any installed package
EXTRA=$(comm -2 -3 "$SYS_USERS" "$PKG_USERS")
if [ -n "$EXTRA" ]
then
    echo "Extra users" >&2
    for user in $EXTRA
    do
        echo "userdel $user"
        echo "groupdel $user"
    done
fi

rm "$SYS_USERS" "$PKG_USERS"
## How to run Write the content of the script above in a file, mark it executable, and run it from the shell; it should display a list of `userdel` and `groupdel` commands for all the extra users and groups. # Conclusion With this script and the package `sysclean`, it's quite easy to keep your OpenBSD system as clean as a fresh install. # Limitations It's not perfect in its current state: if you already deleted a user, the corresponding group that was left behind won't be reported. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/openbsd-delete-old-users.gmi</guid> <link>gemini://perso.pw/blog//articles/openbsd-delete-old-users.gmi</link> <pubDate>Mon, 03 Apr 2023 00:00:00 GMT</pubDate> </item> <item> <title>Monitor your remote host network quality using smokeping on OpenBSD</title> <description> <![CDATA[ <pre># Introduction If you need to monitor the network quality of a link, or the network availability of a remote host, I recommend taking a look at Smokeping. => https://oss.oetiker.ch/smokeping/ Smokeping official Website Smokeping is a Perl daemon that regularly runs a command (fping, some DNS check, etc.) multiple times to check the availability of the remote host, but also the quality of the link, including the standard deviation of the response time. It becomes very easy to know if a remote host is flaky, or if the link where Smokeping runs isn't stable any more, when you see that all the remote hosts have connectivity issues. Let me explain how to install and configure it on OpenBSD 7.2 and 7.3. # Installation Smokeping comes in two parts shipped in the same package: the daemon that runs 24/7 to gather metrics, and the FastCGI component used to render the website for visualizing data. The first step is to install the `smokeping` package.
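As root, that should be as simple as:
pkg_add smokeping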
The package also installs the file `/usr/local/share/doc/pkg-readmes/smokeping`, which explains the setup. It contains a lot of instructions, from basic setup to advanced configuration, but without many explanations if you are new to Smokeping. ## The daemon Once you have installed the package, the first step is to configure Smokeping by editing the file `/etc/smokeping/config` as root. Under the `*** General ***` section, you can change the variables `owner` and `contact`; this information is displayed on the Smokeping HTML interface, so if you are in a company and colleagues look at the graphs, they can find out who to reach if there is an issue with Smokeping or with the links. This is not useful if you only use it yourself. Under the `*** Alerts ***` section, you can configure email notifications by setting `to` and `from` to match your email address and a custom origin address for Smokeping's emails. Then, under the `*** Targets ***` section, you can configure each host to monitor. The syntax is unusual though.
probe = FPing
menu = Top
title = Network Latency Grapher
remark = Welcome to the SmokePing
+ Remote
menu = Remote
title = Remote hosts
++ Persopw
menu = perso.pw
title = My server perso.pw
host = perso.pw
++ openportspl
menu = openports.pl
title = openports.pl VM at openbsd.amsterdam
host = openports.pl
++ grifonfr
menu = grifon.fr
title = grifon.fr VPN endpoint
host = 89.234.186.37
+ LAN
menu = Lan
title = Lan network at home
++ solaredge
menu = solaredge
title = solaredge
host = 10.42.42.246
++ modem
menu = ispmodem
title = ispmodem
host = 192.168.1.254
Now that you have configured Smokeping, you need to enable and start the service.
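A minimal sketch with rcctl, run as root:
rcctl enable smokeping
rcctl start smokeping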
If everything is alright, `rcctl check smokeping` shouldn't fail; if it does, you can read `/var/log/messages` to find out why. Usually the culprit is a `+` line that isn't valid because of an unauthorized character or a space. I recommend always adding a public host of a big platform known to work reliably all the time, to have a comparison point against all your other hosts. ## The Web Interface Now that the daemon is running, you certainly want to view the graphs produced by Smokeping. Reusing the example from the pkg-readme file, you can configure the httpd web server with this:
server "smokeping.example.org" {
listen on * port 80
location "/smokeping/smokeping.cgi*" {
fastcgi socket "/run/smokeping.sock"
root "/"
}
}
Your service will be available at the address `http://smokeping.example.org/smokeping/smokeping.cgi`. For this to work, we need to run a separate FastCGI server, fortunately packaged as an OpenBSD service.
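Assuming the rc script shipped by the package is named `smokeping_fcgi` (the pkg-readme is authoritative if your version differs), the commands as root would look like:
rcctl enable smokeping_fcgi
rcctl start smokeping_fcgi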
Note that there is a way to pre-render the whole HTML interface with a cron job, but I don't recommend it, as it will drain a lot of CPU for nothing, unless you have many users viewing the interface who don't need interactive zoom on the graphs. # Conclusion Smokeping is very effective because of the way it renders data: you can easily spot issues in your network that a simple ping or response time wouldn't catch. Please note it's better to have two Smokeping setups at different places, each monitoring the other; otherwise, if a remote host appears flaky, you can't be entirely sure whether the culprit is the Internet access of the Smokeping host, the remote host itself, or a peering issue. Here is the 10-day graph for a device on my LAN that is connected to the network using power line networking. => static/smokeping.png Monitoring graph of a device connected to the LAN using power line networking Don't forget to read `/usr/local/share/doc/pkg-readmes/smokeping` and the official documentation if you want a more complicated setup. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/smokeping.gmi</guid> <link>gemini://perso.pw/blog//articles/smokeping.gmi</link> <pubDate>Sun, 26 Mar 2023 00:00:00 GMT</pubDate> </item> <item> <title>The State forces Google (or Apple) on me</title> <description> <![CDATA[ <pre># Introduction It's rare, but this is a fed-up rant. Needing a training course, and in order to finish the online procedures on a CPF account (Compte Formation Professionnelle), I have to get an "identité numérique +" (digital identity). In principle this is fine: it's a way to create an account while validating the person's identity with an ID document; so far, this is normal and rather well thought out. # The problem The big issue is that once the formalities are done, you have to install the Android / iOS application on your phone, and that's where the trouble starts. => https://play.google.com/store/apps/details?id=fr.laposte.idn&hl=fr&pli=1 Google Play: L'Identité Numérique La Poste Having freed my Android phone from Google thanks to LineageOS, I chose not to install Google Play in order to be 100% degoogled, and I install my applications from the F-droid repository, which covers all my needs. => https://f-droid.org/en/ F-droid project website => https://lineageos.org/ LineageOS project website In my situation, there is a solution to install the (fortunately very rare) applications required by some services: using "Aurora Store" on my phone to download an APK from Google Play (an application's installation file) and install it. No problem there, I was able to install La Poste's program. The problem is that when I launch it, I get this beautiful message: "Error, you must install the application from Google Play", and at that point I can do absolutely nothing but quit the application. => static/identite-numerique.png Error message of the La Poste application on LineageOS without Google services So here I am, stuck: the State forces me to use Google to use its own services 🙄. My options are the following:
on:
schedule:
- cron: '0 2 * * *' # every day at 02h00
# Credits I would like to thank Jonathan Tremesaygues, who wrote most of the GitHub Actions pieces after I shared my idea and how I would implement it with him. => https://jtremesay.org/ Jonathan Tremesaygues's website # Going further Here is a simple script I use to turn a local Linux machine into a Gentoo builder for the box you run it from. It uses a Gentoo stage3 Docker image, populated with packages from the local system and its `/etc/portage/` directory. Note that you have to use `app-misc/resolve-march-native` to generate the compiler command-line parameters that replace `-march=native`, because you want the remote host to build with the correct flags and not with its own `-march=native`; you should also make sure those flags work on the remote system. From my experience, any remote builder newer than your machine should be compatible. => https://tildegit.org/solene/gentoo-remote-builder Tildegit: Example of scripts to build packages on a remote machine for the local machine </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</guid> <link>gemini://perso.pw/blog//articles/github-actions-building-gentoo-packages.gmi</link> <pubDate>Sat, 04 Mar 2023 00:00:00 GMT</pubDate> </item> <item> <title>Lightweight data monitoring using RRDtool</title> <description> <![CDATA[ <pre># Introduction I like my servers to run the least code possible and the fewest services in general; this eases maintenance and leaves room for other things to run. I recently wrote about monitoring software that gathers metrics and renders them, but such software is overkill if you just want to keep track of a single value over time and graph it for visualization. Fortunately, we have an old and robust tool doing the job fine; it's perfectly documented and called RRDtool. => https://oss.oetiker.ch/rrdtool/ RRDtool official website RRDtool stands for "Round Robin Database Tool"; it's a set of programs and a specific file format to gather metrics. The trick with RRD files is that they have a fixed size: when you create one, you need to define how many values you want to store in it, at which frequency, and for how long. This can't be changed after the file creation. In addition, RRD files allow you to create derived time series to keep track of computed values over a longer timespan, but with a lower resolution. Think of the following use case: you want to monitor your home temperature every 10 minutes for the past 48 hours, but you also want to keep track of some information for the past year; you can tell RRD to compute the average temperature for every hour kept for a week, or the average temperature over four hours kept for a month, and the average temperature per day kept for a year. All of this will be fixed size. # Anatomy of a RRD file RRD files can be dumped as XML, which gives a glimpse that may ease the understanding of this special file format. Let's create a file to monitor the battery level of your computer every 10 seconds (the heartbeat of 20 seconds tolerates a late update), keeping the last 5 values; don't focus on understanding the whole command line now:
rrdtool create test.rrd --step 10 DS:battery:GAUGE:20:0:100 RRA:AVERAGE:0.5:1:5
If we dump the created file with `rrdtool dump test.rrd`, we get this result (stripped a bit to make it fit better):
<!-- Round Robin Database Dump -->
<rrd>
<version>0003</version>
<step>10</step> <!-- Seconds -->
<lastupdate>1676569107</lastupdate> <!-- 2023-02-16 18:38:27 CET -->
<ds>
<name> battery </name>
<type> GAUGE </type>
<minimal_heartbeat>20</minimal_heartbeat>
<min>0.0000000000e+00</min>
<max>1.0000000000e+02</max>
<!-- PDP Status -->
<last_ds>U</last_ds> <value>NaN</value> <unknown_sec> 7 </unknown_sec>
</ds>
<!-- Round Robin Archives -->
<rra>
<cf>AVERAGE</cf>
<pdp_per_row>1</pdp_per_row> <!-- 10 seconds -->
<params> <xff>5.0000000000e-01</xff> </params>
<cdp_prep>
<ds>
<primary_value>0.0000000000e+00</primary_value>
<secondary_value>0.0000000000e+00</secondary_value>
<value>NaN</value>
<unknown_datapoints>0</unknown_datapoints>
</ds>
</cdp_prep>
<database>
<!-- 2023-02-16 18:37:40 CET / 1676569060 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:37:50 CET / 1676569070 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:38:00 CET / 1676569080 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:38:10 CET / 1676569090 --> <row><v>NaN</v></row>
<!-- 2023-02-16 18:38:20 CET / 1676569100 --> <row><v>NaN</v></row>
</database>
</rra>
</rrd>
The most important thing to understand here is that we have a "ds" (data series) named battery, of type GAUGE, with no last value (I never updated it), but also an "RRA" (Round Robin Archive) for our average value, containing a timestamp with no associated value for each row. You can see that, internally, our 5 slots already exist with a null value. If I update the file, the first null value will disappear, and a new record will be added at the end with the actual value. # Monitoring a value In this guide, I would like to share my experience using rrdtool to monitor my solar panel power output over the last few hours, which can be easily displayed on my local dashboard. The data is also collected and sent to a Grafana server, but it's not local, and querying it just to display the last values wastes resources and bandwidth. First, you need `rrdtool` to be installed; you don't need anything else to work with RRD files. ## Create the RRD file Creating the RRD file is the trickiest part, because you can't change it afterward. I want to collect a value every 5 minutes (300 seconds); it's an absolute value between 0 and 4000, so we will define a step of 300 seconds to tell the file it must receive a value every 300 seconds. The type of the value will be GAUGE, because it's just a value that doesn't depend on the previous one. If we were monitoring power change over time, we would use DERIVE instead, because it computes the delta between each value. Furthermore, we need to configure the file to give up on a value slot if it's not updated within 600 seconds. Finally, we want to be able to graph each measurement; this can be done by adding an AVERAGE calculated value to the file, but with a resolution of 1 value, with 240 measurements stored. What this means is that each time we add a value to the RRD file, the AVERAGE field is calculated with only that last value as input, and we keep 240 of them, allowing us to graph up to 240 * 5 minutes of data back in time.
rrdtool create solar-power.rrd --step 300 DS:value:GAUGE:600:0:4000 RRA:AVERAGE:0.5:1:240

DS:value:GAUGE:600:0:4000
   ^     ^     ^   ^ ^
   |     |     |   | max value
   |     |     |   min value
   |     |     time without update before the value is considered null
   |     measurement type
   variable name

RRA:AVERAGE:0.5:1:240
    ^       ^   ^ ^
    |       |   | number of values to keep
    |       |   how many previous values are used by the function; 1 means a single value, so it averages itself
    |       (xfiles factor) how many percent of unknown values we accept when computing a value
    function to apply; can be AVERAGE, MAX, MIN, LAST, or mathematical operations
And then, you have your `solar-power.rrd` file created. You can inspect it with `rrdtool info solar-power.rrd` or dump its content with `rrdtool dump solar-power.rrd`. => https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html RRDtool create documentation ## Add values to the RRD file Now that we have prepared the file to receive data, we need to populate it with something useful. This can be done using the command `rrdtool update`.
CURRENT_POWER=$(some-command-returning-a-value)
rrdtool update solar-power.rrd "N:${CURRENT_POWER}"
                                ^ ^
                                | | value of the first field of the RRD file (we created a single field)
                                | when the value has been measured; N means "now"
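To feed the file regularly, a cron entry is enough; a minimal sketch, assuming the two commands above are wrapped in a hypothetical script `/usr/local/bin/solar-update.sh`:

# collect one measurement every 5 minutes, matching the 300 seconds step
*/5 * * * * /usr/local/bin/solar-update.sh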
=> https://oss.oetiker.ch/rrdtool/doc/rrdupdate.en.html RRDtool update documentation ## Graph the content of the RRD file The trickiest, but least problematic, part is generating a usable graph from the data. The operation is not destructive as it doesn't modify the file, so we can experiment a lot without affecting the content. We will generate something simple like the picture below. Of course, you can add a lot more information, colors, axes, legends, etc., but I need my dashboard to stay simple and clean. => ./static/solar-power.svg A diagram displaying solar power over time (on a cloudy day)
rrdtool graph --end now -l 0 --start end-14000s --width 600 --height 300 \
/var/www/htdocs/dashboard/solar.svg -a SVG \
DEF:ds0=/var/lib/rrdtool/solar-power.rrd:value:AVERAGE \
"LINE1:ds0#0000FF:power" \
"GPRINT:ds0:LAST:current value %2.1lf"
I think most flags are explicit; if not, you can look at the documentation. What interests us here are the last three lines. The `DEF` line associates the RRA AVERAGE of the variable `value` in the file `/var/lib/rrdtool/solar-power.rrd` with the name `ds0`, used later in the command line. The `LINE1` line associates a legend and a color with the rendering of this variable. The `GPRINT` line adds text to the legend; here we use the last value of `ds0` and format it with the printf-style string `current value %2.1lf`. => https://oss.oetiker.ch/rrdtool/doc/rrdgraph.en.html RRDtool graph documentation => https://oss.oetiker.ch/rrdtool/doc/rrdgraph_examples.en.html RRDtool graph examples # Conclusion RRDtool is very nice; it's a storage engine for monitoring software such as collectd or Munin, but we can also use it on the spot with simple scripts. However, RRD files have drawbacks: when you start to create many of them it doesn't scale well, generates a lot of I/O, and consumes CPU if you need to render hundreds of pictures. That's why a daemon named `rrdcached` has been created to help mitigate the load, by batching updates to a lot of RRD files in a more sequential way. # Going further I encourage you to look at the official project website; all the other commands can be very useful, and rrdtool also exports data as XML or JSON if needed, which is perfect for plugging it into other software. </pre> ]]> </description> <guid>gemini://perso.pw/blog//articles/rrdtool-light-monitoring.gmi</guid> <link>gemini://perso.pw/blog//articles/rrdtool-light-monitoring.gmi</link> <pubDate>Thu, 16 Feb 2023 00:00:00 GMT</pubDate> </item> </channel> </rss>