πΎ Archived View for www.foo.zone βΊ gemfeed βΊ atom.xml captured on 2022-07-16 at 13:58:16.
β¬ οΈ Previous capture (2022-04-29)
β‘οΈ Next capture (2023-01-29)
-=-=-=-=-=-=-
<?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> <updated>2022-07-07T22:06:15+03:00</updated> <title>foo.zone feed</title> <subtitle>To be in the .zone!</subtitle> <link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" /> <link href="gemini://foo.zone/" /> <id>gemini://foo.zone/</id> <entry> <title>Sweating the small stuff - Tiny projects of mine</title> <link href="gemini://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff.gmi" /> <id>gemini://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff.gmi</id> <updated>2022-06-15T08:47:44+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>This blog post is a bit different from the others. It consists of multiple but smaller projects worth mentioning. I got inspired by Julia Evan's 'Tiny programs' blog post and the side projects of The Sephist, so I thought I would also write a blog posts listing a couple of small projects of mine:. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Sweating the small stuff - Tiny projects of mine</h1> <pre> _ /_/_ .'''. =O(_)))) ...' `. jgs \_\ `. .''' `..' </pre><br /> <p class="quote"><i>Published by Paul at 2022-06-15, last updated at 2022-06-18</i></p> <p>This blog post is a bit different from the others. It consists of multiple but smaller projects worth mentioning. I got inspired by Julia Evan's "Tiny programs" blog post and the side projects of The Sephist, so I thought I would also write a blog posts listing a couple of small projects of mine:</p> <a class="textlink" href="https://jvns.ca/blog/2022/03/08/tiny-programs/">Tiny programs</a><br /> <a class="textlink" href="https://thesephist.com/projects/">The Sephist's project list</a><br /> <p>Working on tiny projects is a lot of fun as you don't need to worry about any standards or code reviews and you decide how and when you work on it. There aren't restrictions regarding technologies used. You are likely the only person working on these tiny projects and that means that there is no conflict with any other developers. This is complete freedom :-).</p> <p>But before going through the tiny projects let's take a paragraph for the <span class="inlinecode">1y</span> anniversary retrospective.</p> <h2><span class="inlinecode">1y</span> anniversary</h2> <p>It has been one year since I started posting regularly (at least once monthly) on this blog again. It has been a lot of fun (and work) doing so for various reasons:</p> <ul> <li>I practice English writing (I am not a native speaker). I am far from being a novelist, but this blog helps improves my writing skills. I also tried out tools like Grammarly.com and Languagetool.org and also worked with <span class="inlinecode">:spell</span> in Vim or the LibreOffice checker. This post was checked with the <span class="inlinecode">write-better</span> Node application. </li> <li>I force myself to "finish" some kind of project worth writing about every month. If its not a project, then its still a topic which requires research and deep thinking. Producing 2k words of text can actually be challenging.</li> <li>It's fun to rely on KISS (keep it simple & stupid) tools. E.g. use of Gemtexter and not WordPress, use of Vim instead of an office suite or a rich web editor.</li> </ul> <p>Retrospectively, these have been the most popular blog posts of mine over the last year:</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-09-12-keep-it-simple-and-stupid.html">Keep it simple and stupid</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2022-04-10-creative-universe.html">Creative universe</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.html">Bash Golf series</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2021-12-26-how-to-stay-sane-as-a-devops-person.html">How to stay sane as a DevOps person</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice.html">Perl is still a great choice</a><br /> <p>But now, let's continue with the small projects worth mentioning :-)</p> <h2>Static photo album generator</h2> <p><span class="inlinecode">photoalbum.sh</span> is a minimal static HTML photo album generator. I use it to drive "The Irregular Ninja" site and for some ad-hoc (personal) albums to share photos with the family and friends.</p> <a class="textlink" href="https://codeberg.org/snonux/photoalbum">https://codeberg.org/snonux/photoalbum</a><br /> <h3>The Irregular Ninja</h3> <p>Photography is one of my casual hobbies. I love to capture interesting perspectives and motifs. I love to walk new streets and neighbourhoods I never walked before so I can capture those unexpected motifs, colours and moments. Unfortunately, because of time constraints (and sometime weather constraints), I do that on a pretty infrequent basis.</p> <a href="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/ninja.jpg"><img src="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/ninja.jpg" /></a><br /> <p>More than 10 years ago I wrote the bespoke small static photo album generator in Bash <span class="inlinecode">photoalbum.sh</span> which I recently refactored to a modern Bash coding style and also freshened up the Cascading Style Sheets. Last but not least, the new domain name <span class="inlinecode">irregular.ninja</span> has been registered.</p> <p>The thumbnails are presented in a random order and there are also random CSS effects for each preview. There's also a simple background blur for each page generated. And that's all in less than 300 lines of Bash code! The script requires ImageMagick (available for all common Linux and *BSD distributions) to be installed.</p> <p>As you can see, there is a lot of randomization and irregularity going on. Thus, the name "Irregular Ninja" was born.</p> <a class="textlink" href="https://irregular.ninja">https://irregular.ninja</a><br /> <p>I only use a digital compact camera or a smartphone to take the photos. I don't like the idea of carrying around a big camera with me "just in case" so I keep it small and simple. The best camera is the camera you have with you. :-)</p> <p>I hope you like this photo site. It's worth checking it out again around once every other month!</p> <h2>Random journal page extractor</h2> <p>I bullet journal. I write my notes into a Leuchtturm paper notebook. Once full, I am scanning it to a PDF file and archive it. As of writing this, I am at journal #7 (each from 123 up to 251 pages in A5). It means that there is a lot of material already.</p> <p>Once in a while I want to revisit older notes and ideas. For that I have written a simple Bash script <span class="inlinecode">randomjournalpage.sh</span> which randomly picks a PDF file from a folder and extracts 42 pages from it at a random page offset and opens them in a PDF viewer (Evince in this case, as I am a GNOME user). </p> <a class="textlink" href="https://codeberg.org/snonux/randomjournalpage">https://codeberg.org/snonux/randomjournalpage</a><br /> <p>There's also a weekly <span class="inlinecode">CRON</span> job on my servers to send me a reminder that I might want to read in my old journals again. My laptop also runs this script each time it boots and saves the output to a NextCloud folder. From there, it's synchronized to the NextCloud server so I can pick it up from there with my smartphone later when I am "on the road".</p> <h2>Global uptime records statistic generator</h2> <p><span class="inlinecode">guprecords</span> is a Perl script which reads multiple <span class="inlinecode">uprecord</span> files (produced by <span class="inlinecode">uptimed</span> - a widely available daemon for recording server uptimes) and generates uptime statistics of multiple hosts combined. I keep all the record files of all my personal computers in a Git repository (I even keep the records of the boxes I don't own or use anymore) and there's already quite a collection of it. It looks like this:</p> <pre> β― perl ~/git/guprecords/src/guprecords --indir=./stats/ --count=20 --all Pos | System | Kernel | Uptime | Boot time 1 | sun | FreeBSD 10.1-RELEA.. | 502d 03:29:19 | Sun Aug 16 15:56:40 2015 2 | vulcan | Linux 3.10.0-1160... | 313d 13:19:39 | Sun Jul 25 18:32:25 2021 3 | uugrn | FreeBSD 10.2-RELEASE | 303d 15:19:35 | Tue Dec 22 21:33:07 2015 4 | uugrn | FreeBSD 11.0-RELEA.. | 281d 14:38:04 | Fri Oct 21 15:22:02 2016 5 | deltavega | Linux 3.10.0-957.2.. | 279d 11:15:00 | Sun Jun 30 11:42:38 2019 6 | vulcan | Linux 3.10.0-957.2.. | 279d 11:12:14 | Sun Jun 30 11:43:41 2019 7 | deltavega | Linux 3.10.0-1160... | 253d 04:42:22 | Sat Apr 24 13:34:34 2021 8 | host0 | FreeBSD 6.2-RELEAS.. | 240d 02:23:23 | Wed Jan 31 20:34:46 2007 9 | uugrn | FreeBSD 11.1-RELEA.. | 202d 21:12:41 | Sun May 6 18:06:17 2018 10 | tauceti | Linux 3.2.0-4-amd64 | 197d 18:45:40 | Mon Dec 16 19:47:54 2013 11 | pluto | Linux 2.6.32-5-amd64 | 185d 11:53:04 | Wed Aug 1 07:34:10 2012 12 | sun | FreeBSD 10.3-RELEA.. | 164d 22:31:55 | Sat Jul 22 18:47:21 2017 13 | vulcan | Linux 3.10.0-1160... | 161d 07:08:43 | Sun Feb 14 10:05:38 2021 14 | sun | FreeBSD 10.3-RELEA.. | 158d 21:18:36 | Sat Jan 27 10:18:57 2018 15 | uugrn | FreeBSD 11.1-RELEA.. | 157d 20:57:24 | Fri Nov 3 05:02:54 2017 16 | tauceti-f | Linux 3.2.0-3-amd64 | 150d 04:12:38 | Mon Sep 16 09:02:58 2013 17 | tauceti | Linux 3.2.0-4-amd64 | 149d 09:21:43 | Mon Aug 11 09:47:50 2014 18 | pluto | Linux 3.2.0-4-amd64 | 142d 02:57:31 | Mon Sep 8 01:59:02 2014 19 | tauceti-f | Linux 3.2.0-3-amd64 | 132d 22:46:26 | Mon May 6 11:11:35 2013 20 | keppler-16b | Darwin 13.4.0 | 131d 08:17:12 | Thu Jun 11 10:44:25 2015 </pre><br /> <p>It can also sum up all uptimes for each host to generate a total per host uptime top list:</p> <pre> β― perl ~/git/guprecords/src/guprecords --indir=./stats/ --count=20 --total Pos | System | Kernel | Uptime | 1 | uranus | Linux 5.4.17-200.f.. | 1419d 19:05:39 | 2 | sun | FreeBSD 10.1-RELEA.. | 1363d 11:41:14 | 3 | vulcan | Linux 3.10.0-1160... | 1262d 20:27:48 | 4 | uugrn | FreeBSD 10.2-RELEASE | 1219d 15:10:16 | 5 | deltavega | Linux 3.10.0-957.2.. | 1115d 06:33:55 | 6 | pluto | Linux 2.6.32-5-amd64 | 1086d 10:44:05 | 7 | tauceti | Linux 3.2.0-4-amd64 | 846d 12:58:21 | 8 | tauceti-f | Linux 3.2.0-3-amd64 | 625d 07:16:39 | 9 | host0 | FreeBSD 6.2-RELEAS.. | 534d 19:50:13 | 10 | keppler-16b | Darwin 13.4.0 | 448d 06:15:00 | 11 | tauceti-e | Linux 3.2.0-4-amd64 | 415d 18:14:13 | 12 | moon | Darwin 18.7.0 | 326d 11:21:42 | 13 | callisto | Linux 4.0.4-303.fc.. | 303d 12:18:24 | 14 | alphacentauri | FreeBSD 10.1-RELEA.. | 300d 20:15:00 | 15 | earth | Linux 5.13.14-200... | 289d 08:05:05 | 16 | makemake | Linux 5.11.9-200.f.. | 286d 21:53:03 | 17 | london | Linux 3.2.0-4-amd64 | 258d 15:10:38 | 18 | fishbone | OpenBSD 4.1 .. | 223d 05:55:26 | 19 | sagittarius | Darwin 15.6.0 | 198d 23:53:59 | 20 | mars | Linux 3.2.0-4-amd64 | 190d 05:44:21 | </pre><br /> <a class="textlink" href="https://codeberg.org/snonux/guprecords">https://codeberg.org/snonux/guprecords</a><br /> <p>This all is of no real practical use but fun!</p> <h2>Server configuration management</h2> <p>The <span class="inlinecode">rexfiles</span> project contains all Rex files for my (personal) server setup automation. A <span class="inlinecode">Rexfile</span> is written in a Perl DSL run by the Rex configuration management system. It's pretty much KISS and that's why I love it. It suits my personal needs perfectly. </p> <a class="textlink" href="https://codeberg.org/snonux/rexfiles">https://codeberg.org/snonux/rexfiles</a><br /> <a class="textlink" href="https://www.rexify.org">https://www.rexify.org</a><br /> <p>This is an E-Mail I posted to the Rex mailing list:</p> <p class="quote"><i>Hi there! I was searching for a simple way to automate my personal OpenBSD setup. I found that configuration management systems like Puppet, Salt, Chef, etc.. were too bloated for my personal needs. So for a while I was configuring everything by hand. At one point I got fed up and started writing Shell scripts. But that was not the holy grail so that I looked at Ansible. I found that Ansible had some dependencies on Python on the target machine when you want to use all the features. Furthermore, I am not really familiar with Python. But then I remembered that there was also Rex. It's written in my beloved Perl. Also, OpenBSD comes with Perl in the base system out of the box which makes it integrate better than all my scripts (automation and also scripts deployed via the automation to the system) are all in the same language. Rex may not have all the features like other configuration management systems, but its easy to work-around or extend when you know Perl. Thanks!</i></p> <h2>Fancy SSH execution loop</h2> <p><span class="inlinecode">rubyfy</span> is a fancy SSH loop wrapper written in Ruby for running shell commands on multiple remote servers at once. I also forked this project for work (under a different name) where I added even more features such as automatic server discovery. It's used by many colleagues on a frequent basis. Here are some examples:</p> <pre> # Run command 'hostname' on server foo.example.com ./rubyfy.rb -c 'hostname' <<< foo.example.com # Run command 'id' as root (via sudo) on all servers listed in the list file # Do it on 10 servers in parallel ./rubyfy.rb --parallel 10 --root --command 'id' < serverlist.txt # Run a fancy script in background on 50 servers in parallel ./rubyfy.rb -p 50 -r -b -c '/usr/local/scripts/fancy.zsh' < serverlist.txt # Grep for specific process on both servers and write output to ./out/grep.txt echo {foo,bar}.example.com | ./rubyfy.rb -p 10 -c 'pgrep -lf httpd' -n grep.txt # Reboot server only if file /var/run/maintenance.lock does NOT exist! echo foo.example.com | ./rubyfy.rb --root --command reboot --precondition /var/run/maintenance.lock </pre><br /> <a class="textlink" href="https://codeberg.org/snonux/rubyfy">https://codeberg.org/snonux/rubyfy</a><br /> <h2>A KISS dynamic DNS solution</h2> <p><span class="inlinecode">dyndns</span> is a tiny shell script which implements "your" own DynDNS service. It relies on SSH access to the authoritative DNS server and the <span class="inlinecode">nsupdate</span> command. There is really no need to use any of the "other" free DynDNS services out there.</p> <p>Syntax (this must run from the client connecting to the DNS server through SSH): </p> <pre> ssh dyndns@dyndnsserver /path/to/dyndns-update \ your.host.name. TYPE new-entry TIMEOUT </pre><br /> <p>This is a real world example: </p> <pre> ssh dyndns@dyndnsserver /path/to/dyndns-update \ local.buetow.org. A 137.226.50.91 30 </pre><br /> <a class="textlink" href="https://codeberg.org/snonux/dyndns">https://codeberg.org/snonux/dyndns</a><br /> <h2>CPU information gatherer for Linux</h2> <p>This is a tiny GNU Awk script for Linux which displays information about the CPU. All what it does is presenting <span class="inlinecode">/proc/cpuinfo</span> in an easier to read way. The output is somewhat more compact than the standard <span class="inlinecode">lscpu</span> command you find commonly on Linux distributions.</p> <pre> β― ./cpuinfo cpuinfo (c) 1.0.2 Paul Buetow 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz GenuineIntel 12288 KB cache p = 001 Physical processors c = 004 Cores s = 008 Siblings (Hyper-Threading enabled if s != c) v = 008 [v = p*c*(s != c ? 2 : 1)] Total logical CPUs Hyper-Threading is enabled 0003000 MHz each core 0012000 MHz total 0005990 Bogomips each processor (including virtual) 0023961 Bogomips total </pre><br /> <a class="textlink" href="https://codeberg.org/snonux/cpuinfo">https://codeberg.org/snonux/cpuinfo</a><br /> <h2>Show differences of two files over the network</h2> <p>This is a shell wrapper to use the standard diff tool over the network to compare a file between two computers. It uses NetCat for the network part and also encrypts all traffic using OpenSSL. This is how its used:</p> <p>1. Open two terminal windows and login to two different hosts (you could use ClusterSSH or <span class="inlinecode">tmux</span> here). 2. Run on the first host <span class="inlinecode">netdiff otherhost.example.org /file/to/diff.txt</span> and run on the second host <span class="inlinecode">netdiff firsthost.example.org /file/to/diff.txt</span>. 3. You then will see the file differences.</p> <a class="textlink" href="https://codeberg.org/snonux/netdiff">https://codeberg.org/snonux/netdiff</a><br /> <h2>Delay sending out E-Mails with Mutt</h2> <p>This is a shell script for the Mutt email client for delaying sending out E-Mails. For example, you want to write an email on Saturday but don't want to bother the recipient earlier than Monday. It relies on CRON.</p> <a class="textlink" href="https://codeberg.org/snonux/muttdelay">https://codeberg.org/snonux/muttdelay</a><br /> <h2>Graphical UI for sending text messages</h2> <p><span class="inlinecode">jsmstrade</span> is a minimalistic graphical Java swing client for sending SMS messages over the SMStrade service.</p> <a href="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/jsmstrade.png"><img src="https://foo.zone/gemfeed/2022-06-15-sweating-the-small-stuff/jsmstrade.png" /></a><br /> <a class="textlink" href="https://codeberg.org/snonux/jsmstrade">https://codeberg.org/snonux/jsmstrade</a><br /> <a class="textlink" href="https://smstrade.de">https://smstrade.de</a><br /> <h2>IPv6 and IPv4 connectivity testing site</h2> <p><span class="inlinecode">ipv6test</span> is a quick and dirty Perl CGI script for testing whether your browser connects via IPv4 or IPv6. It requires you to setup three sub-domains: One reachable only via IPv4 (e.g. <span class="inlinecode">test4.ipv6.buetow.org</span>), another reachable only via IPv6 (e.g. <span class="inlinecode">test6.ipv6.buetow.org</span>) and the main one reachable through both protocols (e.g. <span class="inlinecode">ipv6.buetow.org</span>).</p> <p>I don't have it running on any of my servers at the moment. This means that there is no demo to show now. Sorry!</p> <h2>List open Jira tickets in the terminal</h2> <p><span class="inlinecode">japi</span> s a small Perl script for listing open Jira issues. It might be broken by now as the Jira APIs may have changed. Sorry! But feel free to fork and modernize it. :-)</p> <a class="textlink" href="https://codeberg.org/snonux/jsmstrade">https://codeberg.org/snonux/jsmstrade</a><br /> <p> </p> <h2>Debian running on "your" Android phone</h2> <p>Debroid is a tutorial and a set of scripts to install and to run a Debian <span class="inlinecode">chroot</span> on an Android phone.</p> <a class="textlink" href="https://foo.zone/gemfeed/2015-12-05-run-debian-on-your-phone-with-debroid.html">Check out my previous post about it</a><br /> <p>I am not using Debroid anymore as I have switched to Termux now.</p> <a class="textlink" href="https://termux.com">https://termux.com</a><br /> <h2>Perl service framework</h2> <p>PerlDaemon is a minimal daemon for Linux and other Unix like operating systems programmed in Perl. It is a minimal but pretty functional and fairly generic service framework. This means that it does not do anything useful other than providing a framework for starting, stopping, configuring and logging. To do something useful, a module (written in Perl) must be provided.</p> <a class="textlink" href="https://foo.zone/gemfeed/2011-05-07-perl-daemon-service-framework.html">Checkout my previous post about it</a><br /> <h2>More</h2> <p>There are more projects on my Codeberg page but they aren't as tiny as the ones mentioned in this post or aren't finished yet so I won't bother listing them here. However, there also a few more scripts used frequently by me (not publicly accessible (yet?)) which I would like to mention here:</p> <h3>Work time tracker</h3> <p><span class="inlinecode">worktime.rb</span>, for example, is a command line Ruby script I use to track my time spent working. This is to make sure that I don't overwork (in particular useful when working from home). It also generates some daily and weekly stats and carries over work time (surpluses or minuses) to the next work day, week or even year.</p> <p>It has some special features such as tracking time for self-improvement/development, days off and time spent at the lunch break and time spent on Pet Projects.</p> <p>An example weekly report looks like this (I often don't track my lunch time but what I do instead I stop the work timer when I go out for lunch and start the work timer once back at the desk):</p> <pre> Mon 20211213 50: work:5.92h Tue 20211214 50: work:7.47h lunch:0.50h pet:0.42h Wed 20211215 50: work:8.86h pet:0.50h Thu 20211216 50: work:8.02h pet:0.50h Fri 20211217 50: work:9.81h * Sat 20211218 50: work:0.00h selfdevelopment:1.00h * Sun 20211219 50: work:2.08h pet:1.00h selfdevelopment:-2.08h ================================================ balance:0.06h work:42.15h lunch:0.50h pet:2.42h selfdevelopment:-1.08h buffer:8.38h </pre><br /> <p>All I do when I start work is to run the <span class="inlinecode">wtlogin</span> command and after finishing work to run the <span class="inlinecode">wtlogout</span> command. My shell will remind me when I work without having logged in. It uses a simple JSON database which is editable with <span class="inlinecode">wtedit</span> (this opens the JSON in Vim). The report shown above can be generated with <span class="inlinecode">wtreport</span>. Any out-of-bounds reporting can be added with the <span class="inlinecode">wtadd</span> command.</p> <h3>Password and document store</h3> <p><span class="inlinecode">geheim.rb</span> is my personal password and document store ("geheim" is the German word for secret). It's written in Ruby and heavily relies on Git, FZF (for search), Vim and standard encryption algorithms. Other than the standard <span class="inlinecode">pass</span> Unix password manager, <span class="inlinecode">geheim</span> also encrypts the file names and password titles.</p> <p>The tool is command line driven but also provides an interactive shell when invoked with <span class="inlinecode">geheim shell</span>. It also works on my Android phone via Termux so I have all my documents and passwords always with me. </p> <h3>Backup procedure</h3> <p><span class="inlinecode">backup</span> is a Bash script which does run once daily (or every time on boot) on my home FreeBSD NAS server and performs backup related tasks such as creating a local backup of my remote NextCloud instance, creating encrypted (incremental) ZFS snapshots of everything what's stored on the NAS and synchronizes (via <span class="inlinecode">rsync</span>) backups to a remote cloud storage. It also can synchronize backups to a local external USB drive.</p> <a class="textlink" href="https://foo.zone/gemfeed/2016-04-03-offsite-backup-with-zfs.html">Check out my offsite backup series</a><br /> <h2>konpeito.media</h2> <p>Here's a bonus...</p> <pre> β β β β β β β β β βββββ βββββ βββββ βββββββββββββββ β β β β ββ ββ β β β βββββ β β β ββ ββββββ ββ β βββββββ ββββββ β βββββββ ββββββ β βββββββββ β β ββ β β β ββ β βββββββ ββββββββ β β β β β βββββββββββββ β ββ ββ ββ ββ β βββ β ββ β β β ββ ββ βββββββββββββββββ β β ββββ β β β ββββ β β βββ </pre><br /> <p>*THIS ISN'T MY PROJECT* but I found KONPEITO an interesting Gemini capsule. It's a quarterly released Low-Fi music mix tape distributed only through Gemini (and not the web). </p> <a class="textlink" href="gemini://konpeito.media">gemini://konpeito.media</a><br /> <p>If you wonder what Gemini is:</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace.html">Welcome to the Geminispae</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Perl is still a great choice</title> <link href="gemini://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice.gmi" /> <id>gemini://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice.gmi</id> <updated>2022-05-27T07:50:12+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>Perl (the Practical Extraction and Report Language) is a battle-tested, mature, multi-paradigm dynamic programming language. Note that it's not called PERL, neither P.E.R.L. nor Pearl. 'Perl' is the name of the language and 'perl' the name of the interpreter or the interpreter command.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Perl is still a great choice</h1> <a href="https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice/regular_expressions.png"><img src="https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice/regular_expressions.png" /></a><br /> <p class="quote"><i>Published by Paul at 2022-05-27, Comic source: XKCD</i></p> <p>Perl (the Practical Extraction and Report Language) is a battle-tested, mature, multi-paradigm dynamic programming language. Note that it's not called PERL, neither P.E.R.L. nor Pearl. "Perl" is the name of the language and "perl" the name of the interpreter or the interpreter command.</p> <p>Unfortunately (it makes me sad), Perl's popularity has been declining over the last years as Google trends shows:</p> <a href="https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice/googletrendsperl.jpg"><img src="https://foo.zone/gemfeed/2022-05-27-perl-is-still-a-great-choice/googletrendsperl.jpg" /></a><br /> <p>So why is that? Once the de-facto standard super-glue language for the web nowadays seems to have a bad repetition. Often, people state:</p> <ul> <li>Perl is a write-only language. Nobody can read Perl code.</li> <li>Perl? Isn't it abandoned? It's still at version 5!</li> <li>Why use Perl as there are better alternatives?</li> <li>Why all the sigils? It looks like an exploding ASCII factory!!</li> </ul> <h2>Write-only language</h2> <p>Is Perl really a write-only language? You have to understand that Perl 5 was released in 1994 (28 years ago as of this writing) and when we refer to Perl we usually mean Perl 5. That's many years, and there are many old scripts not following the modern Perl best practices (as they didn't exist yet). So yes, legacy scripts may be difficult to read. Japanese may be difficult to read too if you don't know Japanese, though.</p> <p>To come back to the question: Is Perl a write-only language? I don't think so. Like in any other language, you have to apply best practices in order to keep your code maintainable. Some other programming languages enforce best practices, but that makes these languages less expressive. Perl follows the principles "there is more than one way to do it" (aka TIMTOWDI) and "making easy things easy and hard things possible".</p> <p>Perl gives the programmer more flexibility in how to do things, and this results in a stronger learning curve than for lesser expressive languages like for example Go or Python. But, like in everything in life, common sense has to be applied. You should not take TIMTOWDI to the extreme in a production piece of code. In my personal opinion, it is also more satisfying to program in an expressive language.</p> <p>Some good books on "good" Perl I can recommend are:</p> <a class="textlink" href="http://modernperlbooks.com">Modern Perl</a><br /> <a class="textlink" href="https://hop.perl.plover.com">Higher Order Perl</a><br /> <p>Due to Perl's expressiveness you will find a lot of obscure code in the interweb in form of obfuscation, fancy email signatures (JAPHs), art, polyglots and even poetry in Perl syntax. But that's not what you will find in production code. That's only people having fun with the language which is different to "getting things done". The expressiveness is a bonus. It makes the Perl programmers love Perl.</p> <a class="textlink" href="https://en.wikipedia.org/wiki/Just_another_Perl_hacker">JAPH</a><br /> <a class="textlink" href="http://www.cpan.org/misc/japh">http://www.cpan.org/misc/japh</a><br /> <a class="textlink" href="https://www.perlmonks.org/index.pl?next=20;node_id=1590">Perl Poetry</a><br /> <p>Even I personally have written some poetry in Perl and experimented with a polyglot script:</p> <a class="textlink" href="https://foo.zone/gemfeed/2008-06-26-perl-poetry.html">My very own Perl Poetry</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2014-03-24-the-fibonacci.pl.c-polyglot.html">A Perl-Raku-C polyglot generating the Fibonacci sequence</a><br /> <p>This all doesn't mean that you can't "get things done" with Perl. Quite the opposite is the case. Perl is a very pragmatic programming language and is suitable very well for rapid prototyping and any kind of small to medium-sized scripts and programs. You can write large enterprise scale application in Perl too, but that wasn't the original intend of why Perl was invented (more on that later).</p> <h2>Is Perl abandoned?</h2> <p>As I pointed out in the previous section, Perl 5 is around for quite some time without any new major version released. This can lead to the impression that development is not progressing and that the project is abandoned. Nothing can be further from the truth. Perl 5.000 was released in 1994 and the latest version (as of this writing) Perl 5.34.1 was released two months ago in 2022. You can check the version history on Wikipedia. You will notice releases being made regularly:</p> <a class="textlink" href="https://en.wikipedia.org/wiki/Perl_5_version_history">Perl 5 version history</a><br /> <p>As you can see, Perl 5 is under active development. Actually, Perl is a family of two high-level, general-purpose, interpreted, dynamic programming languages. "Perl" refers to Perl 5, but from 2000 to 2019 it also referred to its redesigned "sister language", Perl 6, before the latter's name was officially changed to Raku in October 2019 as the differences between Perl 5 and Perl 6 were too groundbreaking. Raku would be a different topic (mostly out of scope of this blog article) but I at least wanted it to mention here. In my opinion, Raku is the "most powerful" programming language out there (I recently started learning it and intend to use it for some of my future personal programming projects):</p> <a class="textlink" href="https://raku.org">The Raku Programming Language</a><br /> <p>So it means that Perl and Raku now exist in parallel. They influence each other, but are different programming languages now. So why not just all use Raku instead of Perl? There are still a couple of reasons of why to choose Perl over Raku:</p> <ul> <li>Many programmers already know Perl and many scripts are already written in Perl. It's possible to call Perl code from Raku (either inline or as a library) and it is also possible to auto-convert Perl code into Raku code, but that's either a workaround or involves some kind of additional work.</li> <li>Perl 5 comes with a great backwards compatibility. Perl scripts from 5.000 will generally still work on a recent version of Perl. New features usually have to be enabled via a so-called "use pragmas". For example, in order to enable sub signatures, "use signatures;" has to be specified.</li> <li>Perl is pre-installed almost everywhere. Fancy running a quick one-off script? In almost all cases, there's no need to install Perl first - it's already there on almost any Linux or *BSD or Unix or other Unix like operating system!</li> <li>Perl has been ported to "zillions" of platforms. One day I found myself on a VMS box. Perl doesn't come installed by default on VMS, but the admin installed Perl there already. The whole operating system was very strange to me, but I was able to write "shell scripts" in Perl and became productive pretty quickly on VMS without knowing almost anything about VMS :-).</li> <li>Perl is reliable. It has been proven itself "millions" of times, over and over again. Large enterprises, such as booking.com, heavily rely on Perl. Did you know that the package manager of the OpenBSD operating system is programmed in Perl, too?</li> <li>Perl is a great language to program in (given that you follow the modern best practices). Don't get confused when Perl is doing some things differently than other programming languages.</li> </ul> <a class="textlink" href="https://perldoc.perl.org/feature">Perl feature pragmas</a><br /> <a class="textlink" href="https://www.OpenBSD.org">The OpenBSD Operating System</a><br /> <a class="textlink" href="https://news.ycombinator.com/item?id=23360338">Why does OpenBSD still include Perl in its base installation?</a><br /> <p>The renaming of Perl 6 to Raku has now opened the door for a future Perl 7. As far as I understand, Perl 7 will be Perl 5 but with modern features enabled by default (e.g. pragmas "use strict; use warnings; use signatures;" and so on. Also, the hope is that a Perl 7 with modern standards will attract more beginners. There aren't many Perl jobs out there nowadays. That's mostly due to Perl's bad (bad for no real reasons) repetition.</p> <a class="textlink" href="https://www.perl.com/article/announcing-perl-7/">Announcing Perl 7</a><br /> <a class="textlink" href="http://blogs.perl.org/users/psc/2022/05/what-happened-to-perl-7.html">What happened to Perl 7? (maybe have to use "use v7;")</a><br /> <h2>Why use Perl as there are better alternatives?</h2> <p>Here, common sense must be applied. I don't believe there is anything like "the perfect" programming language. Everyone has got his preferred (or a set of preferred) programming language to chose from. All programming languages come with their own set of strengths and weaknesses. These are the strengths making Perl shine, and you (technically) don't need to bother to look for "better" alternatives:</p> <ul> <li>Perl is better than Shell/awk/sed scripts. There's a point where shell scripts become fairly complex. The next step-up is to switch to Perl. There are many different versions of shells and awk and sed interpreters. Do you always know which versions (mawk, nawk, gawk, sed, gsed, ...) are currently installed? These commands aren't fully compatible to each other. However, there is only one Perl 5. Simply: Perl is faster, more powerful, more expressive than any shell script can ever be, and it is also extendible through CPAN. Perl can directly talk to databases, which shell scripts can't.</li> <li>Perl code tends to be compact so that it's much better suitable for "shell scripting" and quick "one-liners" than other languages. In my own experience: Ruby and Python code tends to blow up quickly. It doesn't mean that Ruby and Python are not suitable for this task, but I think Perl does much better.</li> <li>Perl 5 has proven itself for decades and is a very stable/robust language. It is a battle-tested and mature as something can ever become.</li> <li>Perl is the reference standard for regular expressions. Even so much that there is a PCRE library (Perl Compatible Regular Expressions) used by many other languages now. Perl fully integrates regular expression syntax into the language, which doesn't feel like an odd add-on like in most other languages.</li> <li>Perl 5 is the master of text processing (well, maybe after Raku now. But you might not have the latest Raku available everywhere). The chief objective of developing the language was for text processing, and this is where Perl (Practical extraction and report language) really shines.</li> <li>Perl is a "deep" language. That means Perl got a lot of features and syntactic sugar and magic. Depending on the perspective, this could be interpreted as a downside too. But IMHO mastery of a "deep" language brings big rewards. The code can be very compact, and it is fun to code in it.</li> <li>Perl is the only language I know which can do "taint checking". Running a script in taint mode makes Perl sanitize all external input and that's a great security feature. Ruby used to have this feature too, but it got removed (as I understand there were some problems with the implementation not completely safe and it was easier just to remove it from the language than to fix it).</li> </ul> <p>About the first point, using Perl for better "shell" scripts was actually the original intend of why Perl was invented in the first place.</p> <a class="textlink" href="https://nostarch.com/perloneliners">Perl one-liners</a><br /> <a class="textlink" href="http://regex.info/book.html">Mastering Regular Expressions</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/Taint_checking">Taint checking</a><br /> <p>Here are some reasons why not to chose Perl and look for "better" alternatives:</p> <ul> <li>If performance is your main objectives, then Perl might not be the language to use. Perl is a dynamic interpreted language, and it will generally never be as fast as statically typed languages compiled to native binaries (e.g. C/C++/Rust/Haskell) or statically typed languages run in a VM with JIT (e.g. Java) or gradually typed languages run in a VM (e.g. Raku) or languages like Golang (statically typed, compiled to a binary but still with a runtime in the binary). Perl might be still faster than the other language listed here in certain circumstances (e.g. faster startup time than Java), but usually it's not. It's not a problem of Perl, it's a problem of all dynamic scripting languages including Python, Ruby, ....</li> <li>Don't use Perl (just yet) if you want to code object-oriented. Perl supports OOP, but it feels clunky and odd to use (blessed references to any data types are objects) and doesn't support real encapsulation out of the box. There are many (many) extensions available on CPAN to make OOP better, but that's totally fragmented. The most popular extension, Moose, comes with a huge dependency tree. But wait for Perl 7. It will maybe come with a new object system (an object system inspired by Raku).</li> <li>It's possible to write large programs in Perl (make difficult things possible), but it might not be the best choice here. This also leads back to the clunky object system Perl has. You could write your projects in a procedural or functional style (Perl perfectly fits here), but OOP seems to be the gold standard for large projects nowadays. Functional programming requires a different mindset, and pure procedural programming lacks abstractions.</li> <li>Apply common sense. What is the skill set your team has? What's already widely used and supported at work? Which languages comes with the best modules for the things you want to work on? Maybe Python is the answer (better machine learning modules). Maybe Perl is the better choice (better Bioinformatic modules). Perhaps Ruby is already the de-facto standard at work and everyone knows at least a little Ruby (as it happened to be at my workplace) and Ruby is "good enough" for all the tasks already. But that's not a hindrance to throw in a Perl one-liner once in a while :P.</li> </ul> <a class="textlink" href="https://gist.github.com/Ovid/68b33259cb81c01f9a51612c7a294ede">Cor - A minimal object system for the Perl core - proposal</a><br /> <h2>Why all the sigils? It looks like an exploding ASCII factory!!</h2> <p>The sigils $ @ % & (where Perl is famously known for) serve a purpose. They seem confusing at first, but they actually make the code better readable. $scalar is a scalar variable (holding a single value), @array is an array (holding a list of values), %hash holds a list of key-value pairs and &sub is for subroutines. A given variable $ref can also hold reference to something. @$arrayref dereferences a reference to an array, %$hashref to a hash, $scalarref to a scalar, &$subref dereferences a referene to a subroutine, etc. That can be encapsulated as deep as you want. (This paragraph only scratched the surface here of what Perl can do, and there is a lot of syntactic sugar not mentioned here).</p> <p>In most other programming languages, you won't know instantly what's the "basic type" of a given variable without looking at the variable declaration or the variable name (If named intelligently, e.g. a variable name containing a list of socks is "sock_list"). Even Ruby makes some use of sigils (@ @@ an $), but that's for a different purpose than in Perl (in Ruby it is about object scope, class scope and global scope). Raku uses all the sigils Perl uses plus an additional bunch of twigils, e.g. $.foo for a scalar object variable with public accessors, $!foo for a private scalar object variable, @.foo, @!foo, %.foo, %!foo and so on. Sigils (and twigils) are very convenient once you get used to them. Don't let them scare you off - they are there to help you!</p> <a class="textlink" href="https://www.perl.com/article/on-sigils/">https://www.perl.com/article/on-sigils/</a><br /> <h2>Where do I personally still use perl?</h2> <ul> <li>I use Rexify for my OpenBSD server automation. Rexify is a configuration management system developed in Perl with similar features to Ansible but less bloated. It suits my personal needs perfectly.</li> <li>I have written a couple of smaller to medium-sized Perl scripts which I (mostly) still use regularly. You can find them on my Codeberg page.</li> <li>My day-to-day workflow heavily relies on "ack-grep". Ack is a tool developed in Perl aimed at programmers and can be used for quick searches on source code at the command line.</li> <li>I aim to leave my OpenBSD servers as "vanilla" as possible (trying to rely only on the standard/base installation without installing additional software from the packaging system or ports tree). All my scripts are written either Bourne shell or in Perl here. So there is no need to install additional interpreters.</li> <li>Here and there, I drop a Perl one-liner in order to get stuff done (work and personally). A wise Perl Monk would say: "One one-liner a day keeps the troubles away".</li> </ul> <p>Btw.: Did you know that the first version of PHP was a set of Perl snippets? Only later, PHP became an independent programming language.</p> <a class="textlink" href="https://www.perl.org">https://www.perl.org</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Creative universe</title> <link href="gemini://foo.zone/gemfeed/2022-04-10-creative-universe.gmi" /> <id>gemini://foo.zone/gemfeed/2022-04-10-creative-universe.gmi</id> <updated>2022-04-10T10:09:11+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>I have been participating in an annual work-internal project contest (we call it Pet Project contest) since I moved to London and switched jobs to my current employer. I am very happy to say that I won a 'silver' prize last week here π. Over the last couple of years I have been a finalist in this contest six times and won some kind of prize five times. Some of my projects were also released as open source software. One had a magazine article published, and for another one I wrote an article on my employer's engineering blog. If you have followed all my posts on this blog (the one you are currently reading), then you have probably figured out what these projects were:. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Creative universe</h1> <pre> . + . . . . . . . . . * . * . . . . . . + . "You Are Here" . . + . . . . | . . . . . . | . . . +. + . \|/ . . . . . . V . * . . . . + . + . . . + . . + .+. . . . . + . . . . . . . . . . . . . ! / * . . . + . . - O - . . . + . . * . . / | . + . . . .. + . . . . . * . * . +.. . * . . . . . . . . + . . + - the universe </pre><br /> <p class="quote"><i>Published by Paul at 2022-04-10, last updated at 2022-04-18</i></p> <h2>Prelude</h2> <p>I have been participating in an annual work-internal project contest (we call it Pet Project contest) since I moved to London and switched jobs to my current employer. I am very happy to say that I won a "silver" prize last week here π. Over the last couple of years I have been a finalist in this contest six times and won some kind of prize five times. Some of my projects were also released as open source software. One had a magazine article published, and for another one I wrote an article on my employer's engineering blog. If you have followed all my posts on this blog (the one you are currently reading), then you have probably figured out what these projects were:</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.html">DTail - The distributed log tail program</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.html">Realistic load testing with I/O Riot for linux</a><br /> <p>Note that my latest silver prize project isn't open source software and because of that there is no public material I can refer to. Maybe the next one again?</p> <p>I want to point out that I never won the "gold" prize and it's the first time I won "silver", though. I believe, looking at the company's contest history, I am the employee with the most consecutive successful project submissions (my streak broke as I didn't participate last year) and am also the one with the highest successful project count in total. Sorry if this all sounds a bit self-promotional, but I think it is something to be proud of. Consistency beats a one-off success.</p> <p>I often put endless hours and sometimes sleepless nights into such projects and all of that in my own time. I, an engineer whose native tongue is not English, also have to present such a project in front of the CEO, CTO and CPO, the Chief Scientist, the founders of the company, and, if it is not enough, to all other staff of the company too. I usually also demonstrate a working prototype live on a production grid during the presentation. π</p> <p>So why would I sign up myself for such side projects? Isn't it a lot of stress and extra work? Besides the prize in form of money (you can not count on that, you may win or you may not win something) and recognition, there are also other motivational points:</p> <ul> <li>I want to learn new technologies or to deepen my knowledge of a given technology. I want to have a personal benefit from the project, even when I don't win any prize. So when the company is offering a contest, why not use it as a motivational trampoline? It's good to have a hard deadline for a project. And the project will also benefit the company in some way. So it's a win-win.</li> <li>I love the idea of combining several old things into a new thing. You can call this creativity. At work, we call this sometimes Lego: Building new things from given blocks. But I also love to add something new and unique to the mix, something that didn't exist as a Lego block before and could not be built by using only the already existing blocks.</li> </ul> <h2>How to be creative</h2> <p>How did I manage to be creative with all these Pet Projects? Unfortunately, there is no step-by-step guide I could point you to. But what I want to do in this blog post is share my personal experience so far.</p> <h3>Know which problem you want to solve</h3> <p>There must be a problem to be solved or a thing to be improved. It makes no sense to have a project without a goal. A problem might be obvious to you, and you don't even need to think about it. In that case, you are all set, and you can immerse yourself with the problem.</p> <p>If, however, you don't know what problem you want to solve: Do you really need to be creative? All problems are solved anyway, correct? In that case, just go on with your work. As you immerse yourself with your daily work, you will find a project naturally after a while. I don't believe you should artificially find a project. It should come naturally to you. You should have an interest in the problem domain and a strong desire to find a proper solution for the problem. Artificially created projects come with the catch that you might give up on it rather sooner than later due to lack of motivation and desire.</p> <h3>Immerse / deep dive</h3> <p>If you want to be creative in a field, you must know a lot about it. The more you know about it, the more dots you can connect. When you are learning a new technology or if you are thinking about a tough problem, do it thoroughly. Don't let anything distract you. Read books, watch lectures, listen to podcasts or audiobooks about the topic, talk to other people working on similar topics. Immerse yourself for multiple hours per day, multiple days per week, multiple weeks and maybe even months. Create your own inner universe.</p> <p>But once a day is over, shut your thoughts down. Hit the off-switch. Stop thinking about this problem for the remainder of the day. This can be difficult, as you didn't solve the problem- or didn't understand everything of the new technology yet, and you really want to get to the point. But be strict to yourself and stop thinking about it for a while.</p> <p>You must understand that you are more than just your conscious thoughts. Your brain does a lot of work in the background that you aren't aware of consciously. What happens when you stop consciously thinking about a problem is that your brain continues processing it. You might have experienced the "AHA"-effect, where suddenly you had an idea out of nowhere (e.g. during a walk, in the shower, or in the morning when you woke up)? This is your conscious self downloading a result from the background thread of your brain. You can elevate this effect by immersing with the problem immensely before giving your conscious self a break.</p> <p>Sometimes, depending on how deeply you were immersed, you may need to let the problem go for a couple of days (e.g. over a weekend) before you can download a new insight.</p> <h3>Always have a notebook with you</h3> <p>Wherever you go, ensure that you always have something to take notes with you. Once you have an idea from nowhere (or from your unconscious but volatile brain), you really want to write it down to persistent storage. It doesn't matter what kind of note-taking device you use here. It can be a paper journal, or it can be your smartphone.Β </p> <p>My advice is to have a separate section where you put your notes of all of your ideas. At home or in the office, I write everything in my paper journal. When I am not at home, I use a digital note-taking app on my phone. Later, I copy the digital notes from it into a project-specific section of my paper journal.</p> <p>I prefer taking notes on paper, as it gives you more freedom of how to structure it. You can use any colour, and you can also quickly create diagrams without the use of any complex computer program.</p> <h3>When you didn't sleep enough</h3> <p>I noticed while being sleep-deprived I am (obviously) unable to concentrate so much, and it is difficult to be immersed in a focused way. But on the other hand, I am a lot more creative compared to when I am not sleep-deprived. Then, my brain suddenly presents me with connections I have not thought of before. Here, I usually write any idea I have down on a sheet of paper or in my journal, so I can pick it up later. I then often continue to philosophise about a possible solution. Sometimes to the absurd, and sometimes to something pretty useful.</p> <p>I am not saying that you should skip sleep. By all means, if you can sleep, then sleep. But there are some days when you don't manage to sleep (e.g. think too much about a project and didn't manage to hit the off switch). This is, where you can take advantage of your current state of mind. Disclaimer: Skipping sleep damages your health. So, please don't try this out on purpose. But in case you had a bad night, remember this trick.</p> <h3>Have regular breaks and relax</h3> <p>Have regular breaks. Don't skip your lunch break. Best, have a walk during lunchtime. And after work, do some kind of workout or visit a sports class. Do something completely unrelated to work before going to sleep (e.g. visit a parallel universe and read a Science Fiction novel). In short: Totally hit the off-switch after your work for the day is finished. You will be much more energised and motivated the next time you open your work laptop.</p> <a class="textlink" href="../other-resources.html">I personally love to read Science Fiction novels</a><br /> <p>I skip breakfast and lunch during the week. This means that on average, I intermittent fast on average 18-20 hours daily. It may sound odd to most people (who don't intermittent fast), but in a fasted state, I can be even more focused, thus helping me immerse myself in something even more. Not having breakfast and lunch also gives me back some time for other things (e.g. a nice walk, where I listen to podcasts or audiobooks or practise using my camera (street photography)). I relax my routine during the week ends, where I may enjoy a meal at any given time of the day.</p> <p>It also helps a lot eat healthy. Healthy food makes your brain work more efficiently. But I won't go into more details here, as nothing is as contradictory as the health and food industry. Conduct your own research. Your opinion may be different from mine anyway, and everyone's body reacts to certain foods differently. What for one person works may not work for another person. But be aware that you will find a lot of wrong and also conflicting information on the internet. So always use multiple resources for your research.</p> <h3>Upside-down approach</h3> <p>It's easy to fall into the habit of "boxed" thinking, but creativity is exactly the opposite. Once in a while, make yourself think "Is A really required to do B?". Many assumptions are believed to be true. But are they really? A concrete example: "At work we only use the programming language L and framework F" and therefore, it is the standard we must use.</p> <p>Another way to think about it is "Is there an alternative way to accomplish the desired result? What if there were no programming language L and framework F? What would I do instead?". Maybe you would use programming language X to implement your own domain-specific language, which does what framework F would have done but in exactly the way you want to + much more flexible than F! And maybe language X would be much better suitable than L to implement a DSL anyway. Conclusion: It never hurts to verify your assumptions.</p> <p>Often, you will also find solutions to problems you never intended to solve and find new problems you never imagined to actually exist. That might not be a bad thing, but it might sidetrack you on your path to finding a solution for a particular problem. So be careful not to get sidetracked too much.Β In this case, just save a note for later reference (maybe your next Pet Project?) somewhere and go on with your actual problem.</p> <p>Don't be afraid to think about weird and unconventional solutions. Sometimes, the most unconventional solution is the best solution to a problem. Also, try to keep to the basics. The best solutions are KISS.</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-09-12-keep-it-simple-and-stupid.html">Keep it simple and stupid</a><br /> <p>A small additional trick: you can train yourself to generate new and unconventional ideas. Just write down 20 random ideas every day. It doesn't matter what the ideas are about and whether they are useful or not. The purpose of this exercise is to make your brain think about something new and unconventional. These can be absurd ideas such as "Jump out of the window naked in the morning in order to wake up faster". Of course, you would never do that, but at least you had an idea and made your brain generate something.</p> <h3>Don't be busy all the time</h3> <p>Especially as a DevOps Engineer, you could be busy all the time with small, but frequent, ad hoc tasks. Don't lose yourself here. Yes, you should pay attention to your job and those tasks, but you should also make some room for creativity. Don't schedule meeting after ad hoc work after meeting after Jira ticket work after another Jira ticket. There should also be some "free" space in your calendar.</p> <p>Use the "free" time to play around with your tech stack. Try out new options, explore the system metrics, explore new tools, etc. Dividends will pay off with new ideas, which you would have never come up with if you were "just busy" like a machine.</p> <p>Sometimes, I pick a Unix manual page of a random command and start reading it. I have a bash helper function which will pick one for me:</p> <pre> β― where learn learn () { man $(ls /bin /sbin /usr/bin /usr/sbin 2>/dev/null | shuf -n 1) | sed -n "/^NAME/ { n;p;q }" } β― learn perltidy - a perl script indenter and reformatter β― learn timedatectl - Control the system time and date </pre><br /> <h2>Conclusion</h2> <p>This all summarises advice I have, really. Β I hope this was interesting and helpful for you.</p> <p>I have one more small tip: I never published a blog post the same day I wrote it. After finishing writing it, I always wait for a couple of days. In all cases so far, I had an additional idea to add or to fine tune the blog post.</p> <p>Another article I found interesting and relevant is</p> <a class="textlink" href="https://thesephist.com/posts/paradise/">Creative Paradise by The Sephist</a><br /> <p>Relevant books I can recommend are:</p> <ul> <li>Consciousness: A Very Short Introduction; Susan Blackmore; Oxford Uiversity Press</li> <li>Deep Work; Cal Newport; Piatkus</li> <li>So Good They Can't Ignore You; Cal Newport; Business Plus</li> <li>The Off Switch; Mark Cropley; Virgin Books</li> <li>Ultralearning; Scott Young; Thorsons</li> </ul> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>The release of DTail 4.0.0</title> <link href="gemini://foo.zone/gemfeed/2022-03-06-the-release-of-dtail-4.0.0.gmi" /> <id>gemini://foo.zone/gemfeed/2022-03-06-the-release-of-dtail-4.0.0.gmi</id> <updated>2022-03-06T18:11:39+00:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>I have recently released DTail 4.0.0 and this blog post goes through all the new goodies. You can also read my previous post about DTail in case you wonder what DTail is:. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>The release of DTail 4.0.0</h1> <pre> ,_---~~~~~----._ _,,_,*^____ _____``*g*\"*, ____ _____ _ _ / __/ /' ^. / \ ^@q f | _ \_ _|_ _(_) | @f | @)) | | @)) l 0 _/ | | | || |/ _` | | | \`/ \~____ / __ \_____/ \ | |_| || | (_| | | | | _l__l_ I |____/ |_|\__,_|_|_| } [______] I ] | | | | ] ~ ~ | | | | | </pre><br /> <p class="quote"><i>Published by Paul at 2022-03-06</i></p> <p>I have recently released DTail 4.0.0 and this blog post goes through all the new goodies. You can also read my previous post about DTail in case you wonder what DTail is:</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.html">DTail - The distributed log tail program</a><br /> <p>If you want to jump directly to DTail, do it here (there are nice animated gifs which demonstrates the usage pretty well):</p> <a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br /> <h2>So, what's new in 4.0.0?</h2> <h3>Rewritten logging</h3> <p>For DTail 4, logging has been completely rewritten. The new package name is "internal/io/dlog". I rewrote the logging because DTail is a special case here: There are logs processed by DTail, there are logs produced by the DTail server itself, there are logs produced by a DTail client itself, there are logs only logged by a DTail client, there are logs only logged by the DTail server, and there are logs logged by both, server and client. There are also different logging levels and outputs involved.</p> <p>As you can imagine, it becomes fairly complex. There is no ready Go off-shelf logging library which suits my needs and the logging code in DTail 3 was just one big source code file with global variables and it wasn't sustainable to maintain anymore. So why not rewrite it for profit and fun? </p> <p>There's a are new log level structure now (The log level now can be specified with the "-logLevel" command line flag):</p> <pre> // Available log levels. const ( None level = iota Fatal level = iota Error level = iota Warn level = iota Info level = iota Default level = iota Verbose level = iota Debug level = iota Devel level = iota Trace level = iota All level = iota ) </pre><br /> <p>DTail also supports multiple log outputs (e.g. to file or to stdout). More are now easily pluggable with the new logging package. The output can also be "enriched" (default) or "plain" (read more about that further below).</p> <h3>Configurable terminal color codes</h3> <p>A complaint I received from the users of DTail 3 were the terminal colors used for the output. Under some circumstances (terminal configuration) it made the output difficult to read so that users defaulted to "--noColor" (disabling colored output completely). I toke it by heart and also rewrote the color handling. It's now possible to configure the foreground and background colors and an attribute (e.g. dim, bold, ...).</p> <p>The example "dtail.json" configuration file represents the default (now, more reasonable default) color codes used, and it is free to the user to customize them:</p> <pre> { "Client": { "TermColorsEnable": true, "TermColors": { "Remote": { "DelimiterAttr": "Dim", "DelimiterBg": "Blue", "DelimiterFg": "Cyan", "RemoteAttr": "Dim", "RemoteBg": "Blue", "RemoteFg": "White", "CountAttr": "Dim", "CountBg": "Blue", "CountFg": "White", "HostnameAttr": "Bold", "HostnameBg": "Blue", "HostnameFg": "White", "IDAttr": "Dim", "IDBg": "Blue", "IDFg": "White", "StatsOkAttr": "None", "StatsOkBg": "Green", "StatsOkFg": "Black", "StatsWarnAttr": "None", "StatsWarnBg": "Red", "StatsWarnFg": "White", "TextAttr": "None", "TextBg": "Black", "TextFg": "White" }, "Client": { "DelimiterAttr": "Dim", "DelimiterBg": "Yellow", "DelimiterFg": "Black", "ClientAttr": "Dim", "ClientBg": "Yellow", "ClientFg": "Black", "HostnameAttr": "Dim", "HostnameBg": "Yellow", "HostnameFg": "Black", "TextAttr": "None", "TextBg": "Black", "TextFg": "White" }, "Server": { "DelimiterAttr": "AttrDim", "DelimiterBg": "BgCyan", "DelimiterFg": "FgBlack", "ServerAttr": "AttrDim", "ServerBg": "BgCyan", "ServerFg": "FgBlack", "HostnameAttr": "AttrBold", "HostnameBg": "BgCyan", "HostnameFg": "FgBlack", "TextAttr": "AttrNone", "TextBg": "BgBlack", "TextFg": "FgWhite" }, "Common": { "SeverityErrorAttr": "AttrBold", "SeverityErrorBg": "BgRed", "SeverityErrorFg": "FgWhite", "SeverityFatalAttr": "AttrBold", "SeverityFatalBg": "BgMagenta", "SeverityFatalFg": "FgWhite", "SeverityWarnAttr": "AttrBold", "SeverityWarnBg": "BgBlack", "SeverityWarnFg": "FgWhite" }, "MaprTable": { "DataAttr": "AttrNone", "DataBg": "BgBlue", "DataFg": "FgWhite", "DelimiterAttr": "AttrDim", "DelimiterBg": "BgBlue", "DelimiterFg": "FgWhite", "HeaderAttr": "AttrBold", "HeaderBg": "BgBlue", "HeaderFg": "FgWhite", "HeaderDelimiterAttr": "AttrDim", "HeaderDelimiterBg": "BgBlue", "HeaderDelimiterFg": "FgWhite", "HeaderSortKeyAttr": "AttrUnderline", "HeaderGroupKeyAttr": "AttrReverse", "RawQueryAttr": "AttrDim", "RawQueryBg": "BgBlack", "RawQueryFg": "FgCyan" } } }, ... } </pre><br /> <p>You notice the different sections - these are different contexts:</p> <ul> <li>Remote: Color configuration for all log lines sent remotely from the server to the client. </li> <li>Client: Color configuration for all lines produced by a DTail client by itself (e.g. status information).</li> <li>Server: Color configuration for all lines produced by the DTail server by itself and sent to the client (e.g. server warnings or errors)</li> <li>MaprTable: Color configuration for the map-reduce table output.</li> <li>Common: Common color configuration used in various places (e.g. when it's not clear what's the current context of a line).</li> </ul> <p>When you do so, make sure that you check your "dtail.json" against the JSON schema file. This is to ensure that you don't configure an invalid color accidentally (requires "jsonschema" to be installed on your computer). Furthermore, the schema file is also a good reference for all possible colors available:</p> <pre> jsonschema -i dtail.json schemas/dtail.schema.json </pre><br /> <h3>Serverless mode</h3> <p>All DTail commands can now operate on log files (and other text files) directly without any DTail server running. So there isn't a need anymore to install a DTail server when you are on the target server already anyway, like the following example shows:</p> <pre> % dtail --files /var/log/foo.log </pre><br /> <p>or</p> <pre> % dmap --files /var/log/foo.log --query 'from TABLE select .... outfile result.csv' </pre><br /> <p>The way it works in Go code is that a connection to a server is managed through an interface and in serverless mode DTail calls through that interface to the server code directly without any TCP/IP and SSH connection made in the background. This means, that the binaries are a bit larger (also ship with the code which normally would be executed by the server) but the increase of binary size is not much.</p> <h3>Shorthand flags</h3> <p>The "--files" from the previous example is now redundant. As a shorthand, It is now possible to do the following instead:</p> <pre> % dtail /var/log/foo.log </pre><br /> <p>Of course, this also works with all other DTail client commands (dgrep, dcat, ... etc).</p> <h3>Spartan (aka plain) mode</h3> <p>There's a plain mode, which makes DTail only print out the "plain" text of the files operated on (without any DTail specific enriched output). E.g.:</p> <pre> % dcat --plain /etc/passwd > /etc/test % diff /etc/test /etc/passwd # Same content, no diff </pre><br /> <p>This might be useful if you wanted to post-process the output. </p> <h3>Standard input pipe</h3> <p>In serverless mode, you might want to process your data in a pipeline. You can do that now too through an input pipe:</p> <pre> % dgrep --plain --regex 'somethingspecial' /var/log/foo.log | dmap --query 'from TABLE select .... outfile result.csv' </pre><br /> <p>Or, use any other "standard" tool:</p> <pre> % awk '.....' < /some/file | dtail .... </pre><br /> <h3>New command dtailhealth</h3> <p>Prior to DTail 4, there was a flag for the "dtail" command to check the health of a remote DTail server (for use with monitoring system such as Nagios). That has been moved out to a separate binary to reduce complexity of the "dtail" command. The following checks whether DTail is operational on the current machine (you could also check a remote instance of DTail server, just adjust the server address).</p> <pre> % cat check_dtail.sh #!/bin/sh exec /usr/local/bin/dtailhealth --server localhost:2222 </pre><br /> <h3>Improved documentation</h3> <p>Some features, such as custom log formats and the map-reduce query language, are now documented. Also, the examples have been updated to reflect the new features added. This also includes the new animated example Gifs (plus documentation how they were created).</p> <p>I must admit that not all features are documented yet:</p> <ul> <li>Server side scheduled map-reduce queries</li> <li>Server side continuous map-reduce queries</li> <li>Some more docs about terminal color customization</li> <li>Some more docs about log levels</li> </ul> <p>That will be added in one of the future releases. </p> <h3>Integration testing suite</h3> <p>DTail comes already with some unit tests, but what's new is a full integration testing suite which covers all common use cases of all the commands (dtail, dcat, dgrep, dmap) with a server backend and also in serverless mode.</p> <p>How are the tests implemented? All integration tests are simply unit tests in the "./integrationtests" folder. They must be explicitly activated with:</p> <pre> % export DTAIL_INTEGRATION_TEST_RUN_MODE=yes </pre><br /> <p>Once done, first compile all commands, and then run the integration tests:</p> <pre> % make . . . % go clean -testcache % go test -race -v ./integrationtests </pre><br /> <h3>Improved code</h3> <p>Not that the code quality of DTail has been bad (I have been using Go vet and Go lint for previous releases and will keep using these), but this time I had new tools (such as SonarQube and BlackDuck) in my arsenal to:</p> <ul> <li>Reduce the complexity of a couple of functions (splitting code up into several smaller functions)</li> <li>Avoid repeating code (this version of DTail doesn't use Go generics yet, though).</li> </ul> <p>Other than that, a lot of other code has been refactored as I saw fit.</p> <h3>Use of memory pools</h3> <p>DTail makes excessive use of string builder and byte buffer objects. For performance reasons, those are now re-used from memory pools.</p> <h2>What's next</h2> <p>DTail 5 won't be released any time soon I guess, but some 4.x.y releases will follow this year fore sure. I can think of:</p> <ul> <li>New (but backwards compatible) features which don't require a new major version bump (some features have been requested at work internally).</li> <li>Even more improved documentation.</li> <li>Dependency updates.</li> </ul> <p>I use usually DTail at work, but I have recently installed it on my personal OpenBSD machines too. I might write a small tutorial here (and I might also add the rc scripts as examples to one of the next DTail releases).</p> <p>I am a bit busy at the moment with two other pet projects of mine (one internal work-project, and one personal one, the latter you will read about in the next couple of months). If you have ideas (or even a patch), then please don't hesitate to contact me (either via E-Mail or a request at GitHub).</p> <p>Thanks!</p> <p>Paul</p> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Computer operating systems I use(d)</title> <link href="gemini://foo.zone/gemfeed/2022-02-04-computer-operating-systems-i-use.gmi" /> <id>gemini://foo.zone/gemfeed/2022-02-04-computer-operating-systems-i-use.gmi</id> <updated>2022-02-04T09:58:22+00:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>This is a list of Operating Systems I currently use. This list is in no particular order and also will be updated over time. The very first operating system I used was MS-DOS (mainly for games) and the very first Unix like operating system I used was SuSE Linux 5.3. My first smartphone OS was Symbian on a clunky Sony Ericsson device.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Computer operating systems I use(d)</h1> <pre> /( )` \ \___ / | /- _ `-/ ' (/\/ \ \ /\ / / | ` \ O O ) / | `-^--'`< ' (_.) _ ) / `.___/` / `-----' / <----. __ / __ \ <----|====O)))==) \) /==== <----' `--' `.__,' \ | | \ / ______( (_ / \______ (FL) ,' ,-----' | \ `--{__________) \/ "Berkeley Unix Daemon" </pre><br /> <p class="quote"><i>Published by Paul at 2022-02-04, updated 2022-02-18</i></p> <p>This is a list of Operating Systems I currently use. This list is in no particular order and also will be updated over time. The very first operating system I used was MS-DOS (mainly for games) and the very first Unix like operating system I used was SuSE Linux 5.3. My first smartphone OS was Symbian on a clunky Sony Ericsson device.</p> <h2>Fedora Linux</h2> <p>Fedora Linux is the operating system I use on my primary (personal) laptop. It's a ThinkPad X1 Carbon Gen. 9. Lenovo which comes along with official Lenovo Linux support. I already noticed hardware firmware updates being installed directly through Fedora from Lenovo. Fedora is a real powerhouse, cutting-edge and reasonably stable at the same time. It's baked by Red Hat.</p> <p>I also use Fedora on my Microsoft Surface Go 2 convertible tablet. Fedora works quite OK (and much better than Windows) on this device. It's also the perfect travel companion.</p> <p>I use the GNOME Desktop on my Fedora boxes. I have memorized and customized a bunch of keyboard shortcuts. But the fact that I mostly work in the terminal (with tmux) makes the Desktop environment I use only secondary.</p> <h2>EndeavourOS</h2> <p>I installed EndeavourOS on my (older) ThinkPad X240 to try out an Arch based Linux distribution. I also could have installed plain Arch, but I don't see the point when there is EndeavourOS. EndeavourOS is as close as you can get to the plain Arch experience but with an easy installer. I am not saying that it's difficult to install plain Arch but it's, unless you are new to Linux and want to learn about the installation procedure, just waste of time in my humble opinion. Give Linux From Scratch a shot instead if you really want to learn about Linux.</p> <a class="textlink" href="https://www.linuxfromscratch.org/">https://www.linuxfromscratch.org/</a><br /> <p>On EndeavourOS, I use the Xfce desktop environment which feels very snappy and fast on the X240 (which I purchased back in 2014). Usually, I have my X240 standing right next to my work laptop and use it for playing music (mainly online radio streams), for personal note taking and occasional emailing and instant messaging.</p> <p>As this is a rolling Linux distribution there are a lot of software updates coming through every day. Sometimes, it only takes a minute until the next version of a package is available. Honestly, I find that a bit annoying to constantly catch up with all the updates. As for now I will live with it and/or automate it a bit more. It'll be OK if it breaks occasionally, as this is not my primary laptop anyway. </p> <p>Arch Linux and EndeavourOS are community distributions. This means, that there is no big corporation in the backyard lurking around. They won't give you the firmware updates for cutting edge hardware out of the box, though, but they are still a very good choice for hobbyist and also for older hardware where future firmware updates are less likely to happen.</p> <p>I am very happy with the package availability through the official repository and AUR.</p> <a class="textlink" href="https://endeavouros.com/">https://endeavouros.com/</a><br /> <h2>FreeBSD</h2> <p>I have run FreeBSD in many occasions. Right after SuSE Linux, FreeBSD (around 4.x) was the second open source system I used in my life on regular basis. I didn't even go to university yet then I started using it :-). Also, a former employer of mine even allowed me to install FreeBSD on my main workstation (which I actually did and used it for a couple of years). </p> <p>I remember it used to be a pain bootstrapping Java for FreeBSD due to the lack of pre-compiled binary packages. You had first to enable the Linux compatibility layer, then install Linux Java, and then compile FreeBSD Java with the bootstrapped Linux Java (yes, Java is mainly programmed in C++, but for some reason compiling Java for FreeBSD also required an installation of Java). Nowadays, there are ready OpenJDK binary packages you could install. So things have improved a lot since.</p> <p>FreeBSD always had a place somewhere in my life:</p> <ul> <li>On a Desktop PC (personal and work)</li> <li>On a Laptop</li> <li>On a webserver, FTP server, DNS server, mail server</li> <li>On a server offering FreeBSD jails to customers for rent</li> <li>As an experiment running Debian GNU/kFreeBSD inside of jails</li> </ul> <p>Debian GNU/kFreeBSD is now dead (same is my experiment)...</p> <a class="textlink" href="https://www.debian.org/ports/kfreebsd-gnu/">https://www.debian.org/ports/kfreebsd-gnu/</a><br /> <p>...but I still have saved and old uname output :-):</p> <pre> [root@saturn /usr/jail/serv14/etc] # jexec 21 bash root@rhea:/ # uname -a GNU/kFreeBSD rhea.buetow.org 8.0-RELEASE-p5 FreeBSD 8.0-RELEASE-p5 #2: Sat Nov 27 13:10:09 CET 2010 root@saturn.buetow.org:/usr/obj/usr/srcs/freebsd.src8/src/sys/SERV10 x86 64 amd64 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GNU/kFreeBSD </pre><br /> <p>Currently, I use FreeBSD on my personal NAS server. The server is a regular PC with a bunch of hard drives and a ZFS RAIDZ (with 4x2TB drives) + a couple of external backup drives.</p> <a class="textlink" href="https://www.FreeBSD.org">https://www.FreeBSD.org</a><br /> <h2>CentOS 7</h2> <p>While CentOS 8 is already out of support, I still use CentOS 7 (which will receive security updates until 2024). CentOS 7 runs in a cloud VM and is the home to my personal NextCloud and Wallabag installations. You probably know already NextCloud. About Wallabag: It is a great free and open source alternative to Pocket (for reading articles from the web offline later). Yes, you can pay for a Wallabag subscription, but you can also host it for free on your own server.</p> <a class="textlink" href="https://nextcloud.com">NextCloud</a><br /> <a class="textlink" href="https://www.wallabag.it/en">Wallabag</a><br /> <p>The reason I use Linux and not *BSD at the moment for these services is Docker. With Docker, it's so easy-peasy to get these up and running. I will have to switch to another OS before CentOS 7 runs out of support, though. It might be CentOS Stream, Rocky Linux, or, more likely, I will use FreeBSD. On FreeBSD there isn't Docker, but what can be done is to create a self-contained Jail for each of the web-apps. </p> <p>I have been using FreeBSD Jails for LAMP stacks before I started using CentOS. The reason why I switched to CentOS (it was still CentOS 6 at that time) in the first place was, that I wanted to try out something new.</p> <a class="textlink" href="https://www.centos.org">https://www.centos.org</a><br /> <h2>OpenBSD</h2> <p>I use two small OpenBSD "cloud" boxes for my "public facing internet front-ends". The services I run here are:</p> <ul> <li>HTTP server (serving this site via https://foo.zone)</li> <li>Gemini server (serving this site via gemini://foo.zone)</li> <li>MTA server (for receiving E-Mails to my hosts)</li> <li>Authorative DNS server (for all of my "domains")</li> <li>Some personal/private git repositories (accessible only via SSH)</li> </ul> <p>OpenBSD is a complete operating system. I love it due to it's "simplicity" and "correctness" and the good documentation (I love the manual pages in particular). OpenBSD is also known for its innovations in security. I must admin, though, that most Unix like operating system would be secure enough for my personal needs and that I don't really need to use OpenBSD here. But nevertheless, I think it's the ideal operating system for what I am using it for.</p> <p>The only softwares which were not part of the base system and I had to install additionally were the Gemini server (vger) and Git, which both were available as pre-compiled OpenBSD binary packages. So, besides of these two packages, it is indeed a pretty complete operating system for my use case.</p> <a class="textlink" href="https://www.openbsd.org">https://www.openbsd.org</a><br /> <h2>macOS (proprietary)</h2> <p>I have to use a MacBook Pro with macOS for work. What else can I say but that this would have never been my personal choice. At least macOS is a UNIX under the hood and comes with a decent terminal and there are plenty of terminal apps available via Brew. Some of the inner workings of macOS were actually forked from the FreeBSD project. </p> <a class="textlink" href="https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/BSD/BSD.html">developer.apple.com: BSD in macOS/Darwin</a><br /> <p>I find the macOS UI rather confusing.</p> <h2>LineageOS (mobile)</h2> <p>At some point I got fed up with big tech, like Google and Samsung (or Apple, but personally I don't use Apple), spying on me. So I purchased a Google phone (a midrange Pixel phone) and installed LineageOS, a free and open source distribution of Android, on it. I don't have anything from Google installed on it (not even the play store, I install my apps from F-Droid). It's my daily driver since mid 2021 now. </p> <p>So far the experience is not great but good. The main culprits are not having Google Maps, Google Gboard and the camera app. The latter lacks some features on LineageOS (e.g. No wide angle lens support). Also, I can't use my banking apps anymore. Sometimes apps crash for no apparent reason(s) but I get around it so far. I shouldn't spend so much time on my smartphone anyway! And the whole point of switching to LineageOS was to get away of big tech and therefore I should not complain :-). What I do like is that 95% the things I used to do on a proprietary mobile phone also can be done with LineageOS.</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-08-01-on-being-pedantic-about-open-source.html">Read also "The Midle Way" section of this blog post regarding smartphones.</a><br /> <p>There's also the excellent Termux app in the F-Droid store, which transforms the phone into a small Linux handheld device. I am able to run all of my Linux/Unix terminal apps with it.</p> <a class="textlink" href="https://lineageos.org/">https://lineageos.org/</a><br /> <a class="textlink" href="https://termux.com/">https://termux.com/</a><br /> <h2>Samsung's Stock Android (mobile proprietary)</h2> <p>Unfortunatley, I still have to keep my proprietary Android phone around. Sometimes, I really need to use some proprietary apps which are only available form the Google play store and also require the Google services installed on the phone. I don't carry this phone around all the time and I only use it intentionally for very specific use cases. I think this is the best compromise I can make.</p> <h2>iOS (mobile proprietary)</h2> <p>I have to use an iPhone for work. I like the hardware but I hate the OS (you can also call it spyOS), but it's the necessarries evil, unfortunately. Apple is even worse than Google here (despite claiming for themselves to produce the most secure phone(s)). I don't have it with me all the time or switched off when I don't need it. I also find iOS quite unintuitive to use.</p> <p>Being on-call for work means to to be reachable 24/7. This implies that the phone is carried around all the time (in an switched-on state). 1984 is now.</p> <a class="textlink" href="https://en.wikipedia.org/wiki/Nineteen_Eighty-Four">https://en.wikipedia.org/wiki/Nineteen_Eighty-Four</a><br /> <h2>Other OSes</h2> <h3>InfinyTime (smartwatch)</h3> <p>I use it on my PineTime smartwatch. Other than checking the time and my step count, I really don't do anything else fancy with it (yet). </p> <a class="textlink" href="https://www.pine64.org/pinetime/">https://www.pine64.org/pinetime/</a><br /> <a class="textlink" href="https://infinitime.io/">https://infinitime.io/</a><br /> <h3>motionEyeOS</h3> <p>I usually install an army of RaspberryPi 3's in my house before I travel for a prolonged amount of time. All Pi's are equipped with an camera and have motionEyeOS (Linux based video surveillance system) installed. There's a neat Android app in the F-Droid store which let's me keep an eye on everything. I make the Pi's accessible from the internet via reverse SSH tunnels through one of my frontend servers.</p> <a class="textlink" href="https://github.com/ccrisan/motioneyeos">https://github.com/ccrisan/motioneyeos</a><br /> <h3>Kobo OS (proprietary)</h3> <p>I use a Kobo Forma as my e-reader device. I have started to switch off the Wifi and to only sideload DRM free ePubs on it. Even offline, it's a fully capable reader device. I wouldn't like the Kobo to call home to Rakuten. I would love to replace it one day with an open source e-reader alternative like the PineNote. There are also some interesting attempts installing postmarketOS Linux on Kobo devices. The latter boots already, but is far from being usable as a normal e-reader.</p> <a class="textlink" href="https://www.pine64.org/pinenote/">The PineNote</a><br /> <a class="textlink" href="https://liliputing.com/2021/07/kobo-clara-hd-becomes-an-e-ink-linux-tablet-with-the-help-of-postmarketos.html">Kobo Clara HD becomes an e-ink Linux tablet</a><br /> <p>But as a fall-back, someone could still use the good old dead tree format!</p> <h3>Android TV (proprietary)</h3> <p>An Android TV box is used for watching movies and series on Netflix and Amazon Prime video (yes, I am human too and rely once in a while on big tech streaming services). The Android TV box is currently in the process of being replaced by OSMC, though. Most services seem to work fine with OSMC, but didn't get around tinkering with Netflix and Amazon there yet.</p> <a class="textlink" href="https://osmc.tv/">https://osmc.tv/</a><br /> <h2>Other OSes..</h2> <p>This section is just for the sake of having a complete list of all OSes I used for some significant amount of time. I might not use all of them any more...</p> <h3>NetBSD</h3> <p>I have been using NetBSD on an old Sun Sparcstation 10 as a student. I also have run NetBSD on a very old ThinkPad with 96MB!!! of RAM (even with X/evilWM). I also installed (but never really used) NetBSD on an HP Jornada 680. But that's all more than 10 years ago. I haven't looked at NetBSD for long time. I want to revive it on an "old" ThinkPad T450 of mine which I currently don't use.</p> <a class="textlink" href="https://netbsd.org">https://netbsd.org</a><br /> <h3>Other OSes in use...</h3> <a class="textlink" href="https://sailfish.org">SailfishOS - Nice mobile OS, but unfortunately includes proprietary components</a><br /> <a class="textlink" href="https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux">Red Hat Enterprose Linux - Only for some work stuff</a><br /> <h3>Other OSes not used any more...</h3> <a class="textlink" href="https://en.opensuse.org/Archive:S.u.S.E._Linux_5.3">SuSE Linux 5.3 - The first Linux OS I used</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/IRIX">SGI's IRIX - On a SGI Onyx 3200</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/MeeGo">MeeGo - On a Nokia N9</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/Microsoft_Windows">Microsoft Windows</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/MS-DOS">Microsoft DOS - With and without Windows 3.x</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/Symbian">Symbian - The first smartphone OS I used </a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/Wear_OS">WearOS - On a Google smartwatch</a><br /> <a class="textlink" href="https://www.debian.org">Debian GNU/Linux - Rock solid, but atm. I prefer Fedora/EndeavourOS</a><br /> <a class="textlink" href="https://www.ubuntu.com">Ubuntu Linux (based on Debian)</a><br /> <a class="textlink" href="https://www.linuxfromscratch.org/">Linux from scratch - The best way to learn Linux</a><br /> <a class="textlink" href="https://www.suse.com/products/server/">SUSE Linux Enterprise - Only for some work stuff</a><br /> <h3>Other OSes I only had a glance at...</h3> <a class="textlink" href="https://archiveos.org/opensolaris/">OpenSolaris - Continuation of the open source version of Solaris</a><br /> <a class="textlink" href="https://archlinuxarm.org/">Arch Linux ARM</a><br /> <a class="textlink" href="https://ecomstation.com/">eComStation - Continuation of IBM OS/2</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/Minix">Minix</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/OpenVMS">OpenVMS</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/OS/2">IBM OS/2 Warp</a><br /> <a class="textlink" href="https://freedos.org">FreeDOS - Open source alternative to DOS</a><br /> <a class="textlink" href="https://plan9.io/plan9/">Plan9 </a><br /> <a class="textlink" href="https://reactos.org/">ReactOS - A Microsoft Windows open source clone</a><br /> <a class="textlink" href="https://www.debian.org/ports/hurd/">Debian GNU/Hurd - Debian on the GNU kernel</a><br /> <a class="textlink" href="https://www.debian.org/ports/kfreebsd-gnu/">Debian GNU/kFreeBSD - Debian on the FreeBSD kernel</a><br /> <a class="textlink" href="https://www.gentoo.org">Gentoo Linux</a><br /> <a class="textlink" href="https://www.haiku-os.org/">Haiku - A BeOS open source clone</a><br /> <a class="textlink" href="https://www.oracle.com/solaris/solaris11/">Sun Solaris (now owned by Oracle)</a><br /> <a class="textlink" href="https://www.puredarwin.org/">OpenDarwin ("now" PureDarwin) - Open source operating system based on the open parts of macOS</a><br /> <h3>Other OSes which seem interesting...</h3> <a class="textlink" href="https://asteroidos.org/">Asteroids OS - Open source smartphone OS</a><br /> <a class="textlink" href="https://www.dragonflybsd.org/">DragonFly BSD - Fork of FreeBSD 4</a><br /> <a class="textlink" href="http://wiki.postmarketos.org/wiki/Phosh">Phosh (on postmarketOS) - A true Linux shell for the smartphone</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Welcome to the foo.zone</title> <link href="gemini://foo.zone/gemfeed/2022-01-23-welcome-to-the-foo.zone.gmi" /> <id>gemini://foo.zone/gemfeed/2022-01-23-welcome-to-the-foo.zone.gmi</id> <updated>2022-01-23T16:42:04+00:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>I don't count this as a real blog post, but more of an announcement (I aim to write one real post once monthly). From now on, 'foo.zone' is the new address of this site. All other addresses will still forward to it and eventually (based on the traffic still going through) will be deactivated.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Welcome to the foo.zone</h1> <pre> __ / _| ___ ___ _______ _ __ ___ | |_ / _ \ / _ \ |_ / _ \| '_ \ / _ \ | _| (_) | (_) | / / (_) | | | | __/ |_| \___/ \___(_)___\___/|_| |_|\___| </pre><br /> <p class="quote"><i>Published by Paul at 2022-01-23</i></p> <p>I don't count this as a real blog post, but more of an announcement (I aim to write one real post once monthly). From now on, "foo.zone" is the new address of this site. All other addresses will still forward to it and eventually (based on the traffic still going through) will be deactivated.</p> <p>As you can read on Wikipedia, "foo" is, alongside to "bar" and "baz", a metasyntactic variable (you know what I mean if you are a programmer or IT person):</p> <a class="textlink" href="https://en.wikipedia.org/wiki/Metasyntactic_variable">https://en.wikipedia.org/wiki/Metasyntactic_variable</a><br /> <h2>What is the foo zone?</h2> <p>It's my personal internet site and blog. Everything you read on this site is my personal opinion and experience. It's not intended to be anything professional. If you want my professional background, then go to my LinkedIn profile.</p> <p>Since I re-booted this blog last year, I struggled to find a good host name for it. I started off with "buetow.org", and later I switched halfway to "snonux.de". Buetow is my last name, and snonux relates to some of my internet nicknames and personal IT projects. I also have a "SnonuxBSD" ASCII-art banner in the motd of my FreeBSD based home-NAS.</p> <p>For a while, I was thinking about a better host name for this site, meeting the following criteria:</p> <ul> <li>Isn't directly linked to my name or my internet nicknames.</li> <li>Reflects the "nature" of this site.</li> <li>Is still pretty generic.</li> <li>Is "cool".</li> <li>Is short and easy to remember. </li> <li>Doesn't cost millions.</li> </ul> <p>So I think that foo.zone is the perfect match. It's a bit geeky, but so is this site. The meta-syntactic variable relates to computer science and programming, so does this site. Other than that, staying in this sphere, it's a pretty generic name.</p> <h2>To be in the .zone and not in a .surf club</h2> <p>I was pretty happy finding out that foo.zone was still available for registration. I stumbled across it just yesterday while I was playing around with my new authoritative DNS servers. I was actually quite surprised as usually such short SLDs (second level domains), especially "foo", are all taken already.</p> <p>As a funny bit, I almost chose "foo.surf" over "foo.zone" as in "surfing this site", but then decided against it as I would have to tell everyone that I am not into water sports so much. Well, on the other hand, I now may have to explain to non-programmers that I am not a fan of the rock band "Foo Fighters". But that will be acceptable, as I don't expect "normal" people visiting the foo zone as much anyway. If you reached as far, I have to congratulate you. You are not a normal person.</p> <h2>What about my old hosts</h2> <p>The host buetow.org will stay. However, not as the primary address for this site. I will keep using it for my personal internet infrastructure as well as for most of my E-Mail addresses. I used buetow.org for that over the past 10 years already anyway and that won't change any time soon. I don't know what I am going to do with snonux.de in the long run. A .de SLD (for Germany) is pretty cheap, so I might just keep it for now. </p> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Bash Golf Part 2</title> <link href="gemini://foo.zone/gemfeed/2022-01-01-bash-golf-part-2.gmi" /> <id>gemini://foo.zone/gemfeed/2022-01-01-bash-golf-part-2.gmi</id> <updated>2022-01-01T23:36:15+00:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>This is the second blog post about my Bash Golf series. This series is random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (in German language) blog, which I translated and refreshed with some new content.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Bash Golf Part 2</h1> <pre> '\ '\ . . |>18>> \ \ . ' . | O>> O>> . 'o | \ .\. .. . | /\ . /\ . . | / / . / / .' . | jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Art by Joan Stark, mod. by Paul Buetow </pre><br /> <p class="quote"><i>Published by Paul at 2022-01-01, last updated at 2022-01-05</i></p> <p>This is the second blog post about my Bash Golf series. This series is random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (in German language) blog, which I translated and refreshed with some new content.</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.html">Bash Golf Part 1</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2022-01-01-bash-golf-part-2.html">Bash Golf Part 2 (you are reading this atm.)</a><br /> <h2>Redirection</h2> <p>Let's have a closer look at Bash redirection. As you might already know that there are 3 standard file descriptors:</p> <ul> <li>0 aka stdin (standard input)</li> <li>1 aka stdout (standard output)</li> <li>2 aka stderr (standard error output)</li> </ul> <p>These are most certainly the ones you are using on regular basis. "/proc/self/fd" lists all file descriptors which are open by the current process (in this case: the current Bash shell itself):</p> <pre> β― ls -l /proc/self/fd/ total 0 lrwx------. 1 paul paul 64 Nov 23 09:46 0 -> /dev/pts/9 lrwx------. 1 paul paul 64 Nov 23 09:46 1 -> /dev/pts/9 lrwx------. 1 paul paul 64 Nov 23 09:46 2 -> /dev/pts/9 lr-x------. 1 paul paul 64 Nov 23 09:46 3 -> /proc/162912/fd </pre><br /> <p>The following examples demonstrate two different ways to accomplish the same thing. The difference is that the first command is directly printing out "Foo" to stdout and the second command is explicitly redirecting stdout to its own stdout file descriptor:</p> <pre> β― echo Foo Foo β― echo Foo > /proc/self/fd/0 Foo </pre><br /> <p>Other useful redirections are:</p> <ul> <li>Redirect stderr to stdin: "echo foo 2>&1"</li> <li>Redirect stdin to stderr: "echo foo >&2"</li> </ul> <p>It is, however, not possible to redirect multiple times within the same command. E.g. the following won't work. You would expect stdin to be redirected to stderr and then stderr to be redirected to /dev/null. But as the example shows, Foo is still printed out:</p> <pre> β― echo Foo 1>&2 2>/dev/null Foo </pre><br /> <p class="quote"><i>Update: A reader sent me an email and pointed out that the order of the redirections is important. </i></p> <p>As you can see, the following will not print out anything:</p> <pre> β― echo Foo 2>/dev/null 1>&2 β― </pre><br /> <p>A good description (also pointed out by the reader) can be found here:</p> <a class="textlink" href="https://wiki.bash-hackers.org/howto/redirection_tutorial#order_of_redirection_ie_file_2_1_vs_2_1_file">Order of redirection</a><br /> <p>Ok, back to the original blog post. You can also use grouping here (neither of these commands will print out anything to stdout):</p> <pre> β― { echo Foo 1>&2; } 2>/dev/null β― ( echo Foo 1>&2; ) 2>/dev/null β― { { { echo Foo 1>&2; } 2>&1; } 1>&2; } 2>/dev/null β― ( ( ( echo Foo 1>&2; ) 2>&1; ) 1>&2; ) 2>/dev/null β― </pre><br /> <p>A handy way to list all open file descriptors is to use the "lsof" command (that's not a Bash built-in), whereas $ is the process id (pid) of the current shell process:</p> <pre> β― lsof -a -p $ -d0,1,2 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME bash 62676 paul 0u CHR 136,9 0t0 12 /dev/pts/9 bash 62676 paul 1u CHR 136,9 0t0 12 /dev/pts/9 bash 62676 paul 2u CHR 136,9 0t0 12 /dev/pts/9 </pre><br /> <p>Let's create our own descriptor "3" for redirection to a file named "foo":</p> <pre> β― touch foo β― exec 3>foo # This opens fd 3 and binds it to file foo. β― ls -l /proc/self/fd/3 l-wx------. 1 paul paul 64 Nov 23 10:10 \ /proc/self/fd/3 -> /home/paul/foo β― cat foo β― echo Bratwurst >&3 β― cat foo Bratwurst β― exec 3>&- # This closes fd 3. β― echo Steak >&3 -bash: 3: Bad file descriptor </pre><br /> <p>You can also override the default file descriptors, as the following example script demonstrates:</p> <pre> β― cat grandmaster.sh #!/usr/bin/env bash # Write a file data-file containing two lines echo Learn You a Haskell > data-file echo for Great Good >> data-file # Link fd with fd 6 (saves default stdin) exec 6<&0 # Overwrite stdin with data-file exec < data-file # Read the first two lines from it declare LINE1 LINE2 read LINE1 read LINE2 # Print them echo First line: $LINE1 echo Second line: $LINE2 # Restore default stdin and delete fd 6 exec 0<&6 6<&- </pre><br /> <p>Let's execute it:</p> <pre> β― chmod 750 ./grandmaster.sh β― ./grandmaster.sh First line: Learn You a Haskell Second line: for Great Good </pre><br /> <h2>HERE</h2> <p>I have mentioned HERE-documents and HERE-strings already in this post. Let's do some more examples. The following "cat" receives a multi line string from stdin. In this case, the input multi line string is a HERE-document. As you can see, it also interpolates variables (in this case the output of "date" running in a subshell).</p> <pre> β― cat <<END > Hello World > Itβs $(date) > END Hello World It's Fri 26 Nov 08:46:52 GMT 2021 </pre><br /> <p>You can also write it the following way, but that's less readable (it's good for an obfuscation contest):</p> <pre> β― <<END cat > Hello Universe > Itβs $(date) > END Hello Universe It's Fri 26 Nov 08:47:32 GMT 2021 </pre><br /> <p>Besides of an HERE-document, there is also a so-called HERE-string. Besides of...</p> <pre> β― declare VAR=foo β― if echo "$VAR" | grep -q foo; then > echo '$VAR ontains foo' > fi $VAR ontains foo </pre><br /> <p>...you can use a HERE-string like that:</p> <pre> β― if grep -q foo <<< "$VAR"; then > echo '$VAR contains foo' > fi $VAR contains foo </pre><br /> <p>Or even shorter, you can do:</p> <pre> β― grep -q foo <<< "$VAR" && echo '$VAR contains foo' $VAR contains foo </pre><br /> <p>You can also use a Bash regex to accomplish the same thing, but the points of the examples so far were to demonstrate HERE-{documents,strings} and not Bash regular expressions:</p> <pre> β― if [[ "$VAR" =~ foo ]]; then echo yay; fi yay </pre><br /> <p>You can also use it with "read":</p> <pre> β― read a <<< ja β― echo $a ja β― read b <<< 'NEIN!!!' β― echo $b NEIN!!! β― dumdidumstring='Learn you a Golang for Great Good' β― read -a words <<< "$dumdidumstring" β― echo ${words[0]} Learn β― echo ${words[3]} Golang </pre><br /> <p>The following is good for an obfuscation contest too:</p> <pre> β― echo 'I like Perl too' > perllove.txt β― cat - perllove.txt <<< "$dumdidumstring" Learn you a Golang for Great Good I like Perl too </pre><br /> <h2>RANDOM</h2> <p>Random is a special built-in variable containing a different pseudo random number each time it's used.</p> <pre> β― echo $RANDOM 11811 β― echo $RANDOM 14997 β― echo $RANDOM 9104 </pre><br /> <p>That's very useful if you want to randomly delay the execution of your scripts when you run it on many servers concurrently, just to spread the server load (which might be caused by the script run) better.</p> <p>Let's say you want to introduce a random delay of 1 minute. You can accomplish it with:</p> <pre> β― cat ./calc_answer_to_ultimate_question_in_life.sh #!/usr/bin/env bash declare -i MAX_DELAY=60 random_delay () { local -i sleep_for=$((RANDOM % MAX_DELAY)) echo "Delaying script execution for $sleep_for seconds..." sleep $sleep_for echo 'Continuing script execution...' } main () { random_delay # From here, do the real work. Calculating the answer to # the ultimate question can take billions of years.... : .... } main β― β― ./calc_answer_to_ultimate_question_in_life.sh Delaying script execution for 42 seconds... Continuing script execution... </pre><br /> <h2>set -x and set -e and pipefile</h2> <p>In my opinion, -x and -e and pipefile are the most useful Bash options. Let's have a look at them one after another.</p> <h3>-x</h3> <p>-x prints commands and their arguments as they are executed. This helps to develop and debug your Bash code:</p> <pre> β― set -x β― square () { local -i num=$1; echo $((num*num)); } β― num=11; echo "Square of $num is $(square $num)" + num=11 ++ square 11 ++ local -i num=11 ++ echo 121 + echo 'Square of 11 is 121' Square of 11 is 121 </pre><br /> <p>You can also set it when calling an external script without modifying the script itself:</p> <pre> β― bash -x ./half_broken_script_to_be_debugged.sh </pre><br /> <p>Let's do that on one of the example scripts we covered earlier:</p> <pre> β― bash -x ./grandmaster.sh + bash -x ./grandmaster.sh + echo Learn You a Haskell + echo for Great Good + exec + exec + declare LINE1 LINE2 + read LINE1 + read LINE2 + echo First line: Learn You a Haskell First line: Learn You a Haskell + echo Second line: for Great Good Second line: for Great Good + exec β― </pre><br /> <h3>-e</h3> <p>This is a very important option you want to use when you are paranoid. This means, you should always "set -e" in your scripts when you need to make absolutely sure that your script runs successfully (with that I mean that no command should exit with an unexpected status code).</p> <p>Ok, let's dig deeper:</p> <pre> β― help set | grep -- -e -e Exit immediately if a command exits with a non-zero status. </pre><br /> <p>As you can see in the following example, the Bash terminates after the execution of "grep" as "foo" is not matching "bar". Therefore, grep exits with 1 (unsuccessfully) and the shell aborts. And therefore, "bar" will not be printed out anymore:</p> <pre> β― bash -c 'set -e; echo hello; grep -q bar <<< foo; echo bar' hello β― echo $? 1 </pre><br /> <p>Whereas the outcome changes when the regex matches:</p> <pre> β― bash -c 'set -e; echo hello; grep -q bar <<< barman; echo bar' hello bar β― echo $? 0 </pre><br /> <p>So does it mean that grep will always make the shell terminate whenever its exit code isn't 0? This will render "set -e" quite unusable. Frankly, there are other commands where an exit status other than 0 should not terminate the whole script abruptly. Usually, what you want is to branch your code based on the outcome (exit code) of a command:</p> <pre> β― bash -c 'set -e > grep -q bar <<< foo > if [ $? -eq 0 ]; then > echo "matching" > else > echo "not matching" > fi' β― echo $? 1 </pre><br /> <p>...but the example above won't reach any of the branches and won't print out anything, as the script terminates right after grep.</p> <p>The proper solution is to use grep as an expression in a conditional (e.g. in an if-else statement):</p> <pre> β― bash -c 'set -e > if grep -q bar <<< foo; then > echo "matching" > else > echo "not matching" > fi' not matching β― echo $? 0 β― bash -c 'set -e > if grep -q bar <<< barman; then > echo "matching" > else > echo "not matching" > fi' matching β― echo $? 0 </pre><br /> <p>You can also temporally undo "set -e" if there is no other way:</p> <pre> β― cat ./e.sh #!/usr/bin/env bash set -e foo () { local arg="$1"; shift if [ -z "$arg" ]; then arg='You!' fi echo "Hello $arg" } bar () { # Temporally disable e set +e local arg="$1"; shift # Enable e again. set -e if [ -z "$arg" ]; then arg='You!' fi echo "Hello $arg" } # Will succeed bar World foo Universe bar # Will terminate the script foo β― ./e.sh Hello World Hello Universe Hello You! </pre><br /> <p>Why does calling "foo" with no arguments make the script terminate? Because as no argument was given, the "shift" won't have anything to do as the argument list $@ is empty, and therefore "shift" fails with a non-zero status.</p> <p>Why would you want to use "shift" after function-local variable assignments? Have a look at my personal Bash coding style guide for an explanation :-):</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-05-16-personal-bash-coding-style-guide.html">./2021-05-16-personal-bash-coding-style-guide.html</a><br /> <h3>pipefail</h3> <p>The pipefail option makes it so that not only the exit code of the last command of the pipe counts regards its exit code but any command of the pipe:</p> <pre> β― help set | grep pipefail -A 2 pipefail the return value of a pipeline is the status of the last command to exit with a non-zero status, or zero if no command exited with a non-zero status </pre><br /> <p>The following greps for paul in passwd and converts all lowercase letters to uppercase letters. The exit code of the pipe is 0, as the last command of the pipe (converting from lowercase to uppercase) succeeded:</p> <pre> β― grep paul /etc/passwd | tr '[a-z]' '[A-Z]' PAUL:X:1000:1000:PAUL BUETOW:/HOME/PAUL:/BIN/BASH β― echo $? 0 </pre><br /> <p>Let's look at another example, where "TheRock" doesn't exist in the passwd file. However, the pipes exit status is still 0 (success). This is so because the last command ("tr" in this case) still succeeded. It is just that it didn't get any input on stdin to process:</p> <pre> β― grep TheRock /etc/passwd β― echo $? 1 β― grep TheRock /etc/passwd | tr '[a-z]' '[A-Z]' β― echo $? 0 </pre><br /> <p>To change this behaviour, pipefile can be used. Now, the pipes exit status is 1 (fail), because the pipe contains at least one command (in this case grep) which exited with status 1:</p> <pre> β― set -o pipefail β― grep TheRock /etc/passwd | tr '[a-z]' '[A-Z]' β― echo $? 1 </pre><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>How to stay sane as a DevOps person </title> <link href="gemini://foo.zone/gemfeed/2021-12-26-how-to-stay-sane-as-a-devops-person.gmi" /> <id>gemini://foo.zone/gemfeed/2021-12-26-how-to-stay-sane-as-a-devops-person.gmi</id> <updated>2021-12-26T12:02:02+00:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>Log4shell (CVE-2021-44228) made it clear, once again, that working in information technology is not an easy job (especially when you are a DevOps/SRE or a security engineer). I thought it would be interesting to summarize a few techniques to help you to relax.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>How to stay sane as a DevOps person </h1> <pre> ) ) (( ( ( )) ) ) ) // ( _ ( __ ( ~->> ,-----' |__,_~~___<'__`)-~__--__-~->> < | // : | -__ ~__ o)____)),__ - '> >- > | // : |- \_ \ -\_\ -\ \ \ ~\_ \ ->> - , >> | // : |_~_\ -\__\ \~'\ \ \, \__ . -<- >> `-----._| ` -__`-- - ~~ -- ` --~> > _/___\_ //)_`// | ||] _____[_______]_[~~-_ (.L_/ || [____________________]' `\_,/'/ ||| / ||| ,___,'./ ||| \ |||,'______| ||| / /|| I==|| ||| \ __/_|| __||__ -----||-/------`-._/||-o--o---o--- ~~~~~' </pre><br /> <p class="quote"><i>Published by Paul at 2021-12-26, last updated at 2022-01-12</i></p> <p>Log4shell (CVE-2021-44228) made it clear, once again, that working in information technology is not an easy job (especially when you are a DevOps person). I thought it would be interesting to summarize a few techniques to help you to relax.</p> <p>(PS: When I mean DevOps, I also mean Site Reliability Engineers and Sysadmins. I believe SRE, DevOps Engineer and Sysadmin are just synonym titles for the same job).</p> <a class="textlink" href="https://en.wikipedia.org/wiki/Log4Shell">https://en.wikipedia.org/wiki/Log4Shell</a><br /> <h2>Set clear expectations</h2> <p>It's important to set clear expectations. It can be difficult to guess what others expect or don't expect from you. If you know exactly what you are supposed to do, you can work towards a specific goal and don't worry about all the other noise so much.</p> <p>However, if you are in a more senior position, it is expected from you to plan your tasks by yourself to a large degree and also be flexible, so you can react quickly to new situations (e.g. resolving incidents). Also, to a large degree, you have to prioritise your work by yourself. This can overthrow all of your plans. In extreme cases, it can help to share your plans with your team so that everyone is on the same page. Afterwards, be the execution machine. People are happy when they see that stuff gets done. Communicate clearly all critical work you do. This will capture all the technical debt there might be. It does not help in the long run if things are fixed in the background without any visibility. </p> <p>Due to politeness, many people are not setting clear expectations. I personally may sound sometimes "too German" when setting expectations, but so far nobody complained, and I have even received positive feedback about it.</p> <h2>Always respond to requests but set expectations and boundaries</h2> <p>There are many temptations to get side-tracked by other projects and/or issues. It is important to set boundaries here. But always answer to all requests as nothing is more frustrating than asking a person and never getting any answer back. This is especially the case when everyone is working form home where people are using tools such as Slack and E-Mail for most of their communications.</p> <h3>Dealing with requests</h3> <p>If the request is urgent, and you have the capacity to help, probably you should help. If it's not urgent, maybe ask to pospone the request (e.g. ask to create a ticket, so that someone from your team can work on it later).</p> <p>If the request is urgent, but you don't have the knowledge or the capacity to help, try to defer to a colleague who might be able to help. You could also provide some quick tips and hints, so that the requester can resolve the issue by himself. Make it transparent why you might not have the time right now, as this can help the person to review his own priorities or to escalate. </p> <h3>Escalation is only a tool</h3> <p>Never make or take an escalation personally. The only forms of escalation should be due to technical issues or lack of resources. An escalation then becomes like a math equation and does not need human resources involved. So de-facto, an escalation is nothing negative, but just a process people can follow to form decision-making. In a good company escalations tend to be an exception, though. Staff knows how to deal with the things by themselves without bothering management too much. </p> <h2>Think positively</h2> <p>If times are very stressful, think that it could always be worse:</p> <ul> <li>Nobody is dying, we are only doing some IT stuff.</li> <li>Your time after work is your own time, look forward to time with your family or a nice dinner or your favourite sports class.</li> <li>You probably will never run out of work in the IT sector. So you will always be able to make a living.</li> <li>Your IT job and life is actually pretty good (compared to a homeless person for example). You are probably part of the world's top 1% regarding life standard.</li> </ul> <h2>Go slower even if you could go faster</h2> <p>When working in a team, you may feel that you could get done things faster when you just did everything by yourself. This can be a bit frustrating at times, as you might need to work late hours and also might need to explain things over and over again to others. Also, you could be the one who needs to get things explained over and over again as you are not so familiar with the topic (yet). You will appreciate it if the other person is slowing down for you a bit.</p> <h3>You work in a team</h3> <p>Security is a team sport. So slow down and make sure that everyone is on track with the goals. You can go full-speed with your very own subtasks, though. Not everyone knows how to use all the tools so well like a full-time DevOps person. As a DevOps person, you are not a security expert, though. Security experts are different people in your company, but DevOps will be the main tribe deploying mitigations (following the security recommendations) and management will be the main tribe coordinating all the efforts. </p> <p>So even if you think that you can do everything faster by your own, can you really? You probably don't know what you don't know about IT security. The more you know about it, the more you know about what you don't know.</p> <h3>Don't rush</h3> <p>Slowing down also helps to prevent errors. Don't rush your tasks, even if they are urgent. Try to be quick, but don't rush them. Maybe you are writing a script to mitigate a production issue. You could others peer review that script, for example. Their primary programming language may not be the same (e.g. Golang vs Perl), but they would understand the logic. Or ask another DevOps person from your company with good scripting skills review your mitigation, but he then may lack the domain knowledge of the software you are patching. So in either case, the review will take a bit longer as the reviewer might not be an expert in everything.</p> <p>So relax, don't always expect immediate results. Set clear and reasonable timelines for the management about the mitigations. You are not a superhero who has to do everything by yourself. Sometimes, you will miss a deadline. But that will have good reasons. Don't rush to complete just to meet a deadline. </p> <a class="textlink" href="https://foo.zone/gemfeed/2021-10-22-defensive-devops.html">Read also "Defensive DevOps" about deploying mitigation scripts.</a><br /> <h2>You are not a superhero</h2> <p>Always keep that in mind. You can't solve all problems by your own. Maybe you could, but that would be a lot of additional stress (and this will reflect to your personal life). Also, Superman and Wonder Woman receive much higher salaries than you will ever do ;-).</p> <p>I have been a superhero multiple times mitigating critical incidents, and I was proud about it in those moments. But actually, I am not proud looking at those retrospectively as for everything there should be other people around who should be able to resolve an incident. No company should rely on a single person, there must always be a substitute. You are not a superhero and as harsh as it sounds, everyone is replaceable. Every superhero can be replaced with another superhero. The only thing it takes to become a superhero is time to get to know the infrastructure and tools very well, paired with work dedication.</p> <p>This doesn't mean, that you shouldn't try your best. But you don't need to try to be the superhero. Maybe someone else will be the superhero, but that's OK as long as it's not always the same person every time. Everyone can have a good day after all. If I could choose between being a superhero or having a good night sleep, I would probably prefer the sleep. </p> <h3>Give away some of your superpowers</h3> <p>If you are a superhero, try to give away some of your superpowers, so that you can relax in the evening knowing that others (e.g. the current on-call engineers) know how to tackle things. Every member of the team needs to do DevOps (even the team managers, in my humble opinion). Some may be less experienced than others or have other expertises, but to counteract this you could document the recurring tasks so that they are easy to follow (which then later could be either automated away or, even better, fully fixed).</p> <p>On the other side, if you are a DevOps person, try to sneak into other people's shoes too. For example, you might not be an expert in Java programming, but a lot of the infrastructure is programmed in Java. This is where usually the Software Developers and Engineers shine. But if you know how to read, debug and even extend Java code too (by learning from the Software Developer superheroes), then your will only benefit from it. </p> <p>So you are not a superhero. Or, if you are a superhero, then all colleagues should be superheroes too.</p> <h2>Don't jump on all problems immediately</h2> <p>In a perfect world, every member of a team comes along with the same strengths and skills. But in reality, everyone is different. </p> <p>In order to distribute the troubleshooting skills across the team, you should not jump on every problem immediately. Leave some space for others to resolve the issue. This is where the best learning happens. Nobody will learn from you when you solve all problems. People might learn something after you explained what you did, but the takeaways will be minimal compared to when people try to resolve issues by themselves. Always be available for questions which will help your colleagues to steer into the right direction and if you think it helps, give them some tips resolving the issue, even if they didn't ask for it. Sometimes, engineers are too proud to ask. </p> <p>The whole paragraph changes when there is an issue you don't know how to resolve. Jump on it, so you can learn from it. But also ask for advice if you are unsure about it.</p> <p>If the issue is a very critical one, then you might better off trying to resolve it as fast as possible with your full powers in order to avoid any major damage to the company. This, of course, only works if you know how to resolve it quickly. So, don't leave others with not much experience yet looking at it. If possible, work with the team to resolve the issue. Unfortunately, solving it with the team is not always the fastest way. So in this particular circumstance, the company may be better off being saved by a single superhero. Make sure that the problem will not occur again or, at least, that others can fix it the next time without Superman flying by.</p> <h2>Force breaks; and shutdown now</h2> <p>Be strict about your time off. Nowadays, tech workers check their messages also out of office hours and are reachable 24/7. This really should only be the case when you are on-call, to be honest (or if you work for a startup). All other out-of-office time is owned by you and not your employer. You have signed an 40 hour/week and not 7 days/week contract. Of course, there will be always some sort of flexibility and exceptions. You might need to work over the weekend to get a migration done or a problem solved. But to balance it out, you should have other days off as substitutes.</p> <p>It's important to shut down your brain from work during your breaks (be strict with your breaks, leave your desk for lunch or for a walk early afternoon and if you aren't on-call also don't take your work-phone with you). You will be happier and also much more energized and productive in the afternoon. Also, when you are reachable 24/7, your colleagues will start thinking that you don't have anything more important to do than work.</p> <h2>Block time every day for personal advance</h2> <p>It does not matter how many tasks are in your backlog or how many issues are to be tackled. *Always* find time for personal advance. The most issues aren't critical anyway and can wait a bit. At the end of the day, you will have a nice feeling that you have accomplished something meaningful. This can be an interesting project or learning a new technology you are interested in. Of course, there must be consensus with your manager (unless you do that kind of thing in your personal time of course). </p> <p>If you are too busy at work and just can't block time, then maybe it's time to think about alternatives. But before you do that, probably there is something else you can do. Perhaps you just think you can't block time, but you would be positively surprised to hear from your manager that he will fully support you. Of course, he won't agree to you working full-time on your pet projects. But a certain portion of your time should be allocated for personal advance. After all, your employer also want's you to stay happy so that you don't look for alternatives. It's of everyone's interest that you like your job and stay motivated. The more you are motivated, the more productive you are. The more productive you are, the more valuable you are for the company.</p> <h2>More</h2> <p>Another blog post worth reading:</p> <a class="textlink" href="https://unixsheikh.com/articles/how-to-stay-sane-in-todays-world-of-tech.html">https://unixsheikh.com/articles/how-to-stay-sane-in-todays-world-of-tech.html</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Bash Golf Part 1</title> <link href="gemini://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.gmi" /> <id>gemini://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.gmi</id> <updated>2021-11-29T14:06:14+00:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>This is the first blog post about my Bash Golf series. This series is random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (in German language) blog, which I translated and refreshed with some new content.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Bash Golf Part 1</h1> <pre> '\ . . |>18>> \ . ' . | O>> . 'o | \ . | /\ . | / / .' | jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Art by Joan Stark </pre><br /> <p class="quote"><i>Published by Paul at 2021-11-29, last updated at 2022-01-05</i></p> <p>This is the first blog post about my Bash Golf series. This series is about random Bash tips, tricks and weirdnesses I came across. It's a collection of smaller articles I wrote in an older (in German language) blog, which I translated and refreshed with some new content.</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-11-29-bash-golf-part-1.html">Bash Golf Part 1 (you are reading this atm.)</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2022-01-01-bash-golf-part-2.html">Bash Golf Part 2</a><br /> <h2>TCP/IP networking</h2> <p>You probably know the Netcat tool, which is a swiss army knife for TCP/IP networking on the command line. But did you know that the Bash natively supports TCP/IP networking?</p> <p>Have a look here how that works:</p> <pre> β― cat < /dev/tcp/time.nist.gov/13 59536 21-11-18 08:09:16 00 0 0 153.6 UTC(NIST) * </pre><br /> <p>The Bash treats /dev/tcp/HOST/PORT in a special way so that it is actually establishing a TCP connection to HOST:PORT. The example above redirects the TCP output of the time-server to cat and cat is printing it on standard output (stdout).</p> <p>A more sophisticated example is firing up an HTTP request. Let's create a new read-write (rw) file descriptor (fd) 5, redirect the HTTP request string to it, and then read the response back:</p> <pre> β― exec 5<>/dev/tcp/google.de/80 β― echo -e "GET / HTTP/1.1\nhost: google.de\n\n" >&5 β― cat <&5 | head HTTP/1.1 301 Moved Permanently Location: http://www.google.de/ Content-Type: text/html; charset=UTF-8 Date: Thu, 18 Nov 2021 08:27:18 GMT Expires: Sat, 18 Dec 2021 08:27:18 GMT Cache-Control: public, max-age=2592000 Server: gws Content-Length: 218 X-XSS-Protection: 0 X-Frame-Options: SAMEORIGIN </pre><br /> <p>You would assume that this also works with the ZSH, but it doesn't. This is one of the few things which don't work with the ZSH but in the Bash. There might be plugins you could use for ZSH to do something similar, though.</p> <h2>Process substitution</h2> <p>The idea here is, that you can read the output (stdout) of a command from a file descriptor:</p> <pre> β― uptime # Without process substitution 10:58:03 up 4 days, 22:08, 1 user, load average: 0.16, 0.34, 0.41 β― cat <(uptime) # With process substitution 10:58:16 up 4 days, 22:08, 1 user, load average: 0.14, 0.33, 0.41 β― stat <(uptime) File: /dev/fd/63 -> pipe:[468130] Size: 64 Blocks: 0 IO Block: 1024 symbolic link Device: 16h/22d Inode: 468137 Links: 1 Access: (0500/lr-x------) Uid: ( 1001/ paul) Gid: ( 1001/ paul) Context: unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 Access: 2021-11-20 10:59:31.482411961 +0000 Modify: 2021-11-20 10:59:31.482411961 +0000 Change: 2021-11-20 10:59:31.482411961 +0000 Birth: - </pre><br /> <p>This example doesn't make any sense practically speaking, but it clearly demonstrates how process substitution works. The standard output pipe of "uptime" is redirected to an anonymous file descriptor. That fd then is opened by the "cat" command as a regular file.</p> <p>A useful use case is displaying the differences of two sorted files:</p> <pre> β― echo a > /tmp/file-a.txt β― echo b >> /tmp/file-a.txt β― echo c >> /tmp/file-a.txt β― echo b > /tmp/file-b.txt β― echo a >> /tmp/file-b.txt β― echo c >> /tmp/file-b.txt β― echo X >> /tmp/file-b.txt β― diff -u <(sort /tmp/file-a.txt) <(sort /tmp/file-b.txt) --- /dev/fd/63 2021-11-20 11:05:03.667713554 +0000 +++ /dev/fd/62 2021-11-20 11:05:03.667713554 +0000 @@ -1,3 +1,4 @@ a b c +X β― echo X >> /tmp/file-a.txt # Now, both files have the same content again. β― diff -u <(sort /tmp/file-a.txt) <(sort /tmp/file-b.txt) β― </pre><br /> <p>Another example is displaying the differences of two directories:</p> <pre> β― diff -u <(ls ./dir1/ | sort) <(ls ./dir2/ | sort) </pre><br /> <p>More (Bash golfing) examples:</p> <pre> β― wc -l <(ls /tmp/) /etc/passwd <(env) 24 /dev/fd/63 49 /etc/passwd 24 /dev/fd/62 97 total β― β― while read foo; do > echo $foo > done < <(echo foo bar baz) foo bar baz β― </pre><br /> <p>So far, we only used process substitution for stdout redirection. But it also works for stdin. The following two commands result into the same outcome, but the second one is writing the tar data stream to an anonymous file descriptor which is substituted by the "bzip2" command reading the data stream from stdin and compressing it to its own stdout, which then gets redirected to a file:</p> <pre> β― tar cjf file.tar.bz2 foo β― tar cjf >(bzip2 -c > file.tar.bz2) foo </pre><br /> <p>Just think a while and see whether you understand fully what is happening here.</p> <h2>Grouping</h2> <p>Command grouping can be quite useful for combining the output of multiple commands:</p> <pre> β― { ls /tmp; cat /etc/passwd; env; } | wc -l 97 β― ( ls /tmp; cat /etc/passwd; env; ) | wc -l 97 </pre><br /> <p>But wait, what is the difference between curly braces and normal braces? I assumed that the normal braces create a subprocess whereas the curly ones don't, but I was wrong:</p> <pre> β― echo $ 62676 β― { echo $; } 62676 β― ( echo $; ) 62676 </pre><br /> <p>One difference is, that the curly braces require you to end the last statement with a semicolon, whereas with the normal braces you can omit the last semicolon:</p> <pre> β― ( env; ls ) | wc -l 27 β― { env; ls } | wc -l > > ^C </pre><br /> <p>In case you know more (subtle) differences, please write me an E-Mail and let me know.</p> <p class="quote"><i>Update: A reader sent me an E-Mail and pointed me to the Bash manual page, which explains the difference between () and {} (I should have checked that by myself):</i></p> <pre> (list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT below). Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list. { list; } list is simply executed in the current shell environment. list must be terβ minated with a newline or semicolon. This is known as a group command. The return status is the exit status of list. Note that unlike the metacharacβ ters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shell metacharacter. </pre><br /> <p>So I was right that () is executed in a subprocess. But why does $ not show a different PID? Also here (as pointed out by the reader) is the answer in the manual page:</p> <pre> $ Expands to the process ID of the shell. In a () subshell, it expands to the process ID of the current shell, not the subshell. </pre><br /> <p>If we want print the subprocess PID, we can use the BASHPID variable:</p> <pre> β― echo $BASHPID; { echo $BASHPID; }; ( echo $BASHPID; ) 1028465 1028465 1028739 </pre><br /> <h2>Expansions</h2> <p>Let's start with simple examples:</p> <pre> β― echo {0..5} 0 1 2 3 4 5 β― for i in {0..5}; do echo $i; done 0 1 2 3 4 5 </pre><br /> <p>You can also add leading 0 or expand to any number range:</p> <pre> β― echo {00..05} 00 01 02 03 04 05 β― echo {000..005} 000 001 002 003 004 005 β― echo {201..205} 201 202 203 204 205 </pre><br /> <p>It also works with letters:</p> <pre> β― echo {a..e} a b c d e </pre><br /> <p>Now it gets interesting. The following takes a list of words and expands it so that all words are quoted:</p> <pre> β― echo \"{These,words,are,quoted}\" "These" "words" "are" "quoted" </pre><br /> <p>Let's also expand to the cross product of two given lists:</p> <pre> β― echo {one,two}\:{A,B,C} one:A one:B one:C two:A two:B two:C β― echo \"{one,two}\:{A,B,C}\" "one:A" "one:B" "one:C" "two:A" "two:B" "two:C" </pre><br /> <p>Just because we can:</p> <pre> β― echo Linux-{one,two,three}\:{A,B,C}-FreeBSD Linux-one:A-FreeBSD Linux-one:B-FreeBSD Linux-one:C-FreeBSD Linux-two:A-FreeBSD Linux-two:B-FreeBSD Linux-two:C-FreeBSD Linux-three:A-FreeBSD Linux-three:B-FreeBSD Linux-three:C-FreeBSD </pre><br /> <h2>- aka stdin and stdout placeholder</h2> <p>Some commands and Bash builtins use "-" as a placeholder for stdin and stdout:</p> <pre> β― echo Hello world Hello world β― echo Hello world | cat - Hello world β― cat - <<ONECHEESEBURGERPLEASE Hello world ONECHEESEBURGERPLEASE Hello world β― cat - <<< 'Hello world' Hello world </pre><br /> <p>Let's walk through all three examples from the above snippet:</p> <ul> <li>The first example is obvious (the Bash builtin "echo" prints its arguments to stdout).</li> <li>The second pipes "Hello world" via stdout to stdin of the "cat" command. As cat's argument is "-" it reads its data from stdin and not from a regular file named "-". So "-" has a special meaning for cat.</li> <li>The third and fourth examples are interesting as we don't use a pipe as of "|" but a so-called HERE-document and a HERE-string. But the end results are the same.</li> </ul> <p>The "tar" command understands "-" too. The following example tars up some local directory and sends the data to stdout (this is what "-f -" commands it to do). stdout then is piped via an SSH session to a remote tar process (running on buetow.org) and reads the data from stdin and extracts all the data coming from stdin (as we told tar with "-f -") on the remote machine:</p> <pre> β― tar -czf - /some/dir | ssh hercules@buetow.org tar -xzvf - </pre><br /> <p>This is yet another example of using "-", but this time using the "file" command:</p> <pre> $ head -n 1 grandmaster.sh #!/usr/bin/env bash $ file - < <(head -n 1 grandmaster.sh) /dev/stdin: a /usr/bin/env bash script, ASCII text executable </pre><br /> <p>Some more golfing:</p> <pre> $ cat - hello hello ^C $ file - #!/usr/bin/perl /dev/stdin: Perl script text executable </pre><br /> <h2>Alternative argument passing</h2> <p>This is a quite unusual way of passing arguments to a Bash script:</p> <pre> β― cat foo.sh #/usr/bin/env bash declare -r USER=${USER:?Missing the username} declare -r PASS=${PASS:?Missing the secret password for $USER} echo $USER:$PASS </pre><br /> <p>So what we are doing here is to pass the arguments via environment variables to the script. The script will abort with an error when there's an undefined argument.</p> <pre> β― chmod +x foo.sh β― ./foo.sh ./foo.sh: line 3: USER: Missing the username β― USER=paul ./foo.sh ./foo.sh: line 4: PASS: Missing the secret password for paul β― echo $? 1 β― USER=paul PASS=secret ./foo.sh paul:secret </pre><br /> <p>You have probably noticed this *strange* syntax:</p> <pre> β― VARIABLE1=value1 VARIABLE2=value2 ./script.sh </pre><br /> <p>That's just another way to pass environment variables to a script. You can write it as well as like this:</p> <pre> β― export VARIABLE1=value1 β― export VARIABLE2=value2 β― ./script.sh </pre><br /> <p>But the downside of it is that the variables will also be defined in your current shell environment and not just in the scripts sub-process.</p> <h2>: aka the null command</h2> <p>First, let's use the "help" Bash built-in to see what it says about the null command:</p> <pre> β― help : :: : Null command. No effect; the command does nothing. Exit Status: Always succeeds. </pre><br /> <p>PS: IMHO, people should use the Bash help more often. It is a very useful Bash reference. Too many fallbacks to a Google search and then land on Stack Overflow. Sadly, there's no help built-in for the ZSH shell though (so even when I am using the ZSH I make use of the Bash help as most of the built-ins are compatible). </p> <p>OK, back to the null command. What happens when you try to run it? As you can see, absolutely nothing. And its exit status is 0 (success):</p> <pre> β― : β― echo $? 0 </pre><br /> <p>Why would that be useful? You can use it as a placeholder in an endless while-loop:</p> <pre> β― while : ; do date; sleep 1; done Sun 21 Nov 12:08:31 GMT 2021 Sun 21 Nov 12:08:32 GMT 2021 Sun 21 Nov 12:08:33 GMT 2021 ^C β― </pre><br /> <p>You can also use it as a placeholder for a function body not yet fully implemented, as an empty function ill result in a syntax error:</p> <pre> β― foo () { } -bash: syntax error near unexpected token `}' β― foo () { :; } β― foo β― </pre><br /> <p>Or use it as a placeholder for not yet implemented conditional branches:</p> <pre> β― if foo; then :; else echo bar; fi </pre><br /> <p>Or (not recommended) as a fancy way to comment your Bash code:</p> <pre> β― : I am a comment and have no other effect β― : I am a comment and result in a syntax error () -bash: syntax error near unexpected token `(' β― : "I am a comment and don't result in a syntax error ()" β― </pre><br /> <p>As you can see in the previous example, the Bash still tries to interpret some syntax of all text following after ":". This can be exploited (also not recommended) like this:</p> <pre> β― declare i=0 β― $[ i = i + 1 ] bash: 1: command not found... β― : $[ i = i + 1 ] β― : $[ i = i + 1 ] β― : $[ i = i + 1 ] β― echo $i 4 </pre><br /> <p>For these kinds of expressions it's always better to use "let" though. And you should also use $((...expression...)) instead of the old (deprecated) way $[ ...expression... ] like this example demonstrates:</p> <pre> β― declare j=0 β― let j=$((j + 1)) β― let j=$((j + 1)) β― let j=$((j + 1)) β― let j=$((j + 1)) β― echo $j 4 </pre><br /> <h2>(No) floating point support</h2> <p>I have to give a plus-point to the ZSH here. As the ZSH supports floating point calculation, whereas the Bash doesn't:</p> <pre> β― bash -c 'echo $(( 1/10 ))' 0 β― zsh -c 'echo $(( 1/10 ))' 0 β― bash -c 'echo $(( 1/10.0 ))' bash: line 1: 1/10.0 : syntax error: invalid arithmetic operator (error token is ".0 ") β― zsh -c 'echo $(( 1/10.0 ))' 0.10000000000000001 β― </pre><br /> <p>It would be nice to have native floating point support for the Bash too, but you don't want to use the shell for complicated calculations anyway. So it's fine that Bash doesn't have that, I guess. </p> <p>In the Bash you will have to fall back to an external command like "bc" (the arbitrary precision calculator language):</p> <pre> β― bc <<< 'scale=2; 1/10' .10 </pre><br /> <p>See you later for the next post of this series. E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Defensive DevOps</title> <link href="gemini://foo.zone/gemfeed/2021-10-22-defensive-devops.gmi" /> <id>gemini://foo.zone/gemfeed/2021-10-22-defensive-devops.gmi</id> <updated>2021-10-22T10:02:46+03:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>I have seen many different setups and infrastructures during my carreer. My roles always included front-line ad-hoc fire fighting production issues. This often involves identifying and fixing these under time pressure, without the comfort of 2-week-long SCRUM sprints and without an exhaustive QA process. I also wrote a lot of code (Bash, Ruby, Perl, Go, and a little Java), and I followed the typical software development process, but that did not always apply to critical production issues.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Defensive DevOps</h1> <pre> c=====e H ____________ _,,_H__ (__((__((___() //| | (__((__((___()()_____________________________________// |ACME | (__((__((___()()()------------------------------------' |_____| ASCII Art by Clyde Watson </pre><br /> <p class="quote"><i>Published by Paul at 2021-10-22</i></p> <p>I have seen many different setups and infrastructures during my carreer. My roles always included front-line ad-hoc fire fighting production issues. This often involves identifying and fixing these under time pressure, without the comfort of 2-week-long SCRUM sprints and without an exhaustive QA process. I also wrote a lot of code (Bash, Ruby, Perl, Go, and a little Java), and I followed the typical software development process, but that did not always apply to critical production issues.</p> <p>Unfortunately, no system is 100% reliable, and you can never be prepared for a subset of the possible problem-space. IT infrastructures can be complex. Not even mentioning Kubernetes yet, a Microservice-based infrastructure can complicate things even further. You can take care of 99% of all potential problems by following all DevOps best practices. Those best practices are not the subject of this blog post; this post is about the sub 1% of the issues arising from nowhere you can't be prepared for. </p> <p>Is there a software bug in a production, even though the software passed QA (after all, it is challenging to reproduce production behaviour in an artificial testing environment) and the software didn't show any issues running in production until a special case came up just now after it got deployed to production a week ago? Are there multiple hardware failure happening which causes loss of service redundancy or data inaccessibility? Is the automation of external customers connected to our infrastructure putting unexpectedly extra pressure on your grid, driving higher latencies and putting the SLAs at risk? You bet the solution is: Sysadmins, SREs and DevOps Engineers to the rescue. </p> <p>You agree that fixing production issues this way is not proactive but rather reactive. I prefer to call it defensive, though, as you "defend" your system against a production issue. But, at the same time, you have to take a cautious (defensive) approach to fix it, as you don't want to make things worse. </p> <p>Over time, I have compiled a list of fire-fighting automation strategies, which I would like to share here. </p> <h2>Meet Defensive DevOps</h2> <p>Defensive DevOps is a term I invented by myself. I define it this way:</p> <ul> <li>It is the practice of automating production issues away ASAP as they appear. </li> <li>For rapid development, ignore most of the CI and QA best practices.</li> <li>Ignore the SCRUM process (if your team does SCRUM), as it will take too long to implement a solution. </li> <li>Be extremely careful (defensive) executing any fixing code in production, taking all failure scenarios into consideration and always have a rollback plan at hand. </li> <li>Still deliver a high-quality solution so that no customer will ever notice that there was an issue in the first place.</li> </ul> <p>That sounds a bit crazy, but this is, unfortunately, in rare occasions the reality. As the question is not whether production issues will happen, the question is WHEN they will happen. Every large provider, such as Google, Netflix, and so on, suffered significant outages before, and I firmly believe that their engineers know what they are doing. But you can prepare for the unexpected only to a certain degree.</p> <h2>Don't fully automate from the beginning</h2> <p>Do you have to solve problem X? The best solution would be to fully automate it away, correct? No, the best way is to fix problem X manually first. Does the problem appear on one server or on thousand servers? The scale does not matter here. The point is that you should fix the problem at least once manually, so you understand the problem and how to solve it before implementing automation around it.</p> <p>You should also have a short meeting with your team. Every person may has a different perspective and can give valuable input for determining the best strategy. But, again, keep the session short and efficient. Focus on the facts. After all, you are the domain expert and you probably know what you are doing.</p> <p>Once you understand the problem, fix it on a different server again. This time maybe write a small program or script. Semi-automate the process, but don't fully automate it yet. Start the semi-automated solution manually on a couple of more servers and observe the result. You want to gain more confidence that this really solved the problem. This can take a couple of hours manually running it over and over again. During that process, you will improve your script iteratively.</p> <h2>Develop code directly on production systems</h2> <p>You have to develop code directly on a production system. This sounds a bit controversial, but you want to get a working solution ASAP, and there is a very high chance that you can't reproduce problem X in a development or QA environment. Or at least it will consume significant effort and time to reproduce the problem, and by the time your code is ready, it's already too late. So the most practical solution is to directly develop your solution against a production system with the problem at hand. </p> <p>You might not have your full-featured IDE available on a production system, but a text editor, such as Vim (or Neovim), is sufficient for writing scripts. Some editors allow you to edit files remotely. With Vim you can accomplish it with "vim scp://SERVER///path/to/file.sh". Every time you save the file, it will be automatically uploaded via SCP to the server. From there, you can execute it directly. This comes with the additional benefits of still having access to all the Vim plugins installed locally, which you might not have installed on any production machines. This approach also removes any network delays you might experience when running your editor directly on a remote machine. </p> <p>Unfortunately, it will be a bit more complicated when you rely on code reviews (e.g. in a FIPS environment). Pair-programming could be the solution here.</p> <h3>Don't make it worse</h3> <p>You want to triple-check that your script is not damaging your system even further. You might introduce a bug to the code, so there should always be a way to roll back any permanent change it causes. You have to program it in a defensive style:</p> <ul> <li>Make sure that all that your script does is logged to a file. Best, when it's a Bash script, use "set -x". This makes the script print all commands as they are executed. Always write the output to a file. This helps to verify that your script is working as intended. The log output should always include timestamps for each significant operation performed.</li> <li>Make sure that no command executed by your script is failing. You should use "set -e" in your script, which makes the whole script terminate immediately if a command in it exits with a non-zero status. This will save you from apparent errors, e.g. trying to move files to a non-existent directory or trying to operate on a non-existent file. </li> <li>Your script should never delete any files. If solving problem X involves deleting files, don't delete them but rename or move them to a separate directory so that these can be recovered just in case. </li> <li>When you rename/move files around, always add a timestamp to a directory or the end of the file name (e.g. with "mv FILE FILE.$(date +%s"). This ensures that a backup never gets overwritten by another backup during a subsequential run of your script. Alternatively, before renaming a file, check whether the destination file already exists or not. </li> <li>When solving problem X involves manipulating files in place, be ultra-cautious. Best try to avoid in-place file manipulation. But if you really have to, you should, if disk space permits, always create a backup of the file first. Depending on the particular case, you might add a timestamp to the backup or only keep the very first initial backup of a file.</li> <li>You should implement a "--dry" switch in your script. When you run the script in dry mode, it won't manipulate anything on the system, but it would only print out what it would do. Always run your script in dry mode before running it for real. </li> </ul> <p>Furthermore, when you write Bash script, always run the tool ShellSheck (https://shellshock.io/) on it. This helps to catch many potential issues before applying it in production. </p> <h2>Test your code</h2> <p>You probably won't have time for writing unit tests. But what you can do is to pedantically test your code manually. But you have to do the testing on a production machine. So how can you test your code in production without causing more damage? </p> <p>Your script should be idempotent. This means you can run it infinite times in a row, and you will always get the same result. For example, in the first run of the script, a file A get's renamed to A.backup. The second time you run the script, it attempts to do the same, but it recognises that A has already been renamed to A.backup and then it is skipping that step. This is very helpful for manually testing, as it means that you can re-run the script every time you extended it. You should dry-run the script at least once before running it for real. You can apply the same principle for almost all features you add to the code. </p> <p>You may also want to inject manual negative testing into your script. For example, you want to run a particular function F in your script but only if a certain pre-condition is met, and you want to ensure that the code branching works as expected. The pre-condition check could be pretty complex (e.g. N log messages containing a specific warning string are found in the applications logs, but only on the cluster leader server). You can flip the switch directly in the code manually (e.g. run F only, when the pre-condition isn't met) and then perform a dry run of the script and study the output. Once done, flip the switch back to its correct configuration. For double insurance, test the same on a different server type (e.g. on a follower and not on a leader system).</p> <p>By following these principles, you test every line of code while you are developing on it. </p> <h2>Automation</h2> <p>At one point, you will be tired of manually running your script and also confident enough to automate it. You could deploy it with a configuration management system such as puppet Puppet and schedule a periodic execution via cron, a systemd timer or even a separate background daemon process. You have to be extremely careful here. The more you automate, the more damage you can cause. You don't want to automate it on all servers involved at once, but you want to slowly ramp up the automation. </p> <p>First, automate it only on one single server and monitor the result closely. At first, only automate running the script in dry mode. Also, don't forget that you still should log everything that the script is doing. Once everything looks fine, you can automate the script on the canary server for real. It shouldn't be a disaster if something goes wrong as usually systems are designed in a HA fashion, where the same data is still at least on another server available. In the worst-case scenario, you could recover data from there or from the local backup files your script created.</p> <p>Now, you can add a handful more canary servers to the automation. You should keep close attention to what the automation is doing. You could use a tool like DTail for distributed log file following. At this point, you could also think of deploying a monitoring check (e.g. Icinga) to see whether your script is not terminating abnormally or logging warnings or errors.</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.html">DTail - The distributed log tail program</a><br /> <p>From there, you could automate the solution on more and more servers. Best, ramp up the automation to a handful of systems, and later to a whole line of servers (e.g. all secondary servers of a given cluster). And afterwards, automate it on all servers.</p> <p>Remember, whenever something goes wrong, you will have plenty of logs and backup files available. The disaster recovery would involve extending your script to take care of that too or writing a new script for rolling back the backups. </p> <h2>Out of office hours</h2> <p>If possible, don't deploy any automation shortly before out of office hours, such as in the evening, before holidays or weekends. The only exception would be that you, or someone else, will be available to monitor the automation out of office hours. If it is a critical issue, someone, for example, the on-call person, could take over. Or ask your boss to work now but to take off another day to compensate.</p> <p>You should add an easy off-switch to your automation so that everyone from your team knows how to pause it if something goes wrong in order to adjust the automation accordingly. Of course, you should still follow all the principles mentioned in this blog post when making any changes. </p> <h2>Retrospective</h2> <p>For every major incident, you need to follow up with an incident retrospective. A blame-free, detailed description of exactly what went wrong to cause the incident, along with a list of steps to take to prevent a similar incident from occurring again in the future.</p> <p>This usually means creating one or more tickets, which will be dealt with soon. Once the permanent fix is deployed, you can remove your ad-hoc automation and monitoring around it and focus on your regular work again.</p> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Keep it simple and stupid</title> <link href="gemini://foo.zone/gemfeed/2021-09-12-keep-it-simple-and-stupid.gmi" /> <id>gemini://foo.zone/gemfeed/2021-09-12-keep-it-simple-and-stupid.gmi</id> <updated>2021-09-12T09:39:20+03:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>A robust computer system must be kept simple and stupid (KISS). The fancier the system is, the more can break. Unfortunately, most systems tend to become complex and challenging to maintain in today's world. In the early days, so I was told, engineers understood every part of the system, but nowadays, we see more of the 'lasagna' stack. One layer or framework is built on top of another layer, and in the end, nobody has got a clue what's going on.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Keep it simple and stupid</h1> <pre> _______________ |*\_/*|_______ | ___________ | .-. .-. ||_/-\_|______ | | | | | .****. .****. | | | | | | 0 0 | | .*****.*****. | | 0 0 | | | | - | | .*********. | | - | | | | \___/ | | .*******. | | \___/ | | | |___ ___| | .*****. | |___________| | |_____|\_/|_____| .***. |_______________| _|__|/ \|_|_.............*.............._|________|_ / ********** \ / ********** \ / ************ \ / ************ \ -------------------- -------------------- </pre><br /> <p class="quote"><i>Published by Paul at 2021-09-12, last updated at 2022-04-21</i></p> <p>A robust computer system must be kept simple and stupid (KISS). The fancier the system is, the more can break. Unfortunately, most systems tend to become complex and challenging to maintain in today's world. In the early days, so I was told, engineers understood every part of the system, but nowadays, we see more of the "lasagna" stack. One layer or framework is built on top of another layer, and in the end, nobody has got a clue what's going on.</p> <h1>Need faster hardware</h1> <p>This not just makes the system much more complex, difficult to maintain and challenging to troubleshoot, but also slow. So more experts are needed to support it. Also, newer and faster hardware is required to make it run smoothly. Often, it's so much easier to buy speedier hardware than rewrite a whole system from scratch from the bottom-up. The latter would require much more resources in the short run, but in the long run, it should pay off. Unfortunately, many project owners scare away from it as they only want to get their project done and then move on.</p> <h1>Too complex to be replaced</h1> <h2>On COBOL</h2> <p>Have a look at COBOL, a prevalent programming language of the past. No one is learning COBOL in college or university anymore, but many legacy systems still require COBOL experts. Why is this? It's just too scary to write everything from scratch. There's too much COBOL code out there that can't be replaced from today to tomorrow. </p> <a class="textlink" href="https://nymag.com/intelligencer/2020/04/what-is-cobol-what-does-it-have-to-do-with-the-coronavirus.html">https://nymag.com/intelligencer/2020/04/what-is-cobol-what-does-it-have-to-do-with-the-coronavirus.html</a><br /> <h2>On Kubernetes</h2> <p>Now have a look at Kubernetes (k8s), the current trendy infrastructure thing to use nowadays. Of course, there are many benefits of using k8s (auto-scaling, reproducible deployments, dynamic resource allocation and resource sharing, saving of hardware costs, good commercial for potential employees as it is the current hot sauce of infrastructure). But all of this also comes with costs: You need experts operating the k8s cluster (or you need to pay extra for a managed cluster in the cloud), increased complexity of the system (k8s comes with a steep learning curve). The latter not only applies to the engineers managing the k8s cluster - it also applies to the software engineers, who now have to develop 'cloud native' applications and, therefore, have to change how they developed software how they used to. They all need to be re-educated on what cloud-native means, and they also need to understand the key concepts of k8s for writing optimal software for it.</p> <h2>The younger generation of IT professionals</h2> <p>Maybe the younger generation knows all of this already after graduation, but then they are missing other critical parts of the system for sure. I have seen engineers who knew about containers and how to configure resource restrictions for a Docker container managed via k8s but have never heard the terms Linux control groups and Linux namespaces. So obviously, there is some knowledge gap of the underlying architecture. This can be a big problem when you have to troubleshoot such a system during a production incident and k8s adds a lot of abstraction to the mix which doesn't make it easier. </p> <p>Coming back to COBOL, k8s is on its way to becoming something similar. One day, k8s might not be the hottest tech stuff everyone wants to use. But there will be still many legacy k8s clusters around but not enough experts available to manage those:</p> <a class="textlink" href="https://www.techrepublic.com/article/why-kubernetes-is-our-modern-day-cobol-says-a-tech-expert/">https://www.techrepublic.com/article/why-kubernetes-is-our-modern-day-cobol-says-a-tech-expert/</a><br /> <p>Another article which stroke me is:</p> <a class="textlink" href="https://it.slashdot.org/story/21/09/23/163212/todays-students-dont-understand-the-basics-of-computer-operations">Today's Students Don't Understand the Basics of Computer Operations </a><br /> <p>And here is something to smile about:</p> <a class="textlink" href="https://christine.website/blog/theres-a-node-2021-10-02">https://christine.website/blog/theres-a-node-2021-10-02</a><br /> <h1>The bloated web</h1> <p>Another example is the modern web. Have you ever wondered why the internet becomes slower and slower nowadays? The modern web is so much like lasagna that I decided to use Gemini to be the primary protocol of my website. The HTML version of this website is just a fallback as many visitors don't know what Gemini is and don't have any compatible software installed for surfing the Geminispace:</p> <a class="textlink" href="2021-04-24-welcome-to-the-geminispace.html">2021-04-24-welcome-to-the-geminispace.html</a><br /> <p>The Gemtext protocol is KISS. There's no way to do other formattings than headings, links, paragraphs, lists, quotes, and bare text blocks (e.g., ASCII art or code snippets). There's no way to create bloated Gemini sites, and due to its limited capabilities, there's also no way to commercialise it (e.g. there's no good way to track the site visitors as things like cookies don't exist). By design, the Gemini protocol can't be extended, so there is no chance to abuse it even in the future. Gemini sites will stay KISS forever, and there won't be any fancy HTML/JavaScript frameworks like we see on the modern web.</p> <h1>Fancy log-management solutions</h1> <p>Yet another example I want to bring up is DTail, the distributed log tail program I wrote. There are many great and fancy log-management solutions available to choose from, and they all seem complex to set up and maintain. The ELK stack, for example, requires you to operate an ElasticSearch cluster (or multiple, if you are geo-redundant), Logstash (different configurations and instances, depending on your infrastructure) and a Kibana web-frontend (which also needs to be highly available). I have operated ElasticSearch clusters on multiple occasions, and I must say that it is not an easy task to optimise it for the particular workload you might encounter. I also have seen many ES clusters operated by other people, and I have seen these clusters failing a lot (so it's not just me). The reduced complexity of DTail also makes it more robust against outages. You won't troubleshoot your distributed application very well if the log management infrastructure isn't working either.</p> <a class="textlink" href="2021-04-22-dtail-the-distributed-log-tail-program.html">2021-04-22-dtail-the-distributed-log-tail-program.html</a><br /> <p>I don't say that the ELK stack doesn't work, but it requires experts and additional hardware resources to support it. But instead, if you keep your infrastructure simple (e.g. only use DTail), it will maintain pretty much by itself. </p> <h1>More KISS</h1> <h2>The Adslowbe PDF Reader</h2> <p>Another perfect example is the Adobe PDF reader. How can it be that the inventor of the PDF format creates such a terrible user experience with its official reader? The reader is awful bloated, and slow. There are much better alternatives around (especially for Linux and other UNIX like operating systems, look at Zathura for example). I believe the reason Adobe's reader is like this is featuritis, and 90% of the users don't use 90% of all available features. Less is more; keep it simple and stupid. </p> <h2>The power of plain text files</h2> <p>Speaking of file formats, never underestimate the power of plain text files. Plain text files don't require any special software to be opened, and they outlive the software which created them in the first place. You will still be able to read a plain text file on a modern computer system ten (or twenty) years from now, but you probably won't be able to read such an old version of an Adobe Photoshop image file if the software required for reading that format isn't supported anymore and doesn't run anymore on modern computers.</p> <h2>KISS for programmers</h2> <p>Not to mention, keeping things simple and stupid also reduces the potential malicious attack surface. It's not just about the software and services you use and operate. It's also about the software you write. Here is a nice article about the KISS principle in software development:</p> <a class="textlink" href="https://thevaluable.dev/kiss-principle-explained/">https://thevaluable.dev/kiss-principle-explained/</a><br /> <h1>When KISS is not KISS anymore</h1> <p>There is, however, a trap. The more you spend time with things, the more these things feel natural to you and you become an expert. The more you become an expert, the more you introduce more abstractions and other clever ways of doing things. For you, things seem to be KISS still, but another person may not be an expert and might not understand what you do. One of the fundamental challenges is to keep things really KISS. You might add abstraction upon abstraction to a system and don't even notice it until it is too late.</p> <h2>Other relevant readings</h2> <a class="textlink" href="https://unixsheikh.com/articles/is-the-madness-ever-going-to-end.html">Is the madness ever going to end?</a><br /> <p>Enough ranted for now :-). E-Mail me your comments to paul at buetow dot org!</p> <p class="quote"><i>Controversially, a lack of features is a feature. Enjoy your peace an quiet. - Michael W Lucas </i></p> </div> </content> </entry> <entry> <title>On being Pedantic about Open-Source</title> <link href="gemini://foo.zone/gemfeed/2021-08-01-on-being-pedantic-about-open-source.gmi" /> <id>gemini://foo.zone/gemfeed/2021-08-01-on-being-pedantic-about-open-source.gmi</id> <updated>2021-08-01T10:37:58+03:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>I believe that it is essential to always have free and open-source alternatives to any kind of closed-source proprietary software available to choose from. But there are a couple of points you need to take into consideration. . .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>On being Pedantic about Open-Source</h1> <pre> __ _____....--' .' ___...---'._ o -`( ___...---' \ .--. `\ ___...---' | \ \ `| | |o o | | | | \___'.-`. '. | | `---' '^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^' LGB - Art by lgbearrd </pre><br /> <p class="quote"><i>Published by Paul at 2021-08-01</i></p> <p>I believe that it is essential to always have free and open-source alternatives to any kind of closed-source proprietary software available to choose from. But there are a couple of points you need to take into consideration. </p> <h2>The costs of open-source</h2> <p>One benefit of using open-source software is that it doesn't cost anything, right? That's correct in many cases. However, in some cases you still need to spend a significant amount of time configuring the software to work for you. It will be more expensive to use open-source software than proprietary commercial one if you aren't careful. </p> <p>Not to say that I haven't seen the same effect with commercial software where people had to, after buying it, put a bunch of effort to make it work due to the lack of quality or due to high complexity. But that's either bad luck or bad decision-making. Most commercial providers I have worked with try to make it work for you, so you also will buy other products and services from them later on and don't lose you as a happy customer.</p> <h2>Commercial providers</h2> <p>Producers of commercial software want to earn money after all. This is to grow their businesses and also to be able to pay their employees, who also need to care for their families. Employees build up their careers, build houses, and are proud of their accomplishments in the company.</p> <p>So per se, commercial software is not a bad thing. Right? At least, commercial closed-source software is not a bad thing in its heart. Unfortunately, some companies have to keep their software closed-source to not lose their competitive edge over other competitors. </p> <h2>Earning on open-source</h2> <p>There are also companies that earn on open-source software. All the code they write is free for download and use, but you, as a customer, could pay for service and support if you are not an expert and can't manage it by yourself. </p> <p>I like this approach, as you can balance the effort and costs the way it suits you best, and in doubt, you can audit the source code. Are you already an expert? Perfect, you don't need to buy additional support for the software. Everything can be set up by yourself, given that you have the time and priority.</p> <p>Also, once an open-source project reached a certain size, it is unlikely to be abandoned one day. As long as at least one person is willing to be the open-source maintainer, the project won't die. Whereas commercial providers can decide from today to tomorrow to retire software or go bankrupt (unless you purchase Microsoft Word, I don't believe it will die anytime soon). </p> <h2>Open-source organizations and individual contributors</h2> <p>Besides corporations, millions of individual open-source contributors write free and open-source software not for money but for pleasure. Often, they are organized in non-profit organizations, working together to reach a common goal (it is worth mentioning that there are also many professionals, payed by large corporations, working full-time for non-profit open-source projects in order to push the features and reach the goals of the corporations). Sometimes, people don't agree on the project goal, so it gets forked, which can be a good thing. The more diversity, the better, as this is where competition and innovation happens. Also, the end user will end up with more choices. </p> <p>These open-source projects are of a very high quality standard and are rock-solid, if not better, alternatives to proprietary counterparts. If the project isn't backed by a large corporation already, you should donate to these open-source organizations and/or individual contributors. I have donated to some projects I use personally. Do you learn a foreign language and use Anki flashcards? It's entirely free and open-source, and they happily accept donations ensuring future maintenance and development.</p> <h2>Lesser known projects and the charm of clunkiness</h2> <p>Looking at the smaller, lesser-known open-source projects (not talking about established open-source projects like FreeBSD and Linux): You can't, however, expect the software to be perfect and bug-free. After all, most of the code is written for pleasure and fun in the developers' free time. Besides the developer himself, you might be the only user of the project. The software may be a bit clunky to use, and probably bugs are lurking around, and it might only work for a very specific use case.</p> <p>Clunkiness can be charmful, though. And it can also encourage you to contribute code to make it better. There is a lot of such code in personal GitHub and GitLab repositories. The quality of such small open-source projects varies drastically. Many hobbyist programmers see programming as an art and put tons of effort into their projects. Others upload broken crap, which is dangerous to use. So have a look at the code before you use it!</p> <h2>The security aspect</h2> <p>One of the main conceptions about open-source software is that it is more secure than closed-source software because everybody can read and fix the code. Is that actually true? You can only be sure when you audit the code by yourself. If you are like me, you won't have time to audit all the open-source software you use. It's impossible to audit more than 100 million lines of Linux kernel code. Static code analysis tools come in handy here, but they still require humans to look at the results.</p> <p>Security bugs in open-source projects are exposed to the public and fixed quickly, while we don't know exactly what happens to security bugs in closed-source ones. Still, hackers and security specialists can find them through reverse engineering and penetration testing. Overall, thinking of security, In my opinion it is still better to prefer open-source software because the more significant the project, the higher the probability that security bugs are found and fixed as more parties are looking into it. Furthermore, provided you have the necessary resources, you could still deduct an audit by yourself. The latter especially happens when companies with its own security and penetration testing departments are evaluating the use of open-source. This is something not every company can afford though.</p> <h2>Always watch out for open-source alternatives</h2> <p>Do you need Microsoft Word? Why don't you just use the Vim text editor or GNU Emacs to write your letters? If that's too nerdy, you can still use open-source alternatives such as AbiWord or LibreOffice. Larger organizations have the tendency to standardize the software their employees have to use. Unfortunately, as Microsoft Word is the de-facto standard text processing program, most companies prefer Word over LibreOffice. Same with Microsoft Excel vs LibreOffice Calc or other spreadsheet alternatives like Gnumeric. I don't know why that is; please E-Mail me, and I will update this blog article. I guess the devil lies in the detail here.</p> <p>I only use free and open-source operating systems on my personal Laptops, Desktop PCs and servers (FreeBSD and Linux based ones). Most of the programs and apps I use on them are free and open-source as well, and I am comfortable with it for over twenty years. Exceptions are the BIOSes and some firmwares of my devices. I also use Skype as most of my friends and family are using it. They are, unfortunately, proprietary software still. But I will be looking into Matrix as a Skype alternative when I have time. There are also open BIOS alternatives, but they usually don't work on my devices.</p> <h2>What about mobile?</h2> <p>I struggle to go 100% open-source on my Smartphone. I use a Samsung phone with the stock Android as provided by Samsung. I love the device as it is large enough to use as a portable reading and note-taking device, and it can also take decent pictures. As a cloud backup solution, I have my own NextCloud server (open-source). Android is mainly open-source software, but many closed parts are still included. I replaced most of the standard apps with free and open-source variants from the F-Droid store though.</p> <p>I could get a LineageOS based phone to get rid of the proprietary Android parts (I tried that out a couple of times in the past). But then a couple of convenient apps, such as Google Maps or Banking or Skype or the E-Ticket apps of various Airlines, various review apps when searching for restaurants, Audible (I think Audible offers an excellent service), etc., won't work anymore. The proprietary Google Maps is still the best maps app, even though there are open alternatives available. It's not that I couldn't live without these apps, but they make life a lot more convenient.</p> <h2>Know the alternatives</h2> <p>Thinking about alternative solutions is always a good idea. My advice is never to be entirely dependant on any proprietary software. Before you decide to use proprietary software, try to find alternatives in the open-source world. You might need to invest some time playing around with the options available. Maybe they are good enough for you, or maybe not.</p> <p>If you still want to use proprietary software, use it with caution. Have a look at the recent change at Google Photos: For a long time, "high quality" photos could be uploaded there quota-less for free. However, Google recently changed the model so that people exceeding a quota have to start paying for the extra space consumed. I am not against Google's decision, but it shows you that a provider can always change its direction. So you can't entirely rely on these. I repeat myself: Don't fully rely on anything proprietary, but you might still use proprietary software or services for your own convenience.</p> <h2>You can't control it all</h2> <p>The biggest problem I have with going 100% open-source is actually time. You can't control all the software you use or might be using in the future. You have only a finite amount of time available in your life. So you have to decide what's more important: Investigate and use an open-source alternative of every program and app you have installed, or rather spend quality time with your family and have a nice walk in the park or go to a sports class or cook a nice meal? You can't control it all in today's world of tech, not as a user and even not as a tech worker. There's a great blog post worth reading: </p> <a class="textlink" href="https://unixsheikh.com/articles/how-to-stay-sane-in-todays-world-of-tech.html">https://unixsheikh.com/articles/how-to-stay-sane-in-todays-world-of-tech.html</a><br /> <h2>The middle way</h2> <p>Regarding my personal Smartphone dilemma: I guess the middle way is to use two phones: </p> <ul> <li>Have a secondary, proprietary Android phone with Google Play store (or an Apple iPhone if this is more your thing) and all its benefits for occasional use. Use the proprietary phone only with intention. Such a phone implies some risks regarding your privacy. If you aren't careful, app providers will collect your personal data for building a digital profile of you, which gets used for online advertisement and other things. This doesn't only applies to the Smartphone, this also applies to some proprietary software (including cloud services such as Google Photos) you use on your home computer or websites you visit (I am looking at you, Facebook, Twitter and friends). Try to disable all tracking features on such a phone. It's not a guarantee that nobody will be collecting data from you anymore, but you should take at least the chance. Cal Newport once mentioned that you should not use privacy concerning apps as much anyway and instead spend more time on things which matter.</li> <li>Have a primary phone, entirely based on free and open-source software. There will be probably no app collecting your personal data. Try to use the primary phone for all of your everyday activities and fall back to the proprietary phone only for particular use cases. Once there is decent hardware (with a decent camera) running Linux (such as Mobian, for example) available, I will consider a purchase. The only 3rd party which then will still be able to track you will be your network provider. You could start your own phone network, but that seems overkill. There is already the Pinephone and the Librem 5 running a real Linux (Android is Linux based, but it doesn't count as a real Linux for me). Still, I want to wait a bit longer for better hardware to be available (I want to have a good camera always with me).</li> <li>You could also add a tertiary phone to the mix, which you only use for work and nothing else. That one will be very likely a proprietary phone too. You only have to keep this one around when you are working or when you are on-call.</li> </ul> <p>I have been playing with other smartphone OS alternatives, especially with MeeGo (which has died already) and SailfishOS, too. Security and privacy seem to be significantly improved compared to an Android. As a matter of fact, I bought a cheap and used Sony Xperia XA2 last year and installed SailfishOS on it. It's a nice toy, but it's still not the holy open-source grail as there are also proprietary parts in SailfishOS. Platforms such as Mobian, Ubuntu Touch and Plasma Mobile are more compelling to me. People must explore alternatives to Android and Apple here, as otherwise, you won't own any gadgets anymore:</p> <a class="textlink" href="https://news.slashdot.org/story/21/07/10/0120236/by-2030-you-wont-own-any-gadgets">https://news.slashdot.org/story/21/07/10/0120236/by-2030-you-wont-own-any-gadgets</a><br /> <p>Anyhow, any gadgets, including your phone, should be a tool you use. Don't let the phone use you!</p> <h2>The downside of being a nobody</h2> <p>Be aware that it might be to your disadvantage if you manage to go completely under cover without anyone collecting data from you. Suppose you are a nobody on the web (no social media profiles, no tracking history, etc.). In that case, you aren't behaving like the mass, and therefore you are suspicious. So it might be even a good thing to leave your marks here and there once in a while. You aren't hiding anything anyway, correct? Just be mindful what you are sharing about yourself. I share personal things very rarely on Facebook for example. And I only share a small subset of my personal life on my personal homepage and this blog and on all of my social media accounts. Nobody is interested in what I have for breakfast anyway I guess. Write me an E-Mail if you are interested in what I am having for breakfast.</p> <h2>Mobile open-source OSes are still evolving</h2> <p>You might have noticed that I wrote a lot about Smartphones in this article. The reason is that free and open-source software for Smartphones is still evolving. In contrast, for Laptops and Desktop PCs, it's already there. There is no reason to use proprietary operating systems such as Windows or macOS on your computers unless your employer forces you to use one of these. Why would they force you? It has to do with standardization again. The IT department only can manage so many platforms. It wouldn't be manageable by IT if every employee would install their own Linux distribution or one of the *BSDs. That might work for small startups but not for larger companies, especially not for a security-focused companies.</p> <p>I would love a standardized Linux at work, though. Dell and Lenovo also officially support Linux on their notebooks. The culprit may be knowledgeable IT staff maintaining and giving support to the Desktop Linux users. Not all colleagues are Linux geeks like you and me. I am using macOS for work, but I am not an Apple expert. Occasionally I have to contact IT support regarding some issues I have. I don't use the macOS GUI a lot; I mainly live in the terminal so I can run the same tools I also use on Linux.</p> <h2>Conclusion</h2> <p>Should you be pedantic about open-source software? It depends. It depends on your fundamental values and how much time you are ready to invest. Open-source software is not just free as in money, but also free as in freedom. You will gain back complete control of your personal data. Unfortunately, installing ready proprietary apps from the Play Store is much more convenient than building up a trustworthy open-source-based infrastructure by yourself. As a guideline, use proprietary software and services with caution. Be mindful about your choices and where you leave your digital fingerprints. In doubt, think less is more. Do you really need this new shiny app? What benefit does it provide to you? Probably you don't really need that shiny new app.</p> <p>You have better chances when you know how to manage your own server and install and manage alternatives to the big cloud providers by yourself. I have the advantage that I have work experience as a Linux Systems Administrator here. I mentioned NextCloud already. I use NextCloud for online photo and file storage, contact and calendar sync and as an RSS news feed server. You could do the same with your own E-Mail server, you can also host your own website and blog. I also mentioned Matrix as a Skype alternative (which could also be an alternative to WhatsApp, Skype, Telegram, Viber, ...). I don't know a lot about Matrix yet, but it seems to be a very neat alternative. I am ready to invest time in it as one of my future personal pet projects. Not only because I think it's better, but also because for fun and as a hobby. But this doesn't mean that I invest *all* of my personal free time in it.</p> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>The Well-Grounded Rubyist</title> <link href="gemini://foo.zone/gemfeed/2021-07-04-the-well-grounded-rubyist.gmi" /> <id>gemini://foo.zone/gemfeed/2021-07-04-the-well-grounded-rubyist.gmi</id> <updated>2021-07-04T10:51:23+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>When I was a Linux System Administrator, I have been programming in Perl for years. I still maintain some personal Perl programming projects (e.g. Xerl, guprecords, Loadbars). After switching jobs a couple of years ago (becoming a Site Reliability Engineer), I found Ruby (and some Python) widely used there. As I wanted to do something new, I then decided to give Ruby a go for all medium-sized programming and scripting projects.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>The Well-Grounded Rubyist</h1> <p class="quote"><i>Published by Paul at 2021-07-04</i></p> <p>When I was a Linux System Administrator, I have been programming in Perl for years. I still maintain some personal Perl programming projects (e.g. Xerl, guprecords, Loadbars). After switching jobs a couple of years ago (becoming a Site Reliability Engineer), I found Ruby (and some Python) widely used there. As I wanted to do something new, I decided to give Ruby a go.</p> <p>You should learn or try out one new programming language once yearly anyway. If you end up not using the new language, that's not a problem. You will learn new techniques with each new programming language and this also helps you to improve your overall programming skills even for other languages. Also, having some background in a similar programming language makes it reasonably easy to get started. Besides that, learning a new programming language is kick-a** fun!</p> <a href="https://foo.zone/gemfeed/2021-07-04-the-well-grounded-rubyist/book-cover.jpg"><img src="https://foo.zone/gemfeed/2021-07-04-the-well-grounded-rubyist/book-cover.jpg" /></a><br /> <p>Superficially, Perl seems to have many similarities to Ruby (but, of course, it is entirely different to Perl when you look closer), which pushed me towards Ruby instead of Python. I have tried Python a couple of times before, and I managed to write good code, but I never felt satisfied with the language. I didn't love the syntax, especially the indentations used; they always confused me. I don't dislike Python, but I don't prefer to program in it if I have a choice, especially when there are more propelling alternatives available. Personally, it's so much more fun to program in Ruby than in Python.</p> <a href="https://foo.zone/gemfeed/2021-07-04-the-well-grounded-rubyist/book-backside.jpg"><img src="https://foo.zone/gemfeed/2021-07-04-the-well-grounded-rubyist/book-backside.jpg" /></a><br /> <p>Yukihiro Matsumoto, the inventor of Ruby, said: "I wanted a scripting language that was more powerful than Perl and more object-oriented than Python" - So I can see where some of the similarities come from. I personally don't believe that Ruby is more powerful than Perl, though, especially when you take CPAN and/or Perl 6 (now known as Raku) into the equation. Well, it all depends on what you mean with "more powerful". But I want to stay pragmatic and use what's already used at my workplace.</p> <h2>My Ruby problem domain</h2> <p>I wrote a lot of Ruby code over the last couple of years. There were many small to medium-sized tools and other projects such as Nagios monitoring checks, even an internal monitoring & reporting site based on Sinatra. All Ruby scripts I wrote do their work well; I didn't encounter any significant problems using Ruby for any of these tasks. Of course, there's nothing that couldn't be written in Perl (or Python), though, after all, these languages are all Turing-complete and all these languages also come with a huge set of 3rd party libraries :-).</p> <p>I don't use Ruby for all programming projects, though. </p> <ul> <li>I am using Bash for small sized (usually below 500 lines of code) scripts and ad-hoc command-line automation.</li> <li>I program in Google Go for more complex tools (such as DTail) and for problem solving involving data crunching.</li> <li>Occasionally, I write some lines of Java code for minor feature enhancements and fixes to improve the reliability of some the services.</li> <li>Sometimes, I still program in good old C. This is for special projects (e.g. I/O Riot) or low-level PoCs or SystemTap guru mode scripts.</li> </ul> <a class="textlink" href="https://foo.zone/gemfeed/2021-05-16-personal-bash-coding-style-guide.html">Also have a look at my personal Bash coding style.</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.html">Read here about DTail - the distributed log tail program.</a><br /> <a class="textlink" href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.html">This is a magazine article about I/O Riot I wrote.</a><br /> <p>For all other in-between tasks I mainly use the Ruby programming language (unless I decide to give something new a shot once in a while).</p> <h2>Being stuck in Ruby-mediocrity</h2> <p>As a Site Reliability Engineer there were many tasks and problems to be solved as efficiently and quickly as possible and, of course, without bugs. So I learned Ruby relatively fast by doing and the occasional web search for "how to do thing X". I always was eager to get the problem at hand solved and as long as the code solved the problem I usually was happy.</p> <p>Until now, I never read a whole book or took a course on Ruby. As a result, I found myself writing Ruby in a Perl-ish procedural style (with Perl, you can do object-oriented programming too, but Perl wasn't designed from the ground up to be an object-oriented language). I didn't take advantage of all the specialities Ruby has to offer as I invested most of my time in the problems at hand and not in the Ruby idiomatic way of doing things.</p> <p>An unexpected benefit was that most of my Ruby code (probably not all, there are always dark corners in some old code bases lurking around) was easy to follow and extend or fix, even by people who usually don't speak Ruby, as there wasn't too much magic involved in my code - However, I could have done better still. Looking at other Ruby projects, I noticed over time that there is so much more to the language I wanted to explore. For example new techniques and the Ruby best practise, and much more about how things work under the hood, I wanted to learn about.</p> <h2>O'Reilly Safari Books Online</h2> <p>I do have an O'Reilly Safari Online subscription (thank you, employer). To my liking, I found the "The Well-Grounded Rubyist" book there (the text version and also the video version of it). I watched the video version for a couple of weeks, chunking the content into small pieces so it was able to fit into my schedule, increasing the playback speed for the topics I knew already well enough and slowed it down to actual pace when there was something new to learn and occasionally jumped back to the text book to review what I just learned. To my satisfaction, I was already familiar with over half of the language. But there was still the big chunk, especially how the magic happens under the hood in Ruby, which I missed out on, but I am happy now to be aware of it now.</p> <p>I also loved the occasional dry humour in the book: "An enumerator is like a brain in a science fiction movie, sitting on a table with no connection to a body but still able to think". :-)</p> <p>Will I rewrite and refactor all of my existing Ruby programs? Probably not, as they all do their work as intended. Some of these scripts will be eventually replaced or retired. But depending on the situation, I might refactor a module, class or a method or two once in a while. I already knew how to program in an object-oriented style from other languages (e.g. Java, C++, Perl Moose and plain) before I started Ruby, so my existing Ruby code is not as bad as you might assume after reading this article :-). In contrast to Java/C++, Ruby is a dynamic language, and the idiomatic ways of doing things differs from statically typed languages.</p> <h2>Key takeaways</h2> <p>These are my key takeaways. These only point out some specific things I have learned, and represent, by far, not everything I've learned from the book.</p> <h3>"Everything" is an object</h3> <p>In Ruby, everything is an object. However, Ruby is not Smalltalk. It depends on what you mean by "everything". Fixnums are objects. Classes also are, as instances of class Class. Methods, operators and blocks aren't but can be wrapped by objects via a "Proc". A simple assignment is not and can't. Statements like "while" also aren't and can't. Comments obviously also fall in the latter group. Ruby is more object-oriented than everything else I have ever seen, except for Smalltalk.</p> <p>In Ruby, like in Java/C++, classes are classes, objects are instances of classes, and there are class inheritances. There is single inheritance in Ruby, but with the power of mixing in modules, you can extend your classes in a better way than multiple class inheritances (like in C++) would allow. It's also different to Java interfaces, as interfaces in Java only come with the method prototypes and not with the actual method implementations like Ruby modules.</p> <h3>"Normal" objects and singleton objects</h3> <p>In Ruby, you can also have singleton objects. A singleton object can be an instance of a class but be modified after its creation (e.g. a method added to only this particular instance after its instantiation). Or, another variant of a singleton object is a class (yes, classes are also objects in Ruby). All of that is way better described in the book, so have a read by yourself if you are confused now; just remember: Rubys object system is very dynamic and flexible. At runtime, you can add and modify classes, objects of classes, singleton objects and modules. You don't need to restart the Ruby interpreter; you can change the code during runtime dynamically through Ruby code.</p> <h3>Domain specific languages</h3> <p>Due to Ruby's flexibility through object individualization (e.g. adding methods at runtime, or changing the core behaviour of classes, catching unknown method calls and dynamically dispatch and/or generate the missing methods via the "method_missing" method), Ruby is a very good language to write your own small domain specific language (DSL) on top of Ruby syntax. I only noticed that after reading this book. Maybe, this is one of the reasons why even the configuration management system Puppet once tried to use a Ruby DSL instead of the Puppet DSL for its manifests. I am not sure why the project got abandoned though, probably it has to do with performance. Do be honest, Ruby is not the fastest language, but it is fast enough for most use cases. And, especially from Ruby 3, performance is one of the main things being worked on currently. If I want performance, I can always use another programming language.</p> <h3>Ruby is "self-ish"</h3> <p>Ruby will fall back to the default "self" object if you don't specify an object method receiver. To give you an example, some more explanation is needed: There is the "Kernel" module mixed into almost every Ruby object. For example, "puts" is just a method of module "Kernel". When you write "puts :foo", Ruby sends the message "puts" to the current object "self". The class of object "self" is "Object". Class Object has module "Kernel" mixed in, and "Kernel" defines the method "puts". </p> <pre> >> self => main >> self.class => Object >> self.class.included_modules => [PP::ObjectMixin, Kernel] >> Kernel.class => Module >> Kernel.methods.grep(/puts/) => [:puts] >> puts 'Hello Ruby' Hello Ruby => nil >> self.puts 'Hello World' Hello World => nil </pre><br /> <p>Ruby offers a lot of syntactic sugar and seemingly magic, but it all comes back to objects and messages to objects under the hood. As all is hidden in objects, you can unwrap and even change the magic and see what's happening under the hood. Then, suddenly everything makes so much sense.</p> <h3>Functional programming</h3> <p>Ruby embraces an object-oriented programming style. But there is good news for fans of the functional programming paradigm: From immutable data (frozen objects), pure functions, lambdas and higher-order functions, lazy evaluation, tail-recursion optimization, method chaining, currying and partial function application, all of that is there. I am delighted about that, as I am a big fan of functional programming (having played with Haskell and Standard ML before).</p> <p>Remember, however, that Ruby is not a pure functional programming language. You, the Rubyist, need to explicitly decide when to apply a functional style, as, by heart, Ruby is designed to be an object-oriented language. The language will not enforce side effect avoidance, and you will have to enable tail-recursion optimization (as of Ruby 2.5) explicitly, and variables/objects aren't immutable by default either. But that all does not hinder you from using these features. </p> <p>I liked this book so much so that I even bought myself a (used) paper copy of it. To my delight, there was also a free eBook version in ePub format included, which I now have on my Kobo Forma eBook reader. :-)</p> <h2>Perl</h2> <p>Will I abandon my beloved Perl? Probably not. There are also some Perl scripts I use at work. But unfortunately I only have a limited amount of time and I have to use it wisely. I might look into Raku (formerly known as Perl 6) next year and use it for a personal pet project, who knows. :-). I also highly recommend reading the two Perl books "Modern Perl" and "Higher-Order Perl".</p> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Gemtexter - One Bash script to rule it all</title> <link href="gemini://foo.zone/gemfeed/2021-06-05-gemtexter-one-bash-script-to-rule-it-all.gmi" /> <id>gemini://foo.zone/gemfeed/2021-06-05-gemtexter-one-bash-script-to-rule-it-all.gmi</id> <updated>2021-06-05T19:03:32+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>You might have read my previous blog post about entering the Geminispace, where I pointed out the benefits of having and maintaining an internet presence there. This whole site (the blog and all other pages) is composed in the Gemtext markup language. . .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Gemtexter - One Bash script to rule it all</h1> <pre> o .,<>., o |\/\/\/\/| '========' (_ SSSSSSs )a'`SSSSSs /_ SSSSSS .=## SSSSS .#### SSSSs ###::::SSSSS .;:::""""SSS .:;:' . . \\ .::/ ' .'| .::( . | :::) \ /\( / /) ( | .' \ . ./ / _-' |\ . | _..--.. . /"---\ | ` | . | -=====================,' _ \=(*#(7.#####() | `/_.. , ( _.-''``';'-''-) ,. \ ' '+/// | .'/ \ ``-.) \ ,' _.- (( `-' `._\ `` \_/_.' ) /`-._ ) | ,'\ ,' _.'.`:-. \.-' / <_L )" | _/ `._,' ,')`; `-'`' | L / / / `. ,' ,|_/ / \ ( <_-' \ \ / `./ ' / /,' \ /|` `. | )\ /`._ ,'`._.-\ |) \' / `.' )-'.-,' )__) |\ `| : /`. `.._(--.`':`':/ \ ) \ \ |::::\ ,'/::;-)) / ( )`. | ||::::: . .::': :`-( |/ . | ||::::| . :| |==[]=: . - \ |||:::| : || : | | /\ ` | ___ ___ '|;:::| | |' \=[]=| / \ \ | /_ ||``|||::::: | ; | | | \_.'\_ `-. : \_``[]--[]|::::'\_;' )-'..`._ .-'\``:: ` . \ \___.>`''-.||:.__,' SSt |_______`> <_____:::. . . \ _/ `+a:f:......jrei''' </pre><br /> <p class="quote"><i>Published by Paul at 2021-06-05</i></p> <p>You might have read my previous blog post about entering the Geminispace, where I pointed out the benefits of having and maintaining an internet presence there. This whole site (the blog and all other pages) is composed in the Gemtext markup language. </p> <a class="textlink" href="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace.html">Welcome to the Geminispace</a><br /> <p>This comes with the benefit that I can write content in my favourite text editor (Vim). </p> <h2>Motivation</h2> <p>Another benefit of using Gemini is that the Gemtext markup language is easy to parse. As my site is dual-hosted (Gemini+HTTP), I could, in theory, just write a shell script to deal with the conversion from Gemtext to HTML; there is no need for a full-featured programming language here. I have done a lot of Bash in the past, but I am also often revisiting old tools and techniques for refreshing and keeping the knowledge up to date here.</p> <a href="https://foo.zone/gemfeed/2021-06-05-gemtexter-one-bash-script-to-rule-it-all/blog-engine.jpg"><img alt="Motivational comic strip" title="Motivational comic strip" src="https://foo.zone/gemfeed/2021-06-05-gemtexter-one-bash-script-to-rule-it-all/blog-engine.jpg" /></a><br /> <p>I have exactly done that - I wrote a Bash script, named Gemtexter, for that:</p> <a class="textlink" href="https://codeberg.org/snonux/gemtexter">https://codeberg.org/snonux/gemtexter</a><br /> <p>In short, Gemtexter is a static site generator and blogging engine that uses Gemtext as its input format.</p> <h2>Output formats</h2> <p>Gemtexter takes the Gemntext Markup files as the input and generates the following outputs from it (you find examples for each of these output formats on the Gemtexter GitHub page):</p> <ul> <li>HTML files for my website</li> <li>Markdown files for a GitHub page</li> <li>A Gemtext Atom feed for my blog posts</li> <li>A Gemfeed for my blog posts (a particular feed format commonly used in Geminispace. The Gemfeed can be used as an alternative to the Atom feed).</li> <li>An HTML Atom feed of my blog posts</li> </ul> <p>I could have done all of that with a more robust language than Bash (such as Perl, Ruby, Go...), but I didn't. The purpose of this exercise was to challenge what I can do with a "simple" Bash script and learn new things.</p> <h2>Taking it as far as I should, but no farther</h2> <p>The Bash is suitable very well for small scripts and ad-hoc automation on the command line. But it is for sure not a robust programming language. Writing this blog post, Gemtexter is nearing 1000 lines of code, which is actually a pretty large Bash script.</p> <h3>Modularization </h3> <p>I modularized the code so that each core functionality has its own file in ./lib. All the modules are included from the main Gemtexter script. For example, there is one module for HTML generation, one for Markdown generation, and so on. </p> <pre> paul in uranus in gemtexter on π± main β― wc -l gemtexter lib/* 117 gemtexter 59 lib/assert.source.sh 128 lib/atomfeed.source.sh 64 lib/gemfeed.source.sh 161 lib/generate.source.sh 50 lib/git.source.sh 162 lib/html.source.sh 30 lib/log.source.sh 63 lib/md.source.sh 834 total </pre><br /> <p>This way, the script could grow far beyond 1000 lines of code and still be maintainable. With more features, execution speed may slowly become a problem, though. I already notice that Gemtexter doesn't produce results instantly but requires few seconds of runtime already. That's not a problem yet, though. </p> <h3>Bash best practises and ShellCheck</h3> <p>While working on Gemtexter, I also had a look at the Google Shell Style Guide and wrote a blog post on that:</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-05-16-personal-bash-coding-style-guide.html">Personal bash coding style guide</a><br /> <p>I followed all these best practices, and in my opinion, the result is a pretty maintainable Bash script (given that you are fluent with all the sed and grep commands I used).</p> <p>ShellCheck, a shell script analysis tool written in Haskell, is run on Gemtexter ensuring that all code is acceptable. I am pretty impressed with what ShellCheck found. </p> <p>It, for example, detected "some_command | while read var; do ...; done" loops and hinted that these create a new subprocess for the while part. The result is that all variable modifications taking place in the while-subprocess won't reflect the primary Bash process. ShellSheck then recommended rewriting the loop so that no subprocess is spawned as "while read -r var; do ...; done < <(some_command)". ShellCheck also pointed out to add a "-r" to "read"; otherwise, there could be an issue with backspaces in the loop data.</p> <p>Furthermore, ShellCheck recommended many more improvements. Declaration of unused variables and missing variable and string quotations were the most common ones. ShellSheck immensely helped to improve the robustness of the script.</p> <a class="textlink" href="https://shellcheck.net">https://shellcheck.net</a><br /> <h3>Unit testing</h3> <p>There is a basic unit test module in ./lib/assert.source.sh, which is used for unit testing. I found this to be very beneficial for cross-platform development. For example, I noticed that some unit tests failed on macOS while everything still worked fine on my Fedora Linux laptop. </p> <p>After digging a bit, I noticed that I had to install the GNU versions of the sed and grep commands on macOS and a newer version of the Bash to make all unit tests pass and Gemtexter work.</p> <p>It has been proven quite helpful to have unit tests in place for the HTML part already when working on the Markdown generator part. To test the Markdown part, I copied the HTML unit tests and changed the expected outcome in the assertions. This way, I could implement the Markdown generator in a test-driven way (writing the test first and afterwards the implementation).</p> <h3>HTML unit test example</h3> <pre> gemtext='=> http://example.org Description of the link' assert::equals "$(generate::make_link html "$gemtext")" \ '<a class="textlink" href="http://example.org">Description of the link</a><br />' </pre><br /> <h3>Markdown unit test example</h3> <pre> gemtext='=> http://example.org Description of the link' assert::equals "$(generate::make_link md "$gemtext")" \ '[Description of the link](http://example.org) ' </pre><br /> <h2>Handcrafted HTML styles</h2> <p>I had a look at some ready off the shelf CSS styles, but they all seemed too bloated. There is a whole industry selling CSS styles on the interweb. I preferred an effortless and minimalist style for the HTML site. So I handcrafted the Cascading Style Sheets manually with love and included them in the HTML header template. </p> <p>For now, I have to re-generate all HTML files whenever the CSS changes. That should not be an issue now, but I might move the CSS into a separate file one day.</p> <p>It's worth mentioning that all generated HTML files and Atom feeds pass the W3C validation tests.</p> <p> </p> <h2>Configurability</h2> <p>In case someone else than me wants to use Gemtexter for his own site, it is pretty much configurable. It is possible to specify your own configuration file and your own HTML templates. Have a look at the GitHub page for examples.</p> <h2>Future features</h2> <p>I could think of the following features added to a future version of Gemtexter:</p> <ul> <li>Templating of Gemtext files so that the .gmi files are generated from .gmi.tpl files. The template engine could do such things as an automatic table of contents and sitemap generation. It could also include the output of inlined shell code, e.g. a fortune quote. </li> <li>Add support for more output formats, such as Groff, PDF, plain text, Gopher, etc.</li> <li>External CSS file for HTML.</li> <li>Improve speed by introducing parallelism and/or concurrency and/or better caching.</li> </ul> <h2>Conclusion</h2> <p>It was quite a lot of fun writing Gemtexter. It's a relatively small project, but given that I worked on that in my spare time once in a while, it kept me busy for several weeks. </p> <p>I finally revamped my personal internet site and started to blog again. I wanted the result to be exactly how it is now: A slightly retro-inspired internet site built for fun with unconventional tools. </p> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Personal Bash coding style guide</title> <link href="gemini://foo.zone/gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi" /> <id>gemini://foo.zone/gemfeed/2021-05-16-personal-bash-coding-style-guide.gmi</id> <updated>2021-05-16T14:51:57+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>Lately, I have been polishing and writing a lot of Bash code. Not that I never wrote a lot of Bash, but now as I also looked through the 'Google Shell Style Guide' I thought it is time to also write my own thoughts on that. I agree to that guide in most, but not in all points. . .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Personal Bash coding style guide</h1> <pre> .---------------------------. /,--..---..---..---..---..--. `. //___||___||___||___||___||___\_| [j__ ######################## [_| \============================| .==| |"""||"""||"""||"""| |"""|| /======"---""---""---""---"=| =|| |____ []* ____ | ==|| // \\ // \\ |===|| hjw "\__/"---------------"\__/"-+---+' </pre><br /> <p class="quote"><i>Published by Paul at 2021-05-16</i></p> <p>Lately, I have been polishing and writing a lot of Bash code. Not that I never wrote a lot of Bash, but now as I also looked through the Google Shell Style Guide, I thought it is time also to write my thoughts on that. I agree with that guide in most, but not in all points. </p> <a class="textlink" href="https://google.github.io/styleguide/shellguide.html">Google Shell Style Guide</a><br /> <h2>My modifications</h2> <p>These are my modifications to the Google Guide.</p> <h3>Shebang</h3> <p>Google recommends using always...</p> <pre> #!/bin/bash </pre><br /> <p>... as the shebang line, but that does not work on all Unix and Unix-like operating systems (e.g., the *BSDs don't have Bash installed to /bin/bash). Better is:</p> <pre> #!/usr/bin/env bash </pre><br /> <h3>Two space soft-tabs indentation</h3> <p>I know there have been many tab- and soft-tab wars on this planet. Google recommends using two space soft-tabs for Bash scripts. </p> <p>I don't care if I use two or four space indentations. I agree, however, that we should not use tabs. I tend to use four-space soft-tabs as that's how I currently configured Vim for any programming language. What matters most, though, is consistency within the same script/project.</p> <p>Google also recommends limiting the line length to 80 characters. For some people, that seems to be an old habit from the '80s, where all computer terminals couldn't display longer lines. But I think that the 80 character mark is still a good practice, at least for shell scripts. For example, I am often writing code on a Microsoft Go Tablet PC (running Linux, of course), and it comes in convenient if the lines are not too long due to the relatively small display on the device.</p> <p>I hit the 80 character line length quicker with the four spaces than with two spaces, but that makes me refactor the Bash code more aggressively, which is a good thing. </p> <h3>Breaking long pipes</h3> <p>Google recommends breaking up long pipes like this:</p> <pre> # All fits on one line command1 | command2 # Long commands command1 \ | command2 \ | command3 \ | command4 </pre><br /> <p>I think there is a better way like the following, which is less noisy. The pipe | already indicates the Bash that another command is expected, thus making the explicit line breaks with \ obsolete:</p> <pre> # Long commands command1 | command2 | command3 | command4 </pre><br /> <h3>Quoting your variables</h3> <p>Google recommends always quote your variables. Generally, it would be best if you did that only for variables where you are unsure about the content/values of the variables (e.g., content is from an external input source and may contain whitespace or other special characters). In my opinion, the code will become quite noisy when you always quote your variables like this:</p> <pre> greet () { local -r greeting="${1}" local -r name="${2}" echo "${greeting} ${name}!" } </pre><br /> <p>In this particular example, I agree that you should quote them as you don't know the input (are there, for example, whitespace characters?). But if you are sure that you are only using simple bare words, then I think that the code looks much cleaner when you do this instead:</p> <pre> say_hello_to_paul () { local -r greeting=Hello local -r name=Paul echo "$greeting $name!" } </pre><br /> <p>You see, I also omitted the curly braces { } around the variables. I only use the curly braces around variables when it makes the code either easier/clearer to read or if it is necessary to use them:</p> <pre> declare FOO=bar # Curly braces around FOO are necessary echo "foo${FOO}baz" </pre><br /> <p>A few more words on always quoting the variables: For the sake of consistency (and for making ShellCheck happy), I am not against quoting everything I encounter. I also think that the larger the Bash script becomes, the more critical it becomes always to quote variables. That's because it will be more likely that you might not remember that some of the functions don't work on values with spaces in them, for example. It's just that I won't quote everything in every small script I write. </p> <h3>Prefer built-in commands over external commands</h3> <p>Google recommends using the built-in commands over available external commands where possible:</p> <pre> # Prefer this: addition=$(( X + Y )) substitution="${string/#foo/bar}" # Instead of this: addition="$(expr "${X}" + "${Y}")" substitution="$(echo "${string}" | sed -e 's/^foo/bar/')" </pre><br /> <p>I can't entirely agree here. The external commands (especially sed) are much more sophisticated and powerful than the built-in Bash versions. Sed can do much more than the Bash can ever do by itself when it comes to text manipulation (the name "sed" stands for streaming editor, after all).</p> <p>I prefer to do light text processing with the Bash built-ins and more complicated text processing with external programs such as sed, grep, awk, cut, and tr. However, there is also medium-light text processing where I would want to use external programs. That is so because I remember using them better than the Bash built-ins. The Bash can get relatively obscure here (even Perl will be more readable then - Side note: I love Perl).</p> <p>Also, you would like to use an external command for floating-point calculation (e.g., bc) instead of using the Bash built-ins (worth noticing that ZSH supports built-in floating-points).</p> <p>I even didn't get started with what you can do with awk (especially GNU Awk), a fully-fledged programming language. Tiny Awk snippets tend to be used quite often in Shell scripts without honouring the real power of Awk. But if you did everything in Perl or Awk or another scripting language, then it wouldn't be a Bash script anymore, wouldn't it? ;-)</p> <h2>My additions</h2> <h3>Use of 'yes' and 'no'</h3> <p>Bash does not support a boolean type. I tend just to use the strings 'yes' and 'no' here. I used 0 for false and 1 for true for some time, but I think that the yes/no strings are easier to read. Yes, the Bash script would need to perform string comparisons on every check, but if performance is crucial to you, you wouldn't want to use a Bash script anyway, correct?</p> <pre> declare -r SUGAR_FREE=yes declare -r I_NEED_THE_BUZZ=no buy_soda () { local -r sugar_free=$1 if [[ $sugar_free == yes ]]; then echo 'Diet Dr. Pepper' else echo 'Pepsi Coke' fi } buy_soda $I_NEED_THE_BUZZ </pre><br /> <h3>Non-evil alternative to variable assignments via eval</h3> <p>Google is in the opinion that eval should be avoided. I think so too. They list these examples in their guide:</p> <pre> # What does this set? # Did it succeed? In part or whole? eval $(set_my_variables) # What happens if one of the returned values has a space in it? variable="$(eval some_function)" </pre><br /> <p>However, if I want to read variables from another file, I don't have to use eval here. I only have to source the file:</p> <pre> % cat vars.source.sh declare foo=bar declare bar=baz declare bay=foo % bash -c 'source vars.source.sh; echo $foo $bar $baz' bar baz foo </pre><br /> <p>And suppose I want to assign variables dynamically. In that case, I could just run an external script and source its output (This is how you could do metaprogramming in Bash without the use of eval - write code which produces code for immediate execution):</p> <pre> % cat vars.sh #!/usr/bin/env bash cat <<END declare date="$(date)" declare user=$USER END % bash -c 'source <(./vars.sh); echo "Hello $user, it is $date"' Hello paul, it is Sat 15 May 19:21:12 BST 2021 </pre><br /> <p>The downside is that ShellCheck won't be able to follow the dynamic sourcing anymore.</p> <h3>Prefer pipes over arrays for list processing</h3> <p>When I do list processing in Bash, I prefer to use pipes. You can chain them through Bash functions as well, which is pretty neat. Usually, my list processing scripts are of a structure like this:</p> <pre> filter_lines () { echo 'Start filtering lines in a fancy way!' >&2 grep ... | sed .... } process_lines () { echo 'Start processing line by line!' >&2 while read -r line; do ... do something and produce a result... echo "$result" done } # Do some post-processing of the data postprocess_lines () { echo 'Start removing duplicates!' >&2 sort -u } genreate_report () { echo 'My boss wants to have a report!' >&2 tee outfile.txt wc -l outfile.txt } main () { filter_lines | process_lines | postprocess_lines | generate_report } main </pre><br /> <p>The stdout is always passed as a pipe to the next following stage. The stderr is used for info logging.</p> <h3>Assign-then-shift</h3> <p>I often refactor existing Bash code. That leads me to add and removing function arguments quite often. It's pretty repetitive work changing the $1, $2.... function argument numbers every time you change the order or add/remove possible arguments.</p> <p>The solution is to use of the "assign-then-shift"-method, which goes like this: "local -r var1=$1; shift; local -r var2=$1; shift". The idea is that you only use "$1" to assign function arguments to named (better readable) local function variables. You will never have to bother about "$2" or above. That is very useful when you constantly refactor your code and remove or add function arguments. It's something that I picked up from a colleague (a pure Bash wizard) some time ago:</p> <pre> some_function () { local -r param_foo="$1"; shift local -r param_baz="$1"; shift local -r param_bay="$1"; shift ... } </pre><br /> <p>Want to add a param_baz? Just do this:</p> <pre> some_function () { local -r param_foo="$1"; shift local -r param_bar="$1"; shift local -r param_baz="$1"; shift local -r param_bay="$1"; shift ... } </pre><br /> <p>Want to remove param_foo? Nothing easier than that:</p> <pre> some_function () { local -r param_bar="$1"; shift local -r param_baz="$1"; shift local -r param_bay="$1"; shift ... } </pre><br /> <p>As you can see, I didn't need to change any other assignments within the function. Of course, you would also need to change the function argument lists at every occasion where the function is invoked - you would do that within the same refactoring session.</p> <h3>Paranoid mode</h3> <p>I call this the paranoid mode. The Bash will stop executing when a command exits with a status not equal to 0:</p> <pre> set -e grep -q foo <<< bar echo Jo </pre><br /> <p>Here 'Jo' will never be printed out as the grep didn't find any match. It's unrealistic for most scripts to run in paranoid mode purely, so there must be a way to add exceptions. Critical Bash scripts of mine tend to look like this:</p> <pre> #!/usr/bin/env bash set -e some_function () { .. some critical code ... set +e # Grep might fail, but that's OK now grep .... local -i ec=$? set -e .. critical code continues ... if [[ $ec -ne 0 ]]; then ... fi ... } </pre><br /> <h2>Learned</h2> <p>There are also a couple of things I've learned from Google's guide.</p> <h3>Unintended lexicographical comparison.</h3> <p>The following looks like a valid Bash code:</p> <pre> if [[ "${my_var}" > 3 ]]; then # True for 4, false for 22. do_something fi </pre><br /> <p>... but it is probably an unintended lexicographical comparison. A correct way would be:</p> <pre> if (( my_var > 3 )); then do_something fi </pre><br /> <p>or</p> <pre> if [[ "${my_var}" -gt 3 ]]; then do_something fi </pre><br /> <h3>PIPESTATUS</h3> <p>I have never used the PIPESTATUS variable before. I knew that it's there, but I never bothered to understand how it works until now thoroughly.</p> <p>The PIPESTATUS variable in Bash allows checking of the return code from all parts of a pipe. If it's only necessary to check the success or failure of the whole pipe, then the following is acceptable:</p> <pre> tar -cf - ./* | ( cd "${dir}" && tar -xf - ) if (( PIPESTATUS[0] != 0 || PIPESTATUS[1] != 0 )); then echo "Unable to tar files to ${dir}" >&2 fi </pre><br /> <p>However, as PIPESTATUS will be overwritten as soon as you do any other command, if you need to act differently on errors based on where it happened in the pipe, you'll need to assign PIPESTATUS to another variable immediately after running the command (don't forget that [ is a command and will wipe out PIPESTATUS).</p> <pre> tar -cf - ./* | ( cd "${DIR}" && tar -xf - ) return_codes=( "${PIPESTATUS[@]}" ) if (( return_codes[0] != 0 )); then do_something fi if (( return_codes[1] != 0 )); then do_something_else fi </pre><br /> <h2>Use common sense and BE CONSISTENT.</h2> <p>The following two paragraphs are thoroughly quoted from the Google guidelines. But they hit the hammer on the head:</p> <p class="quote"><i>If you are editing code, take a few minutes to look at the code around you and determine its style. If they use spaces around their if clauses, you should, too. If their comments have little boxes of stars around them, make your comments have little boxes of stars around them too.</i></p> <p class="quote"><i>The point of having style guidelines is to have a common vocabulary of coding so people can concentrate on what you are saying rather than on how you are saying it. We present global style rules here, so people know the vocabulary. But local style is also important. If the code you add to a file looks drastically different from the existing code around it, the discontinuity throws readers out of their rhythm when they go to read it. Try to avoid this.</i></p> <h2>Advanced Bash learning pro tip</h2> <p>I also highly recommend having a read through the "Advanced Bash-Scripting Guide" (not from Google). I use it as the universal Bash reference and learn something new every time I look at it.</p> <a class="textlink" href="https://tldp.org/LDP/abs/html/">Advanced Bash-Scripting Guide</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Welcome to the Geminispace</title> <link href="gemini://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi" /> <id>gemini://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace.gmi</id> <updated>2021-04-24T19:28:41+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>Have you reached this article already via Gemini? You need a special client for that, web browsers such as Firefox, Chrome, Safari etc. don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is: ... to read on visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Welcome to the Geminispace</h1> <p class="quote"><i>Published by Paul at 2021-04-24, last updated at 2021-06-18, ASCII Art by Andy Hood</i></p> <p>Have you reached this article already via Gemini? It requires a Gemini client; web browsers such as Firefox, Chrome, Safari, etc., don't support the Gemini protocol. The Gemini address of this site (or the address of this capsule as people say in Geminispace) is:</p> <a class="textlink" href="gemini://foo.zone">gemini://foo.zone</a><br /> <p>However, if you still use HTTP, you are just surfing the fallback HTML version of this capsule. In that case, I suggest reading on what this is all about :-).</p> <pre> /\ / \ | | |NASA| | | | | | | ' ` |Gemini| | | |______| '-`'-` . / . \'\ . .' ''( .'\.' ' .;' '.;.;' ;'.;' ..;;' AsH </pre><br /> <h2>Motivation</h2> <h3>My urge to revamp my personal website</h3> <p>For some time, I had to urge to revamp my personal website. Not to update the technology and its design but to update all the content (+ keep it current) and start a small tech blog again. So unconsciously, I began to search for an excellent platform to do all of that in a KISS (keep it simple & stupid) way.</p> <h3>My still great Laptop running hot</h3> <p>Earlier this year (2021), I noticed that my almost seven-year-old but still great Laptop started to become hot and slowed down while surfing the web. Also, the Laptop's fan became quite noisy. This was all due to the additional bloat such as JavaScript, excessive use of CSS, tracking cookies+pixels, ads, and so on there was on the website. </p> <p>All I wanted was to read an interesting article, but after a big advertising pop-up banner appeared and made everything worse, I gave up and closed the browser tab.</p> <h2>Discovering the Gemini internet protocol</h2> <p>Around the same time, I discovered a relatively new, more lightweight protocol named Gemini, which does not support all these CPU-intensive features like HTML, JavaScript, and CSS. Also, tracking and ads are unsupported by the Gemini protocol.</p> <p>The "downside" is that due to the limited capabilities of the Gemini protocol, all sites look very old and spartan. But that is not a downside; that is, in fact, a design choice people made. It is up to the client software how your capsule looks. For example, you could use a graphical client, such as Lagrange, with nice font renderings and colours to improve the appearance. Or you could use a very minimalistic command line black-and-white Gemini client. It's your (the user's) choice.</p> <a href="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png"><img alt="Screenshot Amfora Gemini terminal client surfing this site" title="Screenshot Amfora Gemini terminal client surfing this site" src="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace/amfora-screenshot.png" /></a><br /> <a href="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace/lagrange-screenshot.png"><img alt="Screenshot graphical Lagrange Gemini client surfing this site" title="Screenshot graphical Lagrange Gemini client surfing this site" src="https://foo.zone/gemfeed/2021-04-24-welcome-to-the-geminispace/lagrange-screenshot.png" /></a><br /> <p>Why is there a need for a new protocol? As the modern web is a superset of Gemini, can't we use simple HTML 1.0 instead? That's a good and valid question. It is not a technical problem but a human problem. We tend to abuse the features once they are available. You can ensure that things stay efficient and straightforward as long as you are using the Gemini protocol. On the other hand, you can't force every website on the modern web to only create plain and straightforward-looking HTML pages.</p> <h2>My own Gemini capsule</h2> <p>As it is effortless to set up and maintain your own Gemini capsule (Gemini server + content composed via the Gemtext markup language), I decided to create my own. What I like about Gemini is that I can use my favourite text editor and get typing. I don't need to worry about the style and design of the presence, and I also don't have to test anything in ten different web browsers. I can only focus on the content! As a matter of fact, I am using the Vim editor + its spellchecker + auto word completion functionality to write this. </p> <p>This site was generated with Gemtexter. You can read more about it here:</p> <a class="textlink" href="https://foo.zone/gemfeed/2021-06-05-gemtexter-one-bash-script-to-rule-it-all.html">Gemtexter - One Bash script to rule it all</a><br /> <h2>Gemini advantages summarised</h2> <ul> <li>Supports an alternative to the modern bloated web</li> <li>Easy to operate and easy to write content</li> <li>No need to worry about various web browser compatibilities</li> <li>It's the client's responsibility how the content is designed+presented</li> <li>Lightweight (although not as lightweight as the Gopher protocol)</li> <li>Supports privacy (no cookies, no request header fingerprinting, TLS encryption)</li> <li>Fun to play with (it's a bit geeky, yes, but a lot of fun!)</li> </ul> <h2>Dive into deep Gemini space</h2> <p>Check out one of the following links for more information about Gemini. For example, you will find a FAQ that explains why the protocol is named Gemini. Many Gemini capsules are dual-hosted via Gemini and HTTP(S) so that people new to Gemini can sneak peek at the content with a regular web browser. Some people go as far as tri-hosting all their content via HTTP(S), Gemini and Gopher.</p> <a class="textlink" href="gemini://gemini.circumlunar.space">gemini://gemini.circumlunar.space</a><br /> <a class="textlink" href="https://gemini.circumlunar.space">https://gemini.circumlunar.space</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>DTail - The distributed log tail program</title> <link href="gemini://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi" /> <id>gemini://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program.gmi</id> <updated>2021-04-22T19:28:41+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal Gemini capsule too. ...to read on visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>DTail - The distributed log tail program</h1> <p class="quote"><i>Published by Paul at 2021-04-22, last updated at 2021-04-26</i></p> <a href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/title.png"><img alt="DTail logo image" title="DTail logo image" src="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/title.png" /></a><br /> <p>This article first appeared at the Mimecast Engineering Blog but I made it available here in my personal internet site too.</p> <a class="textlink" href="https://medium.com/mimecast-engineering/dtail-the-distributed-log-tail-program-79b8087904bb">Original Mimecast Engineering Blog post at Medium</a><br /> <p>Running a large cloud-based service requires monitoring the state of huge numbers of machines, a task for which many standard UNIX tools were not really designed. In this post, I will describe a simple program, DTail, that Mimecast has built and released as Open-Source, which enables us to monitor log files of many servers at once without the costly overhead of a full-blown log management system.</p> <p>At Mimecast, we run over 10 thousand server boxes. Most of them host multiple microservices and each of them produces log files. Even with the use of time series databases and monitoring systems, raw application logs are still an important source of information when it comes to analysing, debugging, and troubleshooting services.</p> <p>Every engineer familiar with UNIX or a UNIX-like platform (e.g., Linux) is well aware of tail, a command-line program for displaying a text file content on the terminal which is also especially useful for following application or system log files with tail -f logfile.</p> <p>Think of DTail as a distributed version of the tail program which is very useful when you have a distributed application running on many servers. DTail is an Open-Source, cross-platform, fairly easy to use, support and maintain log file analysis & statistics gathering tool designed for Engineers and Systems Administrators. It is programmed in Google Go.</p> <h2>A Mimecast Pet Project</h2> <p>DTail got its inspiration from public domain tools available already in this area but it is a blue sky from-scratch development which was first presented at Mimecastβs annual internal Pet Project competition (awarded with a Bronze prize). It has gained popularity since and is one of the most widely deployed DevOps tools at Mimecast (reaching nearly 10k server installations) and many engineers use it on a regular basis. The Open-Source version of DTail is available at:</p> <a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br /> <p>Try it out β We would love any feedback. But first, read onβ¦</p> <h2>Differentiating from log management systems</h2> <p>Why not just use a full-blown log management system? There are various Open-Source and commercial log management solutions available on the market you could choose from (e.g. the ELK stack). Most of them store the logs in a centralized location and are fairly complex to set up and operate. Possibly they are also pretty expensive to operate if you have to buy dedicated hardware (or pay fees to your cloud provider) and have to hire support staff for it.</p> <p>DTail does not aim to replace any of the log management tools already available but is rather an additional tool crafted especially for ad-hoc debugging and troubleshooting purposes. DTail is cheap to operate as it does not require any dedicated hardware for log storage as it operates directly on the source of the logs. It means that there is a DTail server installed on all server boxes producing logs. This decentralized comes with the direct advantages that there is no introduced delay because the logs are not shipped to a central log storage device. The reduced complexity also makes it more robust against outages. You wonβt be able to troubleshoot your distributed application very well if the log management infrastructure isnβt working either.</p> <a href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif"><img alt="DTail sample session animated gif" title="DTail sample session animated gif" src="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dtail.gif" /></a><br /> <p>As a downside, you wonβt be able to access any logs with DTail when the server is down. Furthermore, a server can store logs only up to a certain capacity as disks will fill up. For the purpose of ad-hoc debugging, these are not typically issues. Usually, itβs the application you want to debug and not the server. And disk space is rarely an issue for bare metal and VM-based systems these days, with sufficient space for several weeksβ worth of log storage being available. DTail also supports reading compressed logs. The currently supported compression algorithms are gzip and zstd.</p> <h2>Combining simplicity, security and efficiency</h2> <p>DTail also has a client component that connects to multiple servers concurrently for log files (or any other text files).</p> <p>The DTail client interacts with a DTail server on port TCP/2222 via SSH protocol and does not interact in any way with the systemβs SSH server (e.g., OpenSSH Server) which might be running at port TCP/22 already. As a matter of fact, you donβt need a regular SSH server running for DTail at all. There is no support for interactive login shells at TCP/2222 either, as by design that port can only be used for text data streaming. The SSH protocol is used for the public/private key infrastructure and transport encryption only and DTail implements its own protocol on top of SSH for the features provided. There is no need to set up or buy any additional TLS certificates. The port 2222 can be easily reconfigured if you preferred to use a different one.</p> <p>The DTail server, which is a single static binary, will not fork an external process. This means that all features are implemented in native Go code (exception: Linux ACL support is implemented in C, but it must be enabled explicitly on compile time) and therefore helping to make it robust, secure, efficient, and easy to deploy. A single client, running on a standard Laptop, can connect to thousands of servers concurrently while still maintaining a small resource footprint.</p> <p>Recent log files are very likely still in the file system caches on the servers. Therefore, there tends to be a minimal I/O overhead involved.</p> <h2>The DTail family of commands</h2> <p>Following the UNIX philosophy, DTail includes multiple command-line commands each of them for a different purpose:</p> <ul> <li>dserver: The DTail server, the only binary required to be installed on the servers involved.</li> <li>dtail: The distributed log tail client for following log files.</li> <li>dcat: The distributed cat client for concatenating and displaying text files.</li> <li>dgrep: The distributed grep client for searching text files for a regular expression pattern.</li> <li>dmap: The distributed map-reduce client for aggregating stats from log files.</li> </ul> <a href="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif"><img alt="DGrep sample session animated gif" title="DGrep sample session animated gif" src="https://foo.zone/gemfeed/2021-04-22-dtail-the-distributed-log-tail-program/dgrep.gif" /></a><br /> <h2>Usage example</h2> <p>The use of these commands is almost self-explanatory for a person already used to the standard command line in Unix systems. One of the main goals is to make DTail easy to use. A tool that is too complicated to use under high-pressure scenarios (e.g., during an incident) can be quite detrimental.</p> <p>The basic idea is to start one of the clients from the command line and provide a list of servers to connect to with βservers. You also must provide a path of remote (log) files via βfiles. If you want to process multiple files per server, you could either provide a comma-separated list of file paths or make use of file system globbing (or a combination of both).</p> <p>The following example would connect to all DTail servers listed in the serverlist.txt, follow all files with the ending .log and filter for lines containing the string error. You can specify any Go compatible regular expression. In this example we add the case-insensitive flag to the regex:</p> <pre> dtail βservers serverlist.txt βfiles β/var/log/*.logβ βregex β(?i:error)β </pre><br /> <p>You usually want to specify a regular expression as a client argument. This will mean that responses are pre-filtered for all matching lines on the server-side and thus sending back only the relevant lines to the client. If your logs are growing very rapidly and the regex is not specific enough there might be the chance that your client is not fast enough to keep up processing all of the responses. This could be due to a network bottleneck or just as simple as a slow terminal emulator displaying the log lines on the client-side.</p> <p>A green 100 in the client output before each log line received from the server always indicates that there were no such problems and 100% of all log lines could be displayed on your terminal (have a look at the animated Gifs in this post). If the percentage falls below 100 it means that some of the channels used by the servers to send data to the client are congested and lines were dropped. In this case, the color will change from green to red. The user then could decide to run the same query but with a more specific regex.</p> <p>You could also provide a comma-separated list of servers as opposed to a text file. There are many more options you could use. The ones listed here are just the very basic ones. There are more instructions and usage examples on the GitHub page. Also, you can study even more of the available options via the βhelp switch (some real treasures might be hidden there).</p> <h2>Fitting it in</h2> <p>DTail integrates nicely into the user management of existing infrastructure. It follows normal system permissions and does not open new βholesβ on the server which helps to keep security departments happy. The user would not have more or less file read permissions than he would have via a regular SSH login shell. There is a full SSH key, traditional UNIX permissions, and Linux ACL support. There is also a very low resource footprint involved. On average for tailing and searching log files less than 100MB RAM and less than a quarter of a CPU core per participating server are required. Complex map-reduce queries on big data sets will require more resources accordingly.</p> <h2>Advanced features</h2> <p>The features listed here are out of the scope of this blog post but are worthwhile to mention:</p> <ul> <li>Distributed map-reduce queries on stats provided in log files with dmap. dmap comes with its own SQL-like aggregation query language.</li> <li>Stats streaming with continuous map-reduce queries. The difference to normal queries is that the stats are aggregated over a specified interval only on the newly written log lines. Thus, giving a de-facto live stat view for each interval.</li> <li>Server-side scheduled queries on log files. The queries are configured in the DTail server configuration file and scheduled at certain time intervals. Results are written to CSV files. This is useful for generating daily stats from the log files without the need for an interactive client.</li> <li>Server-side stats streaming with continuous map-reduce queries. This for example can be used to periodically generate stats from the logs at a configured interval, e.g., log error counts by the minute. These then can be sent to a time-series database (e.g., Graphite) and then plotted in a Grafana dashboard.</li> <li>Support for custom extensions. E.g., for different server discovery methods (so you donβt have to rely on plain server lists) and log file formats (so that map-reduce queries can parse more stats from the logs).</li> </ul> <h2>For the future</h2> <p>There are various features we want to see in the future.</p> <ul> <li>A spartan mode, not printing out any extra information but the raw remote log files would be a nice feature to have. This will make it easier to post-process the data produced by the DTail client with common UNIX tools. (To some degree this is possible already, just disable the ANSI terminal color output of the client with -noColors and pipe the output to another program).</li> <li>Tempting would be implementing the dgoawk command, a distributed version of the AWK programming language purely implemented in Go, for advanced text data stream processing capabilities. There are 3rd party libraries available implementing AWK in pure Go which could be used.</li> <li>A more complex change would be the support of federated queries. You can connect to thousands of servers from a single client running on a laptop. But does it scale to 100k of servers? Some of the servers could be used as middleware for connecting to even more servers.</li> <li>Another aspect is to extend the documentation. Especially the advanced features such as map-reduce query language and how to configure the server-side queries currently do require more documentation. For now, you can read the code, sample config files or just ask the author for that! But this will be certainly addressed in the future.</li> </ul> <h2>Open Source</h2> <p>Mimecast highly encourages you to have a look at DTail and submit an issue for any features you would like to see. Have you found a bug? Maybe you just have a question or comment? If you want to go a step further: We would also love to see pull requests for any features or improvements. Either way, if in doubt just contact us via the DTail GitHub page.</p> <a class="textlink" href="https://dtail.dev">https://dtail.dev</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Realistic load testing with I/O Riot for Linux</title> <link href="gemini://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi" /> <id>gemini://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux.gmi</id> <updated>2018-06-01T14:50:29+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>This text first was published in the german IT-Administrator computer Magazine. 3 years have passed since and I decided to publish it on my blog too. . .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Realistic load testing with I/O Riot for Linux</h1> <pre> .---. / \ \.@-@./ /`\_/`\ // _ \\ | \ )|_ /`\_`> <_/ \ jgs\__/'---'\__/ </pre><br /> <p class="quote"><i>Published by Paul at 2018-06-01, last updated at 2021-05-08</i></p> <h2>Foreword</h2> <p>This text first was published in the german IT-Administrator computer Magazine. 3 years have passed since and I decided to publish it on my blog too. </p> <a class="textlink" href="https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot">https://www.admin-magazin.de/Das-Heft/2018/06/Realistische-Lasttests-mit-I-O-Riot</a><br /> <p>I havn't worked on I/O Riot for some time now, but all what is written here is still valid. I am still using I/O Riot to debug I/O issues and pattern once in a while, so by all means the tool is not obsolete yet. The tool even helped to resolve a major production incident at work caused by disk I/O.</p> <p>I am eagerly looking forward to revamp I/O Riot so that it uses the new BPF Linux capabilities instead of plain old Systemtap (or alternatively: Newer versions of Systemtap can also use BPF as the backend I have learned). Also, when I wrote I/O Riot initially, I didn't have any experience with the Go programming language yet and therefore I wrote it in C. Once it gets revamped I might consider using Go instead of C as it would spare me from many segmentation faults and headaches during development ;-). I might also just stick to C for plain performance reasons and just refactor the code dealing with concurrency.</p> <p>Pleace notice that some of the screenshots show the command "ioreplay" instead of "ioriot". That's because the name has changed after taking those.</p> <h1>The article</h1> <p>With I/O Riot IT administrators can load test and optimize the I/O subsystem of Linux-based operating systems. The tool makes it possible to record I/O patterns and replay them at a later time as often as desired. This means bottlenecks can be reproduced and eradicated. </p> <p>When storing huge amounts of data, such as more than 200 billion archived emails at Mimecast, it's not only the available storage capacity that matters, but also the data throughput and latency. At the same time, operating costs must be kept as low as possible. The more systems involved, the more important it is to optimize the hardware, the operating system and the applications running on it.</p> <h2>Background: Existing Techniques</h2> <p>Conventional I/O benchmarking: Administrators usually use open source benchmarking tools like IOZone and bonnie++. Available database systems such as Redis and MySQL come with their own benchmarking tools. The common problem with these tools is that they work with prescribed artificial I/O patterns. Although this can test both sequential and randomized data access, the patterns do not correspond to what can be found on production systems.</p> <p>Testing by load test environment: Another option is to use a separate load test environment in which, as far as possible, a production environment with all its dependencies is simulated. However, an environment consisting of many microservices is very complex. Microservices are usually managed by different teams, which means extra coordination effort for each load test. Another challenge is to generate the load as authentically as possible so that the patterns correspond to a productive environment. Such a load test environment can only handle as many requests as its weakest link can handle. For example, load generators send many read and write requests to a frontend microservice, whereby the frontend forwards the requests to a backend microservice responsible for storing the data. If the frontend service does not process the requests efficiently enough, the backend service is not well utilized in the first place. As a rule, all microservices are clustered across many servers, which makes everything even more complicated. Under all these conditions it is very difficult to test I/O of separate backend systems. Moreover, for many small and medium-sized companies, a separate load test environment would not be feasible for cost reasons.</p> <p>Testing in the production environment: For these reasons, benchmarks are often carried out in the production environment. In order to derive value from this such tests are especially performed during peak hours when systems are under high load. However, testing on production systems is associated with risks and can lead to failure or loss of data without adequate protection.</p> <h2>Benchmarking the Email Cloud at Mimecast</h2> <p>For email archiving, Mimecast uses an internally developed microservice, which is operated directly on Linux-based storage systems. A storage cluster is divided into several replication volumes. Data is always replicated three times across two secure data centers. Customer data is automatically allocated to one or more volumes, depending on throughput, so that all volumes are automatically assigned the same load. Customer data is archived on conventional, but inexpensive hard disks with several terabytes of storage capacity each. I/O benchmarking proved difficult for all the reasons mentioned above. Furthermore, there are no ready-made tools for this purpose in the case of self-developed software. The service operates on many block devices simultaneously, which can make the RAID controller a bottleneck. None of the freely available benchmarking tools can test several block devices at the same time without extra effort. In addition, emails typically consist of many small files. Randomized access to many small files is particularly inefficient. In addition to many software adaptations, the hardware and operating system must also be optimized.</p> <p>Mimecast encourages employees to be innovative and pursue their own ideas in the form of an internal competition, Pet Project. The goal of the pet project I/O Riot was to simplify OS and hardware level I/O benchmarking. The first prototype of I/O Riot was awarded an internal roadmap prize in the spring of 2017. A few months later, I/O Riot was used to reduce write latency in the storage clusters by about 50%. The improvement was first verified by I/O replay on a test system and then successively applied to all storage systems. I/O Riot was also used to resolve a production incident caused by disk I/O load.</p> <h2>Using I/O Riot</h2> <p>First, all I/O events are logged to a file on a production system with I/O Riot. It is then copied to a test system where all events are replayed in the same way. The crucial point here is that you can reproduce I/O patterns as they are found on a production system as often as you like on a test system. This results in the possibility of optimizing the set screws on the system after each run.</p> <h3>Installation</h3> <p>I/O Riot was tested under CentOS 7.2 x86_64. For compiling, the GNU C compiler and Systemtap including kernel debug information are required. Other Linux distributions are theoretically compatible but untested. First of all, you should update the systems involved as follows:</p> <pre> % sudo yum update </pre><br /> <p>If the kernel is updated, please restart the system. The installation would be done without a restart but this would complicate the installation. The installed kernel version should always correspond to the currently running kernel. You can then install I/O Riot as follows:</p> <pre> % sudo yum install gcc git systemtap yum-utils kernel-devel-$(uname -r) % sudo debuginfo-install kernel-$(uname -r) % git clone https://github.com/mimecast/ioriot % cd ioriot % make % sudo make install % export PATH=$PATH:/opt/ioriot/bin </pre><br /> <p>Note: It is not best practice to install any compilers on production systems. For further information please have a look at the enclosed README.md.</p> <h3>Recording of I/O events</h3> <p>All I/O events are kernel related. If a process wants to perform an I/O operation, such as opening a file, it must inform the kernel of this by a system call (short syscall). I/O Riot relies on the Systemtap tool to record I/O syscalls. Systemtap, available for all popular Linux distributions, helps you to take a look at the running kernel in productive environments, which makes it predestined to monitor all I/O-relevant Linux syscalls and log them to a file. Other tools, such as strace, are not an alternative because they slow down the system too much.</p> <p>During recording, ioriot acts as a wrapper and executes all relevant Systemtap commands for you. Use the following command to log all events to io.capture:</p> <pre> % sudo ioriot -c io.capture </pre><br /> <a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png"><img alt="Screenshot I/O recording" title="Screenshot I/O recording" src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure1-ioriot-io-recording.png" /></a><br /> <p>A Ctrl-C (SIGINT) stops recording prematurely. Otherwise, ioriot terminates itself automatically after 1 hour. Depending on the system load, the output file can grow to several gigabytes. Only metadata is logged, not the read and written data itself. When replaying later, only random data is used. Under certain circumstances, Systemtap may omit some system calls and issue warnings. This is to ensure that Systemtap does not consume too many resources.</p> <h3>Test preparation</h3> <p>Then copy io.capture to a test system. The log also contains all accesses to the pseudo file systems devfs, sysfs and procfs. This makes little sense, which is why you must first generate a cleaned and playable version io.replay from io.capture as follows:</p> <pre> % sudo ioriot -c io.capture -r io.replay -u $USER -n TESTNAME </pre><br /> <p>The parameter -n allows you to assign a freely selectable test name. An arbitrary system user under which the test is to be played is specified via paramater -u.</p> <h3>Test Initialization</h3> <p>The test will most likely want to access existing files. These are files the test wants to read but does not create by itself. The existence of these must be ensured before the test. You can do this as follows:</p> <pre> % sudo ioriot -i io.replay </pre><br /> <p>To avoid any damage to the running system, ioreplay only works in special directories. The tool creates a separate subdirectory for each file system mount point (e.g. /, /usr/local, /store/00,...) (here: /.ioriot/TESTNAME, /usr/local/.ioriot/TESTNAME, /store/00/.ioriot/TESTNAME,...). By default, the working directory of ioriot is /usr/local/ioriot/TESTNAME.</p> <a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png"><img alt="Screenshot test preparation" title="Screenshot test preparation" src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure2-ioriot-test-preparation.png" /></a><br /> <p>You must re-initialize the environment before each run. Data from previous tests will be moved to a trash directory automatically, which can be finally deleted with "sudo ioriot -P".</p> <h3>Replay</h3> <p>After initialization, you can replay the log with -r. You can use -R to initiate both test initialization and replay in a single command and -S can be used to specify a file in which statistics are written after the test run.</p> <p>You can also influence the playback speed: "-s 0" is interpreted as "Playback as fast as possible" and is the default setting. With "-s 1" all operations are performed at original speed. "-s 2" would double the playback speed and "-s 0.5" would halve it.</p> <a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png"><img alt="Screenshot replaying I/O" title="Screenshot replaying I/O" src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure3-ioriot-replay.png" /></a><br /> <p>As an initial test, for example, you could compare the two Linux I/O schedulers CFQ and Deadline and check which scheduler the test runs the fastest. They run the test separately for each scheduler. The following shell loop iterates through all attached block devices of the system and changes their I/O scheduler to the one specified in variable $new_scheduler (in this case either cfq or deadline). Subsequently, all I/O events from the io.replay protocol are played back. At the end, an output file with statistics is generated:</p> <pre> % new_scheduler=cfq % for scheduler in /sys/block/*/queue/scheduler; do echo $new_scheduler | sudo tee $scheduler done % sudo ioriot -R io.replay -S cfq.txt % new_scheduler=deadline % for scheduler in /sys/block/*/queue/scheduler; do echo $new_scheduler | sudo tee $scheduler done % sudo ioriot -R io.replay -S deadline.txt </pre><br /> <p>According to the results, the test could run 940 seconds faster with Deadline Scheduler:</p> <pre> % cat cfq.txt Num workers: 4 hreads per worker: 128 otal threads: 512 Highest loadavg: 259.29 Performed ioops: 218624596 Average ioops/s: 101544.17 Time ahead: 1452s Total time: 2153.00s % cat deadline.txt Num workers: 4 Threads per worker: 128 Total threads: 512 Highest loadavg: 342.45 Performed ioops: 218624596 Average ioops/s: 180234.62 Time ahead: 2392s Total time: 1213.00s </pre><br /> <p>In any case, you should also set up a time series database, such as Graphite, where the I/O throughput can be plotted. Figures 4 and 5 show the read and write access times of both tests. The break-in makes it clear when the CFQ test ended and the deadline test was started. The reading latency of both tests is similar. Write latency is dramatically improved using the Deadline Scheduler.</p> <a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png"><img alt="Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler." title="Graphite visualization of the mean read access times in ms with CFQ and Deadline Scheduler." src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure4-ioriot-read-latency.png" /></a><br /> <a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png"><img alt="Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler." title="Graphite visualization of the average write access times in ms with CFQ and Deadline Scheduler." src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure5-ioriot-write-latency.png" /></a><br /> <p>You should also take a look at the iostat tool. The iostat screenshot shows the output of iostat -x 10 during a test run. As you can see, a block device is fully loaded with 99% utilization, while all other block devices still have sufficient buffer. This could be an indication of poor data distribution in the storage system and is worth pursuing. It is not uncommon for I/O Riot to reveal software problems.</p> <a href="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png"><img alt="Output of iostat. The block device sdy seems to be almost fully utilized by 99%." title="Output of iostat. The block device sdy seems to be almost fully utilized by 99%." src="https://foo.zone/gemfeed/2018-06-01-realistic-load-testing-with-ioriot-for-linux/figure6-iostat.png" /></a><br /> <h2>I/O Riot is Open Source</h2> <p>The tool has already proven to be very useful and will continue to be actively developed as time and priority permits. Mimecast intends to be an ongoing contributor to Open Source. You can find I/O Riot at:</p> <a class="textlink" href="https://github.com/mimecast/ioriot">https://github.com/mimecast/ioriot</a><br /> <h2>Systemtap</h2> <p>Systemtap is a tool for the instrumentation of the Linux kernel. The tool provides an AWK-like programming language. Programs written in it are compiled from Systemtap to C- and then into a dynamically loadable kernel module. Loaded into the kernel, the program has access to Linux internals. A Systemtap program written for I/O Riot monitors when, with which parameters, at which time, and from which process I/O syscalls take place and their return values.</p> <p>For example, the open syscall opens a file and returns the responsible file descriptor. The read and write syscalls can operate on a file descriptor and return the number of read or written bytes. The close syscall closes a given file descriptor. I/O Riot comes with a ready-made Systemtap program, which you have already compiled into a kernel module and installed to /opt/ioriot. In addition to open, read and close, it logs many other I/O-relevant calls.</p> <a class="textlink" href="https://sourceware.org/systemtap/">https://sourceware.org/systemtap/</a><br /> <h2>More refereces</h2> <a class="textlink" href="http://www.iozone.org/">IOZone</a><br /> <a class="textlink" href="https://www.coker.com.au/bonnie++/">Bonnie++</a><br /> <a class="textlink" href="https://graphiteapp.org">Graphite</a><br /> <a class="textlink" href="https://en.wikipedia.org/wiki/Memory-mapped_I/O">Memory mapped I/O</a><br /> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Object oriented programming with ANSI C</title> <link href="gemini://foo.zone/gemfeed/2016-11-20-object-oriented-programming-with-ansi-c.gmi" /> <id>gemini://foo.zone/gemfeed/2016-11-20-object-oriented-programming-with-ansi-c.gmi</id> <updated>2016-11-20T22:10:57+00:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>You can do a little of object-oriented programming in the C Programming Language. However, that is, in my humble opinion, limited. It's easier to use a different programming language than C for OOP. But still it's an interesting exercise to try using C for this.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Object oriented programming with ANSI C</h1> <pre> ___ ___ ____ ____ / _ \ / _ \| _ \ / ___| | | | | | | | |_) |____| | | |_| | |_| | __/_____| |___ \___/ \___/|_| \____| </pre><br /> <p class="quote"><i>Published by Paul at 2016-11-20, updated 2022-01-29</i></p> <p>You can do a little of object-oriented programming in the C Programming Language. However, that is, in my humble opinion, limited. It's easier to use a different programming language than C for OOP. But still it's an interesting exercise to try using C for this.</p> <h2>Function pointers</h2> <p>Let's have a look at the following sample program. All you have to do is to add a function pointer such as "calculate" to the definition of struct "something_s". Later, during the struct initialization, assign a function address to that function pointer:</p> <pre> #include <stdio.h> typedef struct { double (*calculate)(const double, const double); char *name; } something_s; double multiplication(const double a, const double b) { return a * b; } double division(const double a, const double b) { return a / b; } int main(void) { something_s mult = (something_s) { .calculate = multiplication, .name = "Multiplication" }; something_s div = (something_s) { .calculate = division, .name = "Division" }; const double a = 3, b = 2; printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b)); printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b)); } </pre><br /> <p>As you can see, you can call the function (pointed by the function pointer) with the same syntax as in C++ or Java:</p> <pre> printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b)); printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b)); </pre><br /> <p>However, that's just syntactic sugar for:</p> <pre> printf("%s(%f, %f) => %f\n", mult.name, a, b, (*mult.calculate)(a,b)); printf("%s(%f, %f) => %f\n", div.name, a, b, (*div.calculate)(a,b)); </pre><br /> <p>Output:</p> <pre> pbuetow ~/git/blog/source [38268]% gcc oop-c-example.c -o oop-c-example pbuetow ~/git/blog/source [38269]% ./oop-c-example Multiplication(3.000000, 2.000000) => 6.000000 Division(3.000000, 2.000000) => 1.500000 </pre><br /> <p>Not complicated at all, but nice to know and helps to make the code easier to read!</p> <h2>That's not OOP, though</h2> <p>However, that's not really how it works in object-oriented languages such as Java and C++. The method call in this example is not a method call as "mult" and "div" in this example are not "message receivers". I mean that the functions can not access the state of the "mult" and "div" struct objects. In C, you would need to do something like this instead if you wanted to access the state of "mult" from within the calculate function, you would have to pass it as an argument:</p> <pre> mult.calculate(mult,a,b)); </pre><br /> <h2>Real object oriented programming with C</h2> <p>If you want to take it further, hit "Object-Oriented Programming with ANSI-C" into your favourite internet search engine or follow the link below. It goes as far as writing a C preprocessor in AWK, which takes some object-oriented pseudo-C and transforms it to plain C so that the C compiler can compile it to machine code. This is similar to how the C++ language had its origins.</p> <a class="textlink" href="https://www.cs.rit.edu/~ats/books/ooc.pdf">https://www.cs.rit.edu/~ats/books/ooc.pdf</a><br /> <h2>OOP design patterns in the Linux Kernel</h2> <p>Big C software projects, like Linux, also follow some OOP techniques:</p> <a class="textlink" href="https://lwn.net/Articles/444910/">https://lwn.net/Articles/444910/</a><br /> <p>C is a very old programming language with it's quirks. This might be one of the reasons why Linux will also let Rust code in.</p> <p>E-Mail me your comments to paul at buetow dot org!</p> </div> </content> </entry> <entry> <title>Spinning up my own authoritative DNS servers</title> <link href="gemini://foo.zone/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi" /> <id>gemini://foo.zone/gemfeed/2016-05-22-spinning-up-my-own-authoritative-dns-servers.gmi</id> <updated>2016-05-22T18:59:01+01:00</updated> <author> <name>Paul Buetow</name> <email>comments@mx.buetow.org</email> </author> <summary>Finally, I had time to deploy my own authoritative DNS servers (master and slave) for my domains 'buetow.org' and 'buetow.zone'. My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files). And they also give you the opportunity to set your own authoritative DNS servers for your domains. From now I am making use of that option.. .....to read on please visit my site.</summary> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <h1>Spinning up my own authoritative DNS servers</h1> <p class="quote"><i>Published by Paul at 2016-05-22</i></p> <h2>Background</h2> <p>Finally, I had time to deploy my authoritative DNS servers (master and slave) for my domains "buetow.org" and "buetow.zone". My domain name provider is Schlund Technologies. They allow their customers to edit the DNS records (BIND files) manually. And they also allow you to set your authoritative DNS servers for your domains. From now, I am making use of that option.</p> <a class="textlink" href="http://www.schlundtech.de">Schlund Technologies</a><br /> <h2>All FreeBSD Jails</h2> <p>To set up my authoritative DNS servers, I installed a FreeBSD Jail dedicated for DNS with Puppet on my root machine as follows:</p> <pre> include freebsd freebsd::ipalias { '2a01:4f8:120:30e8::14': ensure => up, proto => 'inet6', preflen => '64', interface => 're0', aliasnum => '5', } include jail::freebsd class { 'jail': ensure => present, jails_config => { dns => { '_ensure' => present, '_type' => 'freebsd', '_mirror' => 'ftp://ftp.de.freebsd.org', '_remote_path' => 'FreeBSD/releases/amd64/10.1-RELEASE', '_dists' => [ 'base.txz', 'doc.txz', ], '_ensure_directories' => [ '/opt', '/opt/enc' ], 'host.hostname' => "'dns.ian.buetow.org'", 'ip4.addr' => '192.168.0.15', 'ip6.addr' => '2a01:4f8:120:30e8::15', }, . . } } </pre><br /> <h2>PF firewall</h2> <p>Please note that "dns.ian.buetow.org" is just the Jail name of the master DNS server (and "caprica.ian.buetow.org" the name of the Jail for the slave DNS server) and that I am using the DNS names "dns1.buetow.org" (master) and "dns2.buetow.org" (slave) for the actual service names (these are the DNS servers visible to the public). Please also note that the IPv4 address is an internal one. I have a PF to use NAT and PAT. The DNS ports are being forwarded (TCP and UDP) to that Jail. By default, all ports are blocked, so I am adding an exception rule for the IPv6 address. These are the PF rules in use:</p> <pre> % cat /etc/pf.conf . . # dns.ian.buetow.org rdr pass on re0 proto tcp from any to $pub_ip port {53} -> 192.168.0.15 rdr pass on re0 proto udp from any to $pub_ip port {53} -> 192.168.0.15 pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state . . </pre><br /> <h2>Puppet managed BIND zone files</h2> <p>In "manifests/dns.pp" (the Puppet manifest for the Master DNS Jail itself), I configured the BIND DNS server this way:</p> <pre> class { 'bind_freebsd': config => "puppet:///files/bind/named.${::hostname}.conf", dynamic_config => "puppet:///files/bind/dynamic.${::hostname}", } </pre><br /> <p>The Puppet module is a pretty simple one. It installs the file "/usr/local/etc/named/named.conf" and it populates the "/usr/local/etc/named/dynamicdb" directory with all my zone files.</p> <p>Once (Puppet-) applied inside of the Jail, I get this:</p> <pre> paul uranus:~/git/blog/source [4268]% ssh admin@dns1.buetow.org.buetow.org pgrep -lf named 60748 /usr/local/sbin/named -u bind -c /usr/local/etc/namedb/named.conf paul uranus:~/git/blog/source [4269]% ssh admin@dns1.buetow.org.buetow.org tail -n 13 /usr/local/etc/namedb/named.conf zone "buetow.org" { type master; notify yes; allow-update { key "buetoworgkey"; }; file "/usr/local/etc/namedb/dynamic/buetow.org"; }; zone "buetow.zone" { type master; notify yes; allow-update { key "buetoworgkey"; }; file "/usr/local/etc/namedb/dynamic/buetow.zone"; }; paul uranus:~/git/blog/source [4277]% ssh admin@dns1.buetow.org.buetow.org cat /usr/local/etc/namedb/dynamic/buetow.org $TTL 3600 @ IN SOA dns1.buetow.org. domains.buetow.org. ( 25 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; Infrastructure domains @ IN NS dns1 @ IN NS dns2