I spend at least 8 hours of my day writing code, staring at a text editor and some terminals. Thinking about data structures, about how to store data efficiently, and about how to teach my computer (or others) to do things that I don't want to have to do manually, because the computer is more trustworthy than I am at certain tasks. It can calculate faster, remember things for longer, and talk to people faster than I would ever be able to, so I delegate many of these tasks to it. But I _can't trust_ it.
We live in a time when trusting a computer is normal, and even _expected_. It's expected that I use them to check my bank account balance, to find a ride home, or to guide me to new places. It's expected that I share my personal data with them and trust that it's being kept somewhere safe. I trust that they will deliver the internet service I have purchased, that my fire alarm will warn me in time, and so on and so forth.
The most direct means by which we can influence these problems is by writing software. Having done just that every day for at least the last six years, I am stunned every time someone says that they trust software, or even wishes that it would control or influence more of their life. The people who write the code you use can and do make mistakes, leaving doors open and letting bugs slip in. I know because I make these mistakes, and work with people who make these mistakes, every day.
The maxim that "all software breaks" is the status quo. Even in environments where there is deliberate software quality assurance, be it via automated tests, manual testing, or internal and/or open source code review, bugs still slip into production. New vulnerabilities arise every day in software, even in code that has been in use for 15 years. None of that prevents "autonomous" cars from killing people. It doesn't prevent massive breaches from exposing users' data, whether the entity holding it is public or private.
Self-driving Uber car killed woman (NBC News)
Linux kernel security vulnerability
List of known websites with data breaches
Brazilian government's massive data breach in 2021
I keep these ideas in mind when thinking about my own projects. When I write
code for a company, I have little decision-making power regarding these matters,
which is evident in the fact that we use MongoDB as a relational database
because it was hot and new at the time. But for my own work, I make different choices. I prefer reliable software, for obvious reasons; upgradable hardware, even if it's older; strongly typed languages that can pre-empt my mistakes; and whichever database is appropriate for the task, not the job listing. If my
software works, and it works for a long period of time, I need a good goddamn
reason to change it.
Change is a part of the software development lifecycle, and must be managed appropriately. Most of the time, I argue that the best management is no change at all. But change demands a reason: it needs to be provoked by a real need, or else changes just accumulate into a pile of the new problems they inevitably bring along.
One of these poor motivators, in my observation, is the desire to fit universal solutions into software that should instead address a specific domain. Complexity grows exponentially when a project does not have a fixed scope and does not draw a line on how far it will expand to accommodate every little problem and specific need of its users.
i3, the window manager that I use, has reached a mature state where it may look stagnant, but it simply doesn't want more features, so it can focus on bug fixes and performance, which in the long run gives the project more robustness and stability. You can then combine it with other tools to fill a specific use case.
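As a minimal sketch of that composition, a couple of lines from an i3 config (these particular bindings are just illustrative defaults) delegate program launching and the status line to separate tools:

```
# launching programs is delegated to dmenu, a separate tool
bindsym $mod+d exec dmenu_run

# the status line is rendered by i3status, another standalone program
bar {
    status_command i3status
}
```

i3 itself only manages windows; everything else is left to whichever tool you prefer.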
Another issue I have noticed is the obsession with solving trivial problems with huge and problematic solutions: instead of opting for the simplest path, picking the most overengineered one.
Ruby's minitest testing library is a nice example of the opposite, as it is small and simple. It does not try to perform magic, nor does it try to be smarter than the developer who uses it. It relies on the basic Ruby concepts of classes, modules, and methods. It only does what it must, in the simplest and least intrusive way it can. There's no DSL, and there's no huge set of assertion matchers.
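A minimal sketch of what that looks like (the class and test names here are made up for illustration):

```
require "minitest/autorun"

# a test is just a plain Ruby class; test cases are plain methods
# whose names start with "test_"
class TestShoppingCart < Minitest::Test
  def setup
    @cart = [] # ordinary Ruby objects, no framework magic
  end

  def test_starts_empty
    assert_empty @cart
  end

  def test_adding_an_item
    @cart << "book"
    assert_equal 1, @cart.length
  end
end
```

Running it is just "ruby test_cart.rb"; there is nothing to learn beyond Ruby itself.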
My text editor, vim, has had the same predictable behaviour for over a decade, and its forks keep the same default behaviour, so I know I can hop onto some random machine and it will work the same way everywhere. All it has to do is put some text onto a screen; if I need something else, I can customize it to fit my needs.
WeeChat, the IRC client I have used to talk with friends for at least three years, runs smoothly, while the Slack app that I have to use to communicate with my work colleagues can't go more than an hour without crashing, lagging in some way, or eating half of my RAM, even though its main functionality, sending and receiving real-time text messages, is a problem that was solved over 30 years ago.
So I choose reliable software: simple, auditable, performant, even if that means sacrificing some features or somewhat limiting the capability to connect or compose with external tools. Embedding these ideals in my life as a developer means always casting a critical eye on the seemingly "easy", "new and shiny" solutions to old problems, and always seeking to understand the trade-offs they bring along, which usually means killing stability and/or offloading that cost as a maintenance burden and/or complexity.
--
The post "Regarding Reliable Software" was published in March 29, 2021.
The content of this site is under the terms of Creative Commons CC-BY-SA. The
code is available under GPL-3.0.