I was listening to a podcast recently and one of the guests made a statement to the effect that software is never finished. This made me think of a recent thread in which the subject of discussion was the FLOSS equivalent of WordPad. I would argue that WordPad had long ago achieved that elusive (at least in the software world) quality of being "finished", much like MS Paint, or the majority of the shell text processing utilities in a BSD system. What more should be added to "more"? It's done. So long as the interfaces it relies on don't change out from under it and no security vulnerabilities are found, there is no good reason to continue adding to its codebase.
My answer to the question was, for the record, Ted. And Ted is also, IMO, finished software. Its stated goal is to be a feature complete word processor for the Rich Text Format, and it fully implements everything that you can do in RTF. The only possible reason to refresh the codebase at this point would be to port it to a new toolkit if X11 actually goes away (Ted uses Xlib).
While I will freely acknowledge that not all software falls into this category, and in particular that the larger the scope of a piece of software the harder it is to ever be "done", I think this mindset has done a lot to screw up not just Open Source software; it also goes a long way towards explaining why the Big Web and browsers have become the Eldritch Horror that they are today.
Gopher dates back to 1991 and is documented in RFC 1436. It survives to this day largely unchanged. One of the big draws of Gemini, at least to me, is that since its early days it has largely stabilized and you can count on the protocol itself, and the Gemtext markup format, to be largely the same now as it was then and likely into the future. HTTP, on the other hand, has seen a dramatic push to move from HTTP/1.1 to HTTP/2 and now to HTTP/3. That's not even taking into account the markup language HTML in all of its various forms over the years, plus the continual evolution of JavaScript. We also now have things such as WebSockets. A lot of this is useful stuff, to be sure, but it has locked everyone into a never ending upgrade cycle where a web browser is literally larger than a complete operating system and running `git clone --recursive` on the Chromium source tree will not even complete if left to run overnight. This upgrade cycle is so intense that we update our browser more often than any other piece of software, and the browser lifecycle is largely the driving force behind the hardware lifecycle itself, as modern browsers cannot run on underpowered hardware (hardware which would have been considered overpowered ten years ago).
I have a love/hate relationship with Gnome. On the one hand, I greatly admire that the Gnome devs are willing to experiment and forge ahead into uncharted territory. On the other hand, I'm sick to death of their attitude regarding deprecations, removing interfaces and breaking APIs. Gnome 45 once again breaks the API for Gnome Shell extensions so thoroughly that basically every single extension must be updated. What's worse is that this has already happened multiple times in the past.
It's also pretty interesting seeing how often the Gnome core apps rotate to the new and shiny, leaving old programs that are perfectly functional to die in favor of new projects. Not long ago Gnome got a brand new text editor which replaced Gedit. Now, I never was really a Gedit user, and I haven't spent a lot of time in Text Editor either. I'm just pointing out the long-standing pattern of rewriting the same applications every few years. Think about how many times Gnome has had a new and shiny default music player. It's a lot. Like, really a lot.
This duplication of effort has definitely been noted before, but usually people are talking about how Linux has a dozen different desktop environments and hundreds of distributions. What I'm referring to is the same project continually rewriting components every few years, primarily for the purpose of cosmetic changes to keep up with the times. It gets even worse when they get competitive about it. I'm sure at least some readers will remember the rollouts of Gnome 3 and KDE 4, which led to a situation where both of the flagship desktop environments on Linux were mostly unusable and badly broken for a fairly extensive period of time.
Browsers are once again one of the worst culprits of this type of "refresh" behavior. Just how many times in its history has a UI refresh been the major selling point behind a new release of Firefox? Users actually get pissed when you start moving buttons around on them and changing their workflows because you hired a new designer who wants to take your project in a "bold new direction", yet it seems like we are destined to keep repeating this constant cycle of churn forever.
The biggest of the big Web platforms fall into this trap quite often as well. I've all but stopped using Facebook, but what I remember of their platform was that you could never actually get comfortable using it because something was destined to move around pretty much every day. Google has been a huge culprit here as well, constantly tweaking their interfaces and twiddling knobs in ways that baffle users, usually with the mandate to get ever-increasing amounts of advertising in front of your eyeballs.
Even appliance-like devices such as streaming sticks show this tendency. A couple of homes ago, my home entertainment system included a MythTV box with Kodi running on an Android TV box, the Xiaomi MiBox. I actually loved the MiBox for the first two years I had it, and Android TV had at that time what I considered the perfect interface - a simple grid of your apps. Then Google announced a "major upgrade" to Android TV, which amounted to endless rows of algorithmically generated "suggestions" and completely destroyed anything pleasant about using it. I honestly would have considered the prior interface to be one of those rare pieces of software that could be considered "finished", with no further development or tweaking required. It did exactly what it needed to do without wasting a single clock cycle on anything I didn't want it to do.
I can't stress enough how unacceptable it would have been years ago for a consumer device like this to be sold in a store and then, at some random point in the future, have its entire user-facing interface change. But that's just the tip of the iceberg when you consider that the automobile industry has also jumped on this bandwagon.
Computers have been in automobiles for a very long time. Even during the age of the carburetor, we began seeing electronic ignition modules replacing points and breakers. There was good reason for this. A points and breaker system, while relatively simple, is prone to drifting out of adjustment or suffering a buildup of grime and carbon which impairs its function. It's not surprising that this is one of the first places that computers made their entrance into cars, but they were quickly followed by electronic fuel injection systems of increasing complexity. In its simplest form, electronic fuel injection consists of a single injector positioned at the throttle body, combined with airflow and oxygen sensors which help the computer to meter the fuel at a more precise ratio than a carburetor can, particularly during engine startup. As these systems became more complex we added variable length intake runners, variable valve timing, exhaust gas recirculation, misfire detection and more. Eventually, instead of separate systems controlling the engine and various other functions, everything either interconnected or became unified, adding sensors for things like tire pressure, conveniences such as remote start, computer controlled shift points, keyless entry, RFID chipped keys, you name it. In the past ten years this slow creep became an avalanche as our "infotainment" systems were also integrated into the ever advancing beast.
This in itself isn't a completely bad thing, but it has side effects linked to capitalism, greed and sometimes outright stupidity. Tesla builds cars without an actual physical door latch actuator. Imagine a situation where there has been an electrical fire and the cabin is filling with smoke, but you can't get out because the doors refuse to operate. Fun, right? How about an "infotainment" system so badly coded that it becomes completely bricked by an incoming file's extension not matching the data in the file. This was an actual recall. They didn't bother checking MIME magic, and there was no way to re-flash the software "in the field", causing people to lose the majority of their car's instrumentation because they tuned into the wrong radio station at the wrong time. We also have such fuckery as built-in features that the company rents back to you, a la BMW's heated seats. Today's cars all come networked whether you want them to be or not, and there doesn't seem to be much in the way of hardening going on. Gone are the days of parking at the point with your date, when you consider that someone could very well be listening and watching everything you do through your car's interior cameras and microphones and selling your personal data for extra profit. It's a little known fact, but today's automakers are among the largest suppliers of the steady stream of personal data that massive tech companies are so addicted to. After all, they know everywhere you go, who rides with you, who you talk to while driving, which restaurants you went to on your way, and could easily listen in on all of your conversations along the way. It would just be wasteful not to squeeze some more profit out of this goldmine.
Bringing us back to the subject for a moment, these networked cars also get something that never used to be a thing back when computers in cars were relegated to the task of controlling combustion - over the air updates. You may well find that your car at some point in the future has gotten an OTA update that you never knew about or explicitly agreed to, one that has serious bugs or that rearranges your dash display based on the latest design trends. Or you may find that your windshield wipers have now been classified as a subscription-only, "premium" service. Gone are the days when the software in your car was flashed to ROM. Now everything is fluid in the same way that it has become in your smartphone.
Actually, that heading is misleading in that it was NEVER like this until recently. My first computer was the infamous Tandy TRS-80, which came with DOS flashed directly to ROM and no user facing method to update or replace it. The OS was considered a finished product. My first game console, like those of so many people my age, was an Atari 2600. Now, the Atari really didn't have an operating system. The games were written to run on the bare hardware with no abstraction. But the games themselves were ALL finished products that could not be changed once they left the factory. This pattern continued with my next game console, the original Nintendo game system.
After the Tandy, the Atari and the Nintendo I went a number of years without a computer. My next computer came with Windows ME installed on it. A short time after getting it we got dialup internet, and the modem I bought came with its driver on a CD. I was at that time very into photography, and even though I was working primarily with film I also bought a scanner and was very interested in this new digital photography thing. Not having the money for Photoshop, I bought a copy of Photoshop Elements, a stripped-down version. Once again, it came on a CD with no expectation that I would ever update it in any way. That was just how software used to be delivered. You couldn't rush something to market and then patch bugs later, because there was no later after you shipped it. Once shipped, it was a finished product, bugs and all.
I'm not about to tell you that this was better and that we should give up on the modern way that we ship software though. Around the same time that I bought that copy of Photoshop for broke guys, I discovered Gimp and then Linux in quick succession and realized very quickly that this was a better model in many ways, and not just because I was habitually broke and liked free things. I do like free things. But no, there are a lot of reasons why it's better to get your software in the form of an open source project besides cost. Still, the idea that software can never be "finished" is a lie, and a harmful one at that.
If I could make one point and have it stick, it would be that a software project should have a clear definition of its intended feature set and a strict definition of what is in or out of scope for that project. I'm a firm believer that it's often better to write a new program than to bolt on a feature later that you never planned for. Now, you can build a certain degree of flexibility into your program's architecture, but it's never possible to know what new technologies or hardware capabilities are going to come along in the future. You are damning yourself and your project to a lot of pain if you don't exercise some restraint in your goals and planning here.
BSD ex/vi is a great example of a feature complete program. Once it reached a point where it was considered good enough to replace the original ex/vi in the 386BSD source tree (Bill Joy's original ex/vi was encumbered by AT&T code), it was considered feature complete and has largely only seen security updates in the decades since. There have been a few quality of life improvements like better charset support, but in general if you desire something with more features than vi, you install one of the many later editors with a larger feature set that it inspired. I wouldn't want to see it change much.
One of my absolute favorite moments in software history was during the early weeks of the OpenBSD sponsored fork of OpenSSL, LibreSSL, when the developers announced that they had thrown out well over 10,000 lines of code that they deemed never should have been written.
I try to follow this philosophy in my own projects when I deem it appropriate. I wrote a sudo replacement a while back called Jah, in honor of Bob Marley, which I consider feature complete. Unlike sudo, jah has no config file to parse. It doesn't sanitize environment variables for you, so if you're concerned about passing environment vars to a privileged command you should run it as `env -i jah <program>`. It doesn't let you run commands as anyone other than the root user and group. Its sole purpose is to allow members of the "wheel" group to run privileged commands to administer their system. I have absolutely zero interest in adding features to it, because they would be anti-features for its intended purpose - a dead simple, easy to audit replacement for sudo, for the 99.99% of the time that you only need 0.00001% of sudo's feature set and thus don't want to risk having a complex and impossible to fully comprehend 04755 root:root binary on your system.
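To make the "dead simple" claim concrete, here is a minimal sketch in C of what a jah-like wrapper boils down to. This is not jah's actual source, and the real implementation may differ in language and detail; it's just an illustration of how little such a tool needs to do: confirm that the invoking user is in "wheel", switch to root, and exec the requested command.

```
/* Hypothetical sketch of a jah-like wrapper - NOT the actual jah source.
 * Assumes the binary is installed mode 04755, owned root:root, so that
 * it starts with an effective UID of 0. */
#include <sys/types.h>
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Return 1 if the user with this UID belongs to the "wheel" group. */
static int in_wheel(uid_t uid) {
    struct passwd *pw = getpwuid(uid);
    struct group *gr = getgrnam("wheel");
    if (pw == NULL || gr == NULL)
        return 0;
    if (pw->pw_gid == gr->gr_gid)
        return 1;
    for (char **member = gr->gr_mem; *member != NULL; member++)
        if (strcmp(*member, pw->pw_name) == 0)
            return 1;
    return 0;
}

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: jah <command> [args...]\n");
        return 1;
    }
    /* Check the *real* UID - the user who actually ran us. */
    if (!in_wheel(getuid())) {
        fprintf(stderr, "jah: you are not a member of the wheel group\n");
        return 1;
    }
    /* Become root for both real and effective IDs, group first. */
    if (setgid(0) != 0 || setuid(0) != 0) {
        perror("jah: unable to switch to root");
        return 1;
    }
    execvp(argv[1], &argv[1]);
    perror("jah: exec failed");
    return 1;
}
```

Everything sudo layers on top of that - sudoers parsing, per-command rules, environment scrubbing, plugins - is more surface area you have to trust, which is exactly why leaving it out is the feature.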
One of my other projects also happens to be a text editor, Vapad. It's got some nice features it inherited just by using GtkSourceView, but I purposely limited it to something I still consider "basic". There are few enough preferences that all of the user facing settings can be toggled using entries in the hamburger menu, so there is no preferences window. I like it this way. Now, I did add some features for the coming release, but they're only there to support running Vapad in other environments (mobile) by making the interface adaptive. Now that that's done, Vapad is feature complete. I'll release new versions if there are bugs, or if people submit more language translations.
If I were to remake Star Wars today, during the scene in Empire where the Falcon is running from Vader's star destroyer Han would pull the lever to "go to lightspeed" only to find that the hyperdrive was currently unavailable due to an automatic software update.
Think about infrastructure for a moment. Things like traffic lights and elevators. Imagine if we treated their operating systems in the laissez-faire way that we treat "productivity" software. Imagine we rolled out the elevators in a 99 floor building downtown while their control systems were still in beta, with obvious bugs that 3% of the time left people stranded for hours on the 42nd floor - with the doors closed. Imagine that once things settled down, the engineers immediately began working on a new replacement for the software running those elevators because the interface wasn't deemed beautiful enough. Imagine traffic lights stopped working one day because the developers were following new human interface guidelines that clearly state that "traffic lights are a bad design pattern and they're not coming back". How about a nuclear submarine that got an OTA update while submerged, delivered over ULF radio with the bytes coming in at a few tens of bytes a second. Imagine that it's a security patch to prevent another nation from remotely taking control of a pressure valve in the reactor's cooling system, and imagine the sphincter clenching going on in that fragile metal tube as they watch the bytes crawling in.
Some software can afford to follow this modern pattern we've fallen into, where "software is never finished". I don't understand why we put up with it being this pervasive, however. If I buy a device that has "no user serviceable parts" then I want to know not only that it's functioning perfectly on day one, but that it's always going to function exactly the same way. I don't really want my car to be networked in the first place. I grew up with manual window cranks and door locks. I actually have a strong preference for that type of construction in consumer devices, because every extra gadget you add is another failure point. When those gadgets are controlled by a networked computer, they are also attack surface.
You probably can't do much to affect the overall trend on your own, of course, but collectively we can demand better. When writing software, do so with a clear end goal in mind. Make that goal something achievable, and once you reach it stop adding features and focus on making the code bulletproof until it's obvious there's nothing left that you know to look for. Then announce that your program is "feature complete" and find something better to do.
When choosing software, put these same values forward. I've mentioned Harelang before, but let me put it up as an example again here. Hare is a programming language designed and built by Drew DeVault and a community of other volunteers. Before they even released it to the public, Hare had an official and complete specification, and it's interesting to note that the specification is actually smaller than that of C. Hare is a compiled systems-level language complete with pointers and manual memory management. To keep things simple, and to ensure the goals for the project were attainable, choices were made to only support Open Source operating systems and to leave out generics and threading. As a result they managed to bring the language from concept to working on multiple architectures and operating systems in just a few years. Hare's website clearly states that once all of the features in the spec are implemented the language is "finished", and any new features would mean creating a new language.
Put simply, no. We can innovate just fine without creating constant churn and keeping the things we build in a perpetually half-baked state. Some things which are more complex are necessarily going to remain in a state of flux for longer, and that is also fine. As people continue to find new ways to use computers, interface patterns will obviously have to change to match. There is nothing wrong with Gnome making adaptive interfaces so that their software can run unmodified on both a desktop computer and a smartphone. That's a worthwhile innovation that we absolutely need. But we need to be better at balancing the cost of perpetual churn against letting people get on with the business of using what we've created to do their own creative work.
How many houses would get built if the carpenters spent all day customizing their hammers? We got further by creating a completely new tool, the nail gun, than we could ever have gotten with any modification to the hammer. It was better to consider the claw hammer design complete and think about whether a completely different paradigm was possible than to focus forever on tweaking hammers. What's more, the simple and lowly claw hammer coexists in harmony with the new tool, because sometimes rather than driving a nail really quickly you just need to bash something really hard or break two pieces apart. You'll get further using a claw hammer for those tasks than you will using a nail gun. In the same way, there are times when I absolutely prefer to open something in BSD ex/vi rather than NeoVim with all of my customizations. It's exponentially faster and handles a 10k line CSV file just fine, while an IDE would grind to a halt under that abuse.
But that's all just, like, my opinion, man.
All content for this site is licensed as CC BY-SA.
© 2023 by JeanG3nie