Tech Goldilocks

Getting tech just right can be tricky; on the one hand you have the type that spends hours on IRC asking how to make a read(2) call not block, how to ask whether there are bytes available to be read, how to deal with timeouts and unexpected handle closures, and how to deal with all the prior complications across multiple handles at the same time. The general advice here is to use an async library. You can write some or all of this yourself (and it might be good practice to do so, possibly in private) but on the other hand libevent and various other async libraries and languages do exist. Yes, there can be a learning curve, but there is also a learning curve to figuring out select(2) or poll(2) or whatever, and to stepping on (some of) the same rakes that the authors of the async library have already stepped on and know how to avoid, depending on the maturity of the code in question. Here, the complication of an abstraction that handles async I/O is probably a good thing.
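To make the single-handle case concrete, here is a minimal sketch using poll(2); the descriptor, buffer size, and five-second timeout are arbitrary choices for illustration, not anything from the IRC discussions above. Note how much ceremony one descriptor already needs before multiplying by N handles, which is the territory the async libraries have already mapped out.

```
/* Sketch: read from one descriptor, with a timeout, without
 * blocking forever. Real code would also handle EINTR, partial
 * reads, and (the hard part) many descriptors at once. */
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
    struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
    char buf[4096];

    /* Wait up to five seconds for bytes to become available. */
    int ready = poll(&pfd, 1, 5000);
    if (ready == -1) {
        perror("poll"); /* and maybe retry on EINTR... */
        exit(1);
    } else if (ready == 0) {
        fputs("timeout\n", stderr);
        exit(2);
    }
    /* An unexpected handle closure or error shows up here. */
    if (pfd.revents & (POLLHUP | POLLERR)) {
        fputs("handle closed or errored\n", stderr);
        exit(3);
    }
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n == -1) {
        perror("read");
        exit(1);
    }
    printf("read %zd bytes\n", n);
    return 0;
}
```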

On the other hand, one can end up with code that takes hours to build… it's just a small module, so what is wrong? Details eventually emerge; they are using a fairly heavyweight build and package system, and somehow Angular is involved, plus "nodejs tools like grunt and bower", various git repositories, probably overly complicated semantic versioning and git tags, and other things we doubtless haven't been told about yet. Containers? Continuous integration? A cluster of Kubernetes? In this case the tech is probably "too hot" (given all the CPU and human time wasted) and maybe one should rethink some of that complexity. (Unless you've convinced someone to give you money to keep all those balls of mud spinning.)

A metric to consider might be "footguns per square meter", which is apparently pretty high for things like self-hosted Kubernetes, and probably also high for bespoke I/O code that someone threw together. A related concern is "can this thing be debugged", which may or may not be possible. One might encounter problems that result in "well, here's your new, empty Exchange calendar system, sorry about all those events that used to be in there". This was at a company swimming with resources that paid for support from Microsoft; apparently the Energy Policy Act of 2005 was a bit too complicated a curveball for some date and time code (see the sketch below). A related question to "can this thing be debugged" is "how long will it take to stand up a new, empty instance after the (maybe) undebuggable thing falls over, and how often do we expect the thing to fall over?" Does the accelerator sometimes get stuck down? That could be a problem! In the case of bespoke I/O code you would probably be looking at a rewrite to something more complicated (and hopefully tested), or sometimes you will end up sticking straw and duct tape into the engine to try to keep it working. Welcome to ops!
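On that curveball: a minimal, hedged sketch of the safer approach, which is to ask the system's (updatable) zoneinfo database for the current DST status via localtime(3), rather than hard-coding the "first Sunday in April" rule that the 2005 act moved out from under everyone in 2007. This is illustrative only, and assumes nothing about what the Exchange code actually did.

```
#include <stdio.h>
#include <time.h>

int
main(void)
{
    /* Ask the tzdata for the local rules in effect right now. */
    time_t now = time(NULL);
    struct tm tm;
    localtime_r(&now, &tm);
    /* tm_isdst comes from the zoneinfo database, which the vendor
     * updates when politicians move the dates; a rule frozen into
     * the binary does not get that luxury. */
    printf("DST in effect: %s\n",
        tm.tm_isdst > 0 ? "yes" : tm.tm_isdst == 0 ? "no" : "unknown");
    return 0;
}
```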