
On exponential growth and saturation effects in computer technology

Exponential growth is on the minds of many these days, as is the "flattening of the curve". It is also very prominent in computer technology: Moore's law anticipated the growth of CPU transistor counts, memory density and hard disk capacity quite well for many decades. But there are prominent cases where technological development has saturated.
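To make the contrast concrete, here is a minimal sketch (not from the original post; all parameter values are made up purely for illustration) comparing pure exponential growth, which doubles at a fixed rate indefinitely, with a logistic curve that grows at the same initial rate but flattens out as it approaches a saturation level:

```python
# Illustrative sketch: exponential growth with a fixed doubling period
# versus a logistic ("flattening") curve that saturates near a capacity K.
# The numbers are arbitrary and only meant to show the two shapes.

import math

def exponential(t, start=1.0, doubling_period=2.0):
    """Unbounded growth: doubles every `doubling_period` time units."""
    return start * 2 ** (t / doubling_period)

def logistic(t, start=1.0, doubling_period=2.0, K=100.0):
    """Same early growth rate, but levels off near the capacity K."""
    r = math.log(2) / doubling_period  # growth rate matching the doubling period
    return K / (1 + (K / start - 1) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):8.1f}  logistic={logistic(t):6.1f}")
```

The first curve is a rough stand-in for Moore's law in its heyday; the second is the kind of "flattening" this post is about: both start out nearly identical, but the logistic curve stalls once it approaches its ceiling.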

The history of electronic calculators

1962 can be seen as the year that electronic calculators started. Before that, there were only the "big" programmable computers and relay-based desk calculators (such as the Casio 14-A), which were very expensive and did not sell in large numbers. The ANITA was first sold that year: a vacuum-tube desk calculator with Nixie tubes as a display. It did not calculate in binary but in decimal, using "Dekatrons" (decimal counter tubes). Just a year later, the Friden EC-130 was presented, an all-transistor desk calculator with a CRT display and a magnetostrictive delay-line memory. Many more desk calculators like these were made by various companies throughout the mid and late 1960s, adding more features while still being based on discrete transistor electronics. These devices were bulky and sold for thousands of dollars (1960s dollars!), but for many scientific and commercial applications they were worth it.

Then, things went rather quickly. In the late 1960s, integrated circuits became cheaper than a discrete realization of their function, and the level of integration was rapidly increased further. In the early 1970s, the first microprocessors and single-chip calculator ICs were made and used in early handheld calculators. In 1972, the HP-35 was released for $395: a scientific pocket calculator featuring trigonometric functions as well as exponents and logarithms. By 1977, basic calculators were cheap, had LCD screens, and ran for months or years on button-cell batteries.

Rapid saturation

For basic calculators (typically featuring +, -, *, /, %, square root and an accumulating memory), development pretty much ended here. A modern one doesn't offer anything more than one from 1977. This certainly isn't because of technological limitations, but simply because most people don't need a pocket calculator with more than those functions. Scientific calculators developed for a few more years, but by the mid-80s, models equivalent to modern ones were cheaply available as well. I know there is still some development in powerful computer algebra systems (CAS), but on the one hand, few of those interested in such advanced features need them on the go; they would rather use a laptop or desktop computer. On the other hand, with the immense popularity of smartphones, most would rather use an advanced calculator application on their phone than carry a separate device.

Desktop computers

This saturation has also reached desktop computers. You can use a 12-year-old computer (e.g. an Athlon 64 X2 with 4 GB of RAM) perfectly well for nearly all "typical" desktop/office tasks. If you give it an SSD and run a light- or middleweight OS on it (e.g. MX Linux), it is likely to be very fast and smooth to operate. The computer will certainly hit its limits when you open 50 tabs in Firefox, start up modern video games or try 4K video editing, but that doesn't exactly render it impractical.

For comparison, think of using a 12-year-old computer in the year 2000: an 80386 machine with 2 MB of RAM (a typical new system for 1988) is pretty much useless by then, except for running legacy software and hardware. You can't run a contemporary desktop OS on such a machine, and you can't even run a contemporary terminal-only OS (BSD or GNU/Linux without X).
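To put rough numbers on that contrast, here is a back-of-the-envelope sketch: the 2 MB (1988) and 4 GB (2009) figures come from the text above, while the ~128 MB (2000) and ~16 GB (2021) figures are my own ballpark assumptions for a typical desktop of those years.

```python
# Rough comparison of how much "typical" desktop RAM grew over two
# different 12-year spans. 2 MB (1988) and 4 GB (2009) are taken from
# the text; ~128 MB (2000) and ~16 GB (2021) are ballpark assumptions.

spans_mb = {
    "1988 -> 2000": (2, 128),
    "2009 -> 2021": (4096, 16384),
}

for label, (old, new) in spans_mb.items():
    print(f"{label}: roughly {new // old}x more RAM")
```

A factor of roughly 64 versus a factor of roughly 4 over the same span of years is one way to see why a 12-year-old machine was hopeless in 2000 but is merely unspectacular today.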

A new system architecture?

By now, the set of tasks that typical users want to perform on their computers is fairly well established. It seems unlikely to me that any groundbreaking new applications for wide use on desktop computers will show up in the near future. This makes it a really good time to develop a new system architecture: to get rid of many legacy complications and to move towards more open and resource-respecting hardware.

I think it would be great to have a new RISC-V-based system with an entirely open hardware design: no mystery chips doing mystery things in their mystery firmware. Sensible standards would be enforced for all core system components: every sound, video or network chip would have to be usable with generic free (GPL) drivers. None of the x86 legacy clutter reaching back to the original IBM PC. This would make operating systems simpler, smaller and more efficient. Less energy would be required to manufacture and operate such systems. It would be easier for people to get into system administration, as it would not require knowledge of 40 years of quirks and workarounds (concerning the PC hardware, at least). It would overall be a better computer system for most people.

As much as I like this idea, I see two main points of criticism:

1) The system would only be gloriously great if it were widely adopted. If it is realized but remains obscure, well, damn. If it gains a significant following, but not quite enough for hardware manufacturers and software developers to really join in, it will be cumbersome for everyone and eventually fizzle out. The ideas presented above therefore have to appeal to more than just a few technology enthusiasts.

2) The idea of solving a problem by throwing more new and fancy technology at it has brought us our modern standard of living (in the "western" world, that is), but also the impending climate catastrophe, among other problems of globalization. The way things are right now is not sustainable in the long term, and it is very much questionable whether it is appropriate to throw yet more new and fancy technology at this problem in order to solve it.

"Computers help us solving problems we wouldn't have without them."