💾 Archived View for dece.space › docs › permacomputing_update_2021.gmi captured on 2023-09-08 at 16:13:57. Gemini links have been rewritten to link to archived content
This document is an HTML to Gemtext conversion of the Permacomputing 2021 updates article by Viznut, made with md2gemini and cleaned up by hand. I only took the liberty of giving links more meaningful labels.
It is now more than a year since I wrote my "early notes" about Permacomputing. At that time, I was not yet aware of anyone else having similar ideas, so I've now decided to write an update that connects my ideas with the existing discussions and activities. I also want to share some new ideas I have been pondering. This text is about 33K characters / 4900 words long, so allocate your time accordingly.
my "early notes" about Permacomputing
The "biosphere-aware computing scene" is quite fragmented. There are many different islands (groups and individuals) that use different terminology and that are only now discovering each other. It is therefore important to build bridges between the islands.
Computing within Limits workshops, which started in 2015, form an important hub but have been rather invisible from non-academic perspectives. Many interesting papers have come out of these workshops, but I would really like to see more practical and/or longer-term projects that go beyond the shortish workshop papers. Computing within Limits branched out of the larger field of "sustainable" ICT, which is known to have huge problems.
Computing within Limits workshops
Another hub is in the Fediverse, particularly around the Mastodon server Merveilles.town that centers around creativity and sustainable technology. Many of these productive hackers, artists and activists also participate in the "smolnet"/"smallnet", including the space of the Gemini protocol. My Permacomputing article was very well received in these circles and many have adopted the concept for their use.
Then there's the "Sustainable Internet" activism that has the Branch online magazine. I tend to lump this together with the various "solar web" projects such as the solar-powered version of Low-Tech Magazine and the Solar Protocol. Also somewhat related is the Small File Media Festival that criticizes the carbon footprint of streamed media with smallish (video) files. This is an area where the demoscene could make important contributions.
Solar-powered version of Low-Tech Magazine
In addition to the generic groups of like-minded people, there are specific projects, such as Collapse OS, whose participants don't necessarily have connections to wider groups.
Occasionally, an online article pops up that expresses similar concerns and ideas as I did with the Permacomputing essay, like Wim Vanderbauwhede's Frugal computing. It is great to see that many different people independently come to similar conclusions, but this can also be seen as a sign that we need more social media activism and awareness-raising just to help all the concerned people find each other.
Marloes de Valk has been mapping this scattered "pluriverse" and its terminology, but I have the feeling that this only scratches the surface, and that there's a lot of relevant practice going on in e.g. non-Western countries.
A major problem with this "pluriverse" is the lack of a common name to be used in communication. "Permacomputing" scored quite high in De Valk's Fediverse poll, and I have no objections against using it for this purpose. Something like "radically sustainable computing" might also be a good umbrella term ("radically" being the keyword that differentiates it from the greenwashed capitalism of "Sustainable ICT").
Many of the early Computing within Limits papers discuss collapse and scarcity scenarios from somewhat bleak viewpoints. In later years, the research community started to reframe itself in more positive ways by drawing inspiration from e.g. Hes & du Plessis' *Regenerative Sustainability* and Escobar's *Designs for the Pluriverse* – just like Permacomputing draws inspiration from Permaculture. But even when focusing on a positive vision, one should not take anything for granted. If a vision cannot survive a collapse of industrial production or network infrastructure, it isn't resilient enough.
An important paper in the collapse vein is Jang et al.'s *Unplanned Obsolescence: Hardware and Software After Collapse* that e.g. estimates lifetimes for various hardware components, with the conclusion that it may be possible to maintain some of current computer hardware for several human generations even if the entire semiconductor industry collapsed right now. Solderpunk (the creator of the afore-mentioned Gemini) has a concrete proposal for a "standard salvaged computing platform" based on smartphone/tablet e-waste. I'm sure that there are components with much longer potential lifespans (Jang et al. estimate current mobile hardware to be able to persist for about one generation), but at least there would be heaps of this type of junk available in the early years. I'm personally interested by the possibilities of microcontroller-based smartcards (that are even more ubiquitous than mobile phones but have entirely different challenges).
Unplanned Obsolescence: Hardware and Software After Collapse
Standard salvaged computing platform
Jang et al. also have a few interesting words about maintenance culture. In the same way as religious organizations continued to maintain ancient Chinese roads that no longer received governmental support, computing could be maintained in a post-collapse world by "semi-ascetic cultural organizations whose primary focus may or may not be computing". I have personally been fascinated by the potential of monastery-like communities to preserve science and technology even during "dark ages" when the society at large sees no value in them. In medieval Europe, some monasteries even refined and advocated technologies such as water power.
Wind and Water in the Middle Ages Fluid Technologies from Antiquity to the Renaissance
The term *collapse informatics* comes from Bill Tomlinson who suggests that one should look into the existing computing practices in groups that have voluntarily chosen to live off-grid or in other "collapse-like" conditions. I might also want to include those who do so involuntarily, as well as those who have made "collapse-compatible" decisions specifically with computing (e.g. artists who specialize in old hardware).
I don't know if there is going to be a collapse, but I'm quite sure that the entire society needs to reduce energy consumption, lengthen technological lifespans and reduce superfluous dependencies. Recognizing the possibility of a collapse may help coordinate these changes. *Designing for disassembly* is an example of a concrete goal that supports hardware longevity in both collapse and non-collapse scenarios.
In profit-oriented societies, people often try to make themselves and their fields of expertise as important and useful as possible. It has therefore been delightful to learn about visions that detach computing from all utilitarian purposes.
Brendan Howell's *Rustic Computing* is an artistic project that depicts computing as "the pastime of dilettantes, amateur scientists and gentleman tabulators who construct machines to manipulate abstract symbols with no practical application". Computer components are built using pre-industrial technology, which reminds me of early mechanical computers such as Zuse's Z1. When computers are built with non-pollutive technologies, they don't need to justify their existence by paying back their ecological debts. And since they have no practical purpose, they don't even have to be faster or better than manual paper-and-pencil calculation. They can just be interesting and important the way they are.
I see much of the same attitude in *Compudanzas*, a research project that reimagines computing in the form of "seemingly useless" activities such as rituals and dancing.
In Steve Lord's idea of *Heirloom Computing*, a computer that has been made to last for many generations can be a piece of family history that evolves with the family, keeping permanent traces from every generation that has used it, and does not need to have any purpose besides this.
As suggested by Jang et al., a post-collapse society that has eventually lost all of its artificial computing capacity may still want to continue the practice of computer science in a purely theoretical level, as a form of mathematics. This is another example of how computing may remain meaningful for some pockets of culture even with no ability to run any potential applications.
Detachment from utilitarianism may (perhaps paradoxically) give way to a deeper importance and meaning. I'm particularly thinking about Yuk Hui's idea of *Cosmotechnics* which refers to a unified harmony between technology, culture and non-human nature. Modern technological thinking lost this harmony by turning everything into utilitarian resources. An interesting point made by Hui is that every culture should find its own approach to cosmotechnics – so, we would be replacing a homogeneous global utilitarian monoculture with a rich and diverse polyculture.
It is often difficult to even imagine a kind of computer culture that does not suffer from unlimited growth. Even the most interesting real-world examples (such as the Soviet computing culture) exist somewhat in the shadow of Western developments and ideologies. So, there's no real "other" to contrast the growth-obsessed mainstream computing with.
Computing within Limits papers have also given me an impression that some scholars even find it difficult to imagine e.g. how software development could take place without the Internet. In cases like this, I might suggest looking into the actual history and listening to people who have experienced it. Even though the history of computing isn't nearly as diverse as it could or should be, it is still worthwhile to study it. And definitely not only the mainstream "winners' history" but everything from the various cultures and subcultures.
Eriksson and Pargman have suggested the use of counterfactual history to assist imagination. Sadly, their own *Coalworld* scenario (with the point of divergence being an early-seventies "peak oil" event) has not yet reached the point where computing can be elaborated. I wish there was more speculation (both fiction-oriented and academically rigorous works) that would present thoroughly-imagined alternatives to the actual history.
I've already mentioned several "alternative paradigms of computing": *frugal computing*, *heirloom computing*, *rustic computing*, *collapse informatics*. But there are still a few more to add:
"Regenerative computing" is Mann et al.'s idea of applying Hes & du Plessis' "Regenerative sustainability" to computing. The most Permacomputing-relevant part of their Limits'18 paper is quite dense, so I'll quote it verbatim (the number 7 refers to Hes & du Plessis' 2014 book "Designing for hope: pathways to regenerative sustainability"):
Designing for hope: pathways to regenerative sustainability
(3) Move beyond efficiency as the primary lever available to computing. These new narratives should look to nature and ecology to demonstrate the interplay between computing, society and biological systems where limits of these systems are respected and worked with.
(4) Integrate ecological worldviews into computing's narratives and processes, both the theory (such as living systems and deep ecology) and values sets:
* Integrity - maintaining the wholeness of [wider] systems, ensuring that structure and relationships remain intact and functioning as they should.
* Inclusivity - "interacting with the world in its entirety" [7, p. 35], engaging and integrating with all dimensions, levels of existence and knowledge.
* Harmony - all elements cooperate through relationships that are respectful in order to avoid dissonance.
* Respect - all parts of the world have intrinsic worth and all existence is part of the extended self, and therefore all self-respect is extended to mutual respect for the world.
* Mutuality - "we are in this together, and what happens to ‘others’ will also have an effect on self" - see: compassion, treating others the same as yourself.
* Positive reciprocity - "reciprocating in a way that is of benefit to and advances the relationship between self and extended self" [7, p. 35].
* Fellowship - an extension of mutuality and positive reciprocity, where the world is co-created by humans in partnership with nature.
* Responsibility - moral accountability for the consequences of our actions in an uncertain and unpredictable world.
* Humility - change is constant, we cannot know the true consequences of our actions.
* Non-attachment - in order to adapt to changing circumstances it is important to uphold non-attachment in order to decouple from “the futility of trying to hold onto anything in an ever changing world including ideas, dogmas and strategies” [7, p. 36].
"Convivial computing", from Fischer & Lemke's 1987 paper, is an earlier example of taking ideas from ecologically conscious thinking into computing (in this case, from Ivan Illich's book "Tools for Conviviality"). Even earlier, Lee Felsenstein had been inspired by the same book when designing the Osborne 1 personal computer. In both cases, however, the ecological aspects of Illich's thought are ignored. Also, Fischer & Lemke's paper doesn't feel at all like a forgotten masterpiece of groundbreaking thought – the ideas actually seem to be very much in line with what was implemented in the "RAD tools" of the 1990s. And some of these tools (Delphi, Visual Basic) felt like the epitome of bloat at the time.
"Benign computing" basically advocates keeping things small in order to keep the problems caused by them small. Currently, huge problems are created by huge, centrally-managed systems built with the principles of abstraction and indirection. Raghavan's critique of these principles is very similar to how I see "maximalism and virtualism". I also completely agree with Raghavan that "the utopian notion of creating new technology that is strictly 'beneficial' or that advances 'development'" must be rejected.
My Permacomputing article from 2020 is basically a vision of a new kind of computing that works in a radically different way in a radically different society. It does not give many guidelines towards actual practice or how to transition towards permacomputing, so maybe I should cover this area a little bit.
I have been reluctant to name specific technologies or design constraints for permacomputing. This is because I want to support a diverse polyculture of ideas and possibilities. Asking what is the most suitable programming language for permacomputing is a bit like asking what is the most suitable plant for permaculture – the entire question contradicts itself. There is no "silver bullet" – there isn't one even in the mainstream industry despite its continuous attempts to uniformize everything. However, there can be design wisdom about the strengths, weaknesses and mutual interactions of specific elements, and this wisdom helps with choosing a language, a plant, an algorithm or a design pattern for a specific place.
In software, nothing that can be run locally is "poisonous" per se. Even if something consumes a lot of energy, that need only mean that its use must be restricted to times when that energy is available. Far more important questions are how the hardware is obtained and maintained, and how the energy is produced.
I have noticed that many "sustainable" or even "low-tech" computing projects have been built on cheap DIY-oriented boards such as Raspberry Pi. Even though these may be among the best of the currently available options, it should be noted that they have been designed for hackability and replaceability rather than longevity or repairability. There might be a need for a radically repairable and modifiable hardware basis to fulfill similar purposes. Radical modifiability might include the ability to interface with a large variety of different chips (processors, SoCs etc.) – this would help maximize the usable lifespans of those chips.
Keeping systems very simple but very capable is a good guideline for a lot of permacomputing, but particularly so for the crucial basic software used to enable salvaged/makeshift hardware. Bare-hardware Forth systems (such as Collapse OS or OpenBIOS) are very capable for their low complexity, and can be small enough even for rudimentary 8-bit microcontrollers.
One possible approach to simplicity is to try to keep things simple enough that they can be thoroughly understood and (re)implemented by one person. This applies not only to application programs but the dependent elements as well (programming language, operating system, firmware, hardware). This is not to say that people should write everything from scratch but to keep the complexity human-graspable. The ideal of human-sized computing is particularly applicable to systems that are used as tools (because tools in general should be thoroughly understandable to their users). Also, in decentralized "post-collapse" societies, the local all-around experts ("village hackers") should be able to master all aspects of the local computing systems in order to maintain them and to adapt them to various local needs. All this becomes much easier if complexities are kept low or moderate.
The effective complexity of a software program can be estimated by summing its executable size with the size of the minimum set of dependencies required to run it (including the OS components). Alternatively, one can calculate its bootstrap complexity (by summing the size of all code and data required to compile the program, the dependencies, and the entire dependency network of the toolset required for the compilation in the smallest system that can run them). These types of assessment strongly favor programs that are written in non-bloated languages and can be made to run on bare hardware – even if they can also run in bloated environments and use their special features.
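As a rough illustration of this kind of assessment (the dependency graph, sizes and helper names below are hypothetical sketches of mine, not an established tool), the effective-complexity figure could be computed as the size of the program plus the sizes of its transitive dependency closure:

```python
def closure(prog, deps):
    """Transitively collect prog and everything it depends on.
    deps maps a component name to the list of components it needs."""
    seen, stack = set(), [prog]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(deps.get(c, []))
    return seen

def effective_complexity(prog, deps, sizes):
    """Executable size plus the sizes of the minimal dependency set."""
    return sum(sizes[c] for c in closure(prog, deps))

# Hypothetical example: a text editor that needs a libc and a small kernel.
deps  = {"editor": ["libc"], "libc": ["kernel"], "kernel": []}
sizes = {"editor": 40_000, "libc": 500_000, "kernel": 200_000}
print(effective_complexity("editor", deps, sizes))  # 740000
```

The same closure walk, applied to compilers and their own toolchains, would yield the bootstrap-complexity figure.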
One way to deal with huge platforms is to create "pockets of simplicity" such as simple virtual machines that can also run on bare hardware. Emulators of existing hardware platforms are a special case of this. VMs are particularly suitable for small things that require far less computation than what the hardware is capable of. A virtual machine may also help eliminate compatibility problems and code rot, if it is unambiguously defined and the definition is canonized (permanently frozen). If approached with mainstream engineering attitudes, however, VMs may easily lead to "Java-like" problems (wastefulness, incompatibilities, etc.) Setting artificial limits to memory usage and execution speeds may prevent some of these developments. One might also want to think about how to statically translate VM programs into native code for running on platforms that are actually small.
Uxn, an example of simple virtual machine
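To make the "pockets of simplicity" idea concrete, here is a toy sketch of my own (not Uxn) of a stack machine with a deliberately frozen four-opcode instruction set and hard artificial limits on stack depth and execution steps:

```python
class TinyVM:
    """A deliberately frozen, limited stack machine:
    four opcodes, a bounded stack, and a hard instruction budget."""
    MAX_STACK = 64
    MAX_STEPS = 10_000

    def run(self, program):
        stack, pc, steps = [], 0, 0
        while pc < len(program):
            steps += 1
            if steps > self.MAX_STEPS or len(stack) > self.MAX_STACK:
                raise RuntimeError("artificial limit exceeded")
            op, arg = program[pc]
            if op == "push":
                stack.append(arg)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "jnz":              # jump if top of stack is nonzero
                if stack.pop():
                    pc = arg
                    continue
            elif op == "halt":
                return stack
            pc += 1
        return stack

vm = TinyVM()
print(vm.run([("push", 2), ("push", 3), ("add", None), ("halt", None)]))  # [5]
```

Canonizing such a definition permanently would let programs written for it outlive any particular host platform, while the hard limits keep "Java-like" wastefulness from creeping in.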
In mainstream computing, "ease of use" is usually implemented as "superficial simplicity" or "pseudo-simplicity", i.e. as an additional layer of complexity that hides the underlying layers. Meanwhile, systems that are actually very simple and elegant are often presented in ways that make them look complex to laypeople (think about the esoteric syntax of Forth or Lisp, for example). Ideally, UIs should reflect, amplify and illustrate the underlying elegance instead of trying to hide or misrepresent the inner workings. The earliest versions of the Apple Macintosh OS managed to do this to some extent (the system is not much more complex than the UI representation, every file is represented by an icon, program files are stand-alone without external dependencies, etc.)
When minimizing the internal complexity of a system, however, it should not be isolated from the complexity of the external world. Computers are dependent on energy availability, temperature and other conditions, so they should be able to adjust their operation to the changes in these conditions – even if environmental monitoring is not among their designated tasks.
Permacomputing has so far been defined in ways that emphasize generic ideas and a wide diversity of possibilities. However, in order to actually create something that represents permacomputing, one needs to make a lot of specific design decisions. Concrete examples (either real projects or mockups) may help with this. In order to cover the possibility space, we need a lot of different examples from different points of view.
One possible starting point is to think about a general-purpose single-user computer that remains usable and relevant as long as possible even in a collapse scenario. Of course, any computer should be end-user-programmable and have some kind of programming interface to facilitate it, but what would be the concrete applications this kind of computer would be used for?
I assume that viewing text files from physical storage devices (such as flash memory) is what would persist the longest in any scenario. A few gigabytes of storage would be enough for an entire library of literature that could be valuable for centuries. And accessing it would be possible (although not comfortable) even with very rudimentary post-collapse I/O devices (such as a few switches and indicators for a manual serial protocol – somewhat like using a Morse code telegraph).
It may be theoretically possible to even read data directly from a USB flash drive with this kind of manual "telegraphy", but the complexity of the USB protocol would probably get overwhelming. Fortunately, a complex protocol implies that there is a (re)programmable microcontroller in the device, so one may want to reprogram it to support a simpler protocol. One could also add a "backdoor" that enables the device to run arbitrary programs from the drive, thus unleashing its potential for general-purpose computing. It may even be possible to get a USB stick to drive "proper" interface devices such as display screens despite the low number of I/O pins (two output pins are enough for composite video, but LCD panels unfortunately tend to need much more, so some kind of a multiplexer would be required). This could help reduce and postpone the need for "Morse code".
These could become general guidelines for maximizing the lifespans of arbitrary programmable devices: 1) make it as straightforward as possible to run arbitrary code, 2) support an electrically simple interface that can even be operated manually in times of far-future scarcity.
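As a purely illustrative sketch of guideline 2, a bit stream keyed in manually on a single switch could be framed as 8 data bits plus one even-parity bit per byte and decoded like this (the framing is my own assumption, not an established protocol):

```python
def decode_manual_serial(bits):
    """Decode a manually keyed bit stream: 8 data bits followed by
    one even-parity bit per byte, most significant bit first."""
    out = []
    for i in range(0, len(bits), 9):
        frame = bits[i:i + 9]
        if len(frame) < 9:
            break                          # incomplete trailing frame
        data, parity = frame[:8], frame[8]
        if sum(data) % 2 != parity:
            raise ValueError(f"parity error in frame at bit {i}")
        byte = 0
        for b in data:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# 'H' = 0x48 = 01001000; two one-bits, so the even-parity bit is 0.
print(decode_manual_serial([0, 1, 0, 0, 1, 0, 0, 0, 0]))  # b'H'
```

The parity bit gives the human operator at least a crude chance of noticing keying mistakes, in the same spirit as the check procedures of classical telegraphy.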
Another persistent application besides file viewing would be text editing. It has been prominent in personal computers since the early years and probably will be in just about any scenario. It would also imply the need for general file management tasks such as copying files between storage devices. Programs for doing these tasks would be among the first to implement for any "permacomputer" that does not require special expertise to use.
Telecommunication is important but does not require computers – messages may well be relayed with classical amateur radio methods. Also, computer file-sharing networks can well be based on physical media. However, the existence of a radio and a computer makes it appealing to combine the two. A program for transferring files and text streams over arbitrary channels, in one- or two-way protocols or packet protocols, with or without error correction and/or encryption, would be a fine inclusion to the set of "collapse software".
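A minimal sketch of what such channel-agnostic transfer software might use as its packet format (the framing here is my own illustration): a sequence number for ordering and retransmission, plus a CRC-32 so that corruption on a noisy radio link is detected rather than silently accepted:

```python
import struct
import zlib

def make_packet(seq, payload):
    """Frame: 2-byte sequence number, 2-byte length, payload, CRC-32."""
    header = struct.pack(">HH", seq, len(payload))
    body = header + payload
    return body + struct.pack(">I", zlib.crc32(body))

def parse_packet(packet):
    """Return (seq, payload), or None if the checksum fails."""
    body, (crc,) = packet[:-4], struct.unpack(">I", packet[-4:])
    if zlib.crc32(body) != crc:
        return None                        # corrupted in transit: resend
    seq, length = struct.unpack(">HH", body[:4])
    return seq, body[4:4 + length]

pkt = make_packet(7, b"hello")
print(parse_packet(pkt))                   # (7, b'hello')
bad = bytearray(pkt)
bad[4] ^= 0xFF                             # flip a payload byte
print(parse_packet(bytes(bad)))            # None (corruption detected)
```

The same frame works over a two-way link (acknowledge and retransmit by sequence number) or a one-way broadcast (repeat frames and let receivers fill gaps).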
Of course, supporting far-future post-collapse scenarios does not mean that one should stick to far-future post-collapse practices – rather, it ensures that there are fallbacks for everything. You can use a high-resolution screen today, but the system will work fine even with tomorrow's more rudimentary display. You can run a complex OS today, but the simple OS in the firmware ROM is also perfectly fine for editing a text document.
I imagine that this "simple OS" would normally look either like a plain Forth interpreter or an orthodox file manager (i.e. a Norton Commander clone), depending on whether the computer is connected to a sufficient screen or not. For screens that are a bit too small for the OFM there might also be an intermediate option that resembles early-2000s cellphone interfaces. All of these modes would be usable even with quirky input devices (such as a game controller, a single telegraph key or a barely functional touchscreen). Hardware is accessed via Forth words that can be straightforwardly redefined if there are unexpected hardware changes (such as specific glitches that need to be bypassed).
The OFM would allow one to browse, view and manipulate files, run executable files, edit text files and enter Forth commands. It could also be set up as a bootloader to load a more complex OS, but loading one would often be unnecessary, as many programs (especially ones that favor single-tasking) would also be available as "Forth" executables (that may also be native binaries that may or may not use Forth words) or as "ROM" files runnable with a simple VM.
Systems that don't have much capacity to spare would perhaps only have a plain Forth interpreter, or if even that would be too bloated, something like the standard byte protocol used by smartcards.
Longevity maximization easily leads to an emphasis on conservative and well-tested ideas, so this example may sound a little bit bleak. A fancier starting point (such as one based on ideas from unconventional computing) would perhaps give more room for fancier permacomputing ideas that take more distance from fossil-era computing.
There are many projects that address the sustainability problems of the World Wide Web: activism for sustainable websites, solar-powered servers, new protocols, simpler document formats. However, these often take the underlying Internet for granted. The access may perhaps be slow at times or places, and a solar-powered server may be sometimes offline, but any place of the world is still supposedly accessible from any other place of the world at any time. The weirdness of this assumption may not even be obvious to modern Internet users – after all, it is at the core of nearly every major service/protocol (perhaps apart from email and Usenet, which can also propagate over temporary connections).
I see a need for a decentralized protocol that works painlessly in conditions where everything is not constantly available. Where individual servers or communication links may be online or offline depending on circumstances. Where other parts of the network may only be accessible via temporary connections, physical file-sharing or data mules. Where your messages still reach their destinations, where you still get the files you need, and where "social media" discussions can still thrive, despite all these logistical constraints.
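The store-and-forward behaviour such a protocol needs can be sketched roughly like this (the names and the simple hand-off policy are illustrative; a real protocol would also need routing hints, deduplication and message expiry):

```python
class Node:
    """A node that stores messages and forwards them opportunistically
    whenever a link (radio, sneakernet, data mule) happens to come up."""

    def __init__(self, name):
        self.name = name
        self.outbox = []                   # messages waiting for any contact
        self.inbox = []

    def send(self, dest, text):
        self.outbox.append({"dest": dest, "text": text})

    def contact(self, other):
        """A temporary connection: hand over everything both ways."""
        a_out, b_out = self.outbox, other.outbox
        self.outbox, other.outbox = [], []
        for msgs, peer in ((a_out, other), (b_out, self)):
            for msg in msgs:
                if msg["dest"] == peer.name:
                    peer.inbox.append(msg)     # reached its destination
                else:
                    peer.outbox.append(msg)    # peer carries it onward

alice, bob, carol = Node("alice"), Node("bob"), Node("carol")
alice.send("carol", "hello")
alice.contact(bob)                         # bob now carries the message
bob.contact(carol)                         # delivered on the next contact
print(carol.inbox)                         # [{'dest': 'carol', 'text': 'hello'}]
```

Here each message is handed off rather than copied; an epidemic variant would instead keep copies at every hop, trading storage for a higher delivery probability.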
For some inspiration for the required mindset, one may think about how files were collected and propagated in "pre-Internet" conditions (BBSes, friend-to-friend file copying) and how to make these processes as automatic as possible.
I don't often think about how to do business in the capitalist economy, but in early 2021 I asked myself what kind of IT company (or other type of IT-related organization) would thrive both before and after a collapse. I wanted to challenge my prejudice that anything you do for profit/living will always be somewhat "greenwashed" instead of properly sustainable.
Here are my ideas of how a relatively small "permacomputing company" could operate in the "age of abundance":
Art researchers recognize the concept of "postdigital" as a reaction against the so-called "digital revolution" that took place in the nineties, and especially against the typically "digital" esthetics. Using cassette tapes for music in a world where digital music formats are ubiquitous is an obvious example of "postdigital".
But not all that is called "postdigital" is non-digital. Actually, much of it is very profoundly digital – pixel art and glitch art for example. The term is somewhat misleading – it does not only mean "the non-digital that comes after digital", but can also be read as "a later form of digital" or "something that comes after the digital revolution". It particularly seems to set itself apart from the "progress narrative" that wants to continuously replace everything with "bigger and better". This makes the idea relevant to permacomputing as well.
When advocating lifestyles that abandon maximalism, it is important to frame it in a positive way. Settling for simple and coarse things does not need to be a "sacrifice" but something genuinely better than the mainstream alternative. "Postdigitality" is already a prominent force in e.g. indie games that often choose to use pixel graphics as a "modern" esthetic preference rather than as "retro nostalgia". This gives hope that a major paradigm shift is possible for the mainstream digital culture in general.
During the global pandemic, many people have been extremely dependent on prohibitively complex digital blackboxes. I therefore assume that, once the pandemic is over, many people will want to distance themselves from the mainstream digital world – to concentrate on non-digital things but also to find a healthier relationship with the digital. I think this is something that advocates of radically sustainable computing should tap into.
(Added 2021-08-27) Here are some links I failed to include in the original version of this page:
Modding Fridays: "An online community of people interested to learn together about the maintenance, repurposing, and reappropriation of supposedly obsolete consumer electronics, for fun and profit. We see our interest as part of a broader conversation on post-digital culture, permacomputing and repair culture". Includes a wiki and an XMPP chatroom.
Civboot is an educational project aiming at simplifying the requirements and dependencies of computer technology as well as increasing humanity's ability to understand it.
Permacomputing at the XXIIVV Wiki
Written by Ville-Matias "Viznut" Heikkilä.