
This document is an HTML to Gemtext conversion of the Permacomputing article by Viznut, made with md2gemini and cleaned up by hand.

Original Web page

Permacomputing

This is a collection of random thoughts regarding the application of permacultural ideas to the computer world.

See also: Permacomputing update 2021

Permacomputing update 2021

Some have tried to connect these worlds before (WikiWikiWeb's Permaculture article; Kent Beck's short-lived idea of Permaprogramming), but these have mostly concentrated on enhancing software engineering practices with some ideas from gardening. I am more interested in the aspect of cultural and ecological permanence. That is, how to give computers a meaningful and sustainable place in a human civilization that has a meaningful and sustainable place in the planetary biosphere.

WikiWikiWeb's Permaculture article

Kent Beck

Permaprogramming

1. Problem

Over the last few hundred years of human civilization, there has been a dramatic increase in the consumption of artificially produced energy. In the overarching story, this is often equated with "progress".

In the computer world, this phenomenon gets multiplied by itself: "progress" facilitates ever greater densities of data storage and digital logic, thus dramatically exploding the availability of computing resources. However, the abundance has also caused an equivalent explosion in wastefulness, which shows in things like mindblowingly ridiculous hardware requirements for even quite trivial tasks.

At the same time, computers have been failing their utopian expectations. Instead of amplifying the users' intelligence, they rather amplify their stupidity. Instead of making it possible to scale down the resource requirements of the material world, they have instead become a major part of the problem. Instead of making the world more comprehensible, they rather add to its incomprehensibility. And they often even manage to become slower despite becoming faster.

Utopian expectations

In both computing and agriculture, a major issue is that problems are too often "solved" by increasing controllability and resource use. Permaculture takes another way, advocating methods that "let nature do the work" and thus minimize the dependence on artificial energy input. Localness and decentralization are also major themes in this school of thought.

What makes permacultural philosophy particularly appealing (to me) is that it does not advocate "going back in time" despite advocating a dramatic decrease in the use of artificial energy. Instead, it trusts in human ingenuity in finding clever hacks for turning problems into solutions, competition into co-operation, waste into resources. Very much the same kind of creative thinking I appreciate in computer hacking.

The presence of intelligent life in an ecosystem can be justified by its strengthening effect. Ideally, humans could make ecosystems more flexible and more resilient because of their ability to take leaps that are difficult or impossible for "unintelligent" natural processes. The existence of computers in a human civilization can be justified by their ability to augment this potential.

2. Physical resources

2.1. Energy

Permaculture emphasizes resource-sensitivity. Computers primarily use electricity, so to them resource-sensitivity primarily means 1) adapting to changes in energy conditions and 2) using the available energy wisely. Today's computers, even mobile ones, are surprisingly bad at this. This is partially due to their legacy as "calculation factories" that are constantly guaranteed all the resources they "need".

In permacomputing, intense non-urgent computation (such as long machine learning batches) would take place only when a lot of surplus energy is being produced or there is a need for electricity-to-heat conversion. This requires that the computer is aware of the state of the surrounding energy system.

At times of low energy, both hardware and software would prefer to scale down: background processes would freeze, user interfaces would become more rudimentary, clock frequencies would decrease, unneeded processors and memory banks would power off. At these times, people would prefer to do something other than interact with computers.
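
As a concrete (and entirely hypothetical) sketch of such energy awareness, consider a scheduler that only releases heavy batch work when surplus power is available and backs off when energy is scarce. The thresholds and the read_surplus_watts() data source are invented for the example; a real system might poll a solar charge controller or a grid signal instead.

```
import time

def read_surplus_watts():
    # Hypothetical sensor: positive means surplus production.
    return 120.0

def energy_policy(surplus_watts):
    if surplus_watts > 100:
        return "run_batch"    # plenty of surplus: do heavy, non-urgent work
    elif surplus_watts > 0:
        return "normal"       # breaking even: interactive use only
    else:
        return "scale_down"   # deficit: freeze background work, drop clocks

def main_loop(batch_jobs):
    while batch_jobs:
        mode = energy_policy(read_surplus_watts())
        if mode == "run_batch":
            job = batch_jobs.pop(0)
            job()              # a long ML batch, indexing run, etc.
        elif mode == "scale_down":
            time.sleep(600)    # sleep long; the user does something else
        else:
            time.sleep(60)     # wait for surplus before heavy work
```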

It is often wise to store energy for later use. Flywheels are a potential alternative to chemical batteries. They have similar energy densities (MJ/kg) but require no rare-earth materials and last for decades or centuries instead of mere years.

Flywheels

Energy densities
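
As a rough plausibility check of the energy-density comparison (using textbook material figures, not measured data): the specific energy of a flywheel rim is approximately K × σ/ρ, where σ is the tensile strength of the rim material, ρ its density and K a shape factor of about 0.5 for a thin rim.

```
def flywheel_specific_energy(sigma_pa, rho_kg_m3, shape_factor=0.5):
    # Approximate maximum specific energy of a flywheel rim, in J/kg.
    return shape_factor * sigma_pa / rho_kg_m3

steel  = flywheel_specific_energy(1.5e9, 7800)  # high-strength steel
carbon = flywheel_specific_energy(4.0e9, 1800)  # carbon-fiber composite
li_ion = 0.6e6                                  # typical Li-ion cell, J/kg

print(f"steel flywheel : {steel / 1e6:.2f} MJ/kg")   # ~0.10 MJ/kg
print(f"carbon flywheel: {carbon / 1e6:.2f} MJ/kg")  # ~1.11 MJ/kg
print(f"li-ion battery : {li_ion / 1e6:.2f} MJ/kg")  # ~0.60 MJ/kg
```

A carbon-fiber rim thus lands in the same ballpark as typical lithium-ion cells (roughly 0.5 to 0.9 MJ/kg), while a steel rim falls short.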

2.2. Silicon

IC fabrication requires large amounts of energy, highly refined machinery and poisonous substances. Because of this sacrifice, the resulting microchips should be treasured like gems or rare exotic spices. Their active lifespans would be maximized, and they would never be reduced to their raw materials until they are thoroughly unusable.

Instead of planned obsolescence, there should be planned longevity.

Broken devices should be repaired. If the community needs a kind of device that does not exist, it should preferably be built from existing components that have fallen out of use. Chips should be designed open and flexible, so that they can be reappropriated even for purposes they were never intended for.

Complex chips should have enough redundancy and bypass mechanisms to keep them working even after some of their internals wear out. (In a multicore CPU, for instance, many partially functioning cores could combine into one fully functioning one.)
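
No commodity hardware supports this today, so the following is purely speculative: given a map of which functional units still work on each core, every operation class could be routed to a core that can still execute it, so that two half-broken cores add up to one whole CPU.

```
# Invented fault map: which functional units survive on each core.
CORES = {
    "core0": {"alu": True,  "fpu": False, "mul": True},
    "core1": {"alu": False, "fpu": True,  "mul": True},
}

def route(unit):
    """Pick any core whose given functional unit still works."""
    for name, units in CORES.items():
        if units.get(unit):
            return name
    raise RuntimeError(f"no working {unit} left on this chip")

print(route("fpu"))  # -> core1: the broken cores combine into one whole CPU
```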

Chips that work but whose practical use cannot be justified can find artistic and other psychologically meaningful use. They may also be stored away until they are needed again (especially if the fabrication quality and the storage conditions allow for decades or centuries of "shelf life").

Use what is available. Even chips that do "evil" things are worth considering if there's a landfill full of them. Crack their DRM locks, reverse-engineer their black boxes, deconstruct their philosophies. It might even be possible to reappropriate something like Bitcoin-mining ASICs for something artistically interesting or even useful.

Minimized on-chip feature size makes it possible to do more computation with less energy, but it often also means increased fragility and shorter lifespans. Therefore, the densest chips should be primarily used for purposes where more computation actually yields more. (In entertainment use, for example, a large use of resources is nothing more than a decadent esthetic preference.)

Alternatives to semiconductors should be actively researched. Living cells might be able to replace microchips in some tasks sometime in the future.

Alternatives to semiconductors

Living cells

Once perfectly clean ways of producing microchip equivalents have been taken into use, the need for "junk fetishism" will probably diminish.

2.3. Miscellaneous

Whenever bright external light is available, displays should be able to use it instead of competing against it with their own backlight. (See: Transflective LCD)

Transflective LCD

Personally-owned computers are primarily for those who dedicate themselves to the technology and thus spend considerable amounts of time with it. Most other people would be perfectly happy with shared hardware. Even if the culture and society embraced computers more than anything else, requiring everyone to own one would be overkill.

3. Observation and interaction

The first item in many lists of permacultural principles is "Observe and interact." I interpret this as primarily referring to a bidirectional and co-operative relationship with natural systems: you should not expect your garden to be easily top-down controllable like an army unit but accept its quirkiness and adapt to it.

3.1. Observation

Observation is among the most important human skills computers can augment. Things that are difficult or impossible for humans to observe can be brought within human cognitive capacity by various computational processes. Gathered information can be visualized, slight changes and pattern deviances emphasized, slow processes sped up, forecasts calculated. In Bill Mollison's words, "Information is *the* critical potential resource. It becomes a resource only when obtained and acted upon."

Computer systems should also make their own inner workings as observable as possible. If the computer produces visual output, it would use a fraction of its resources to visualize its own intro- and extrospection. A computer that communicates with radio waves, for example, would visualize its own view of the surrounding radio landscape.
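
As a minimal example of such self-observation, the sketch below renders the 1-minute load average from Linux's /proc/loadavg as a plain-text bar. The core count used for scaling is an assumption.

```
def load_bar(width=32, cores=8):
    # First field of /proc/loadavg is the 1-minute load average.
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])
    filled = min(width, int(width * load1 / cores))
    return "[" + "#" * filled + "." * (width - filled) + f"] load {load1:.2f}"

print(load_bar())
```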

Current consumer-oriented computing systems often go to ridiculous lengths to actually prevent the user from knowing what is going on. Even error messages have become unfashionable; many websites and apps just pretend everything is fine even if it isn't. This kind of extreme unobservability is a major source of technological alienation among computer users.

The visualizations intended for casual and passive observation would be pleasant and tranquil while making it easy to see the big picture and notice the small changes. Tapping into the inborn human tendency to observe the natural environment may be a good idea when designing visualizers. When the user wants to observe something more closely, however, there is no limit to how flashy, technical and "non-natural" the presentation can be, as long as the observer prefers it that way.

3.2. Yin and yang hacking

Traditional computer hacking is often very "yang". A total understanding and control of the target system is valued. Changing a system's behavior is often an end in itself. There are predefined goals the system is pushed towards. Optimization tends to focus on a single measurable parameter. Finding a system's absolute limits is more important than finding its individual strengths or essence.

In contrast, "yin" hacking accepts the aspects that are beyond rational control and comprehension. Rationality gets supported by intuition. The relationship with the system is more bidirectional, emphasizing experimentation and observation. The "personality" that stems from system-specific peculiarities gets more attention than the measurable specs. It is also increasingly important to understand when to hack and when just to observe without hacking.

The difference between yin and yang hacking is similar to the difference between permaculture and industrial agriculture. In the latter, a piece of nature (the field) is forced (via a huge energy investment) into an oversimplified state that is as predictable and controllable as possible. Permaculture, on the other hand, emphasizes a co-operative (observing and interacting) relationship with the natural system.

Yang hacking is quite essential to computing. After all, computers are based on comprehensible and deterministic models that tiny pieces of nature are "forced" to follow. However, there are many kinds of systems where the yin way makes much more sense (e.g. the behavior of neural networks is often very difficult to analyze rationally).

Even the simplest programmable systems have a "yin" aspect that stems from the programmability itself. Also, taking the yang type of optimization to the extreme (like in the sub-kilobyte demoscene categories), one often bumps into situations where the yin way is the only way forward.

Halting problem

Intellectual laziness may sometimes result in computing that is too yin. An example would be trying to use a machine learning system to solve a problem before even considering it analytically.
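
For instance, fitting a straight line to data points needs no learning system at all; ordinary least squares has a closed-form solution that runs in one pass:

```
def fit_line(xs, ys):
    # Closed-form ordinary least squares for y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # -> (2.0, 1.0): y = 2x + 1
```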

3.2.1. Processes

There are many kinds of computational processes. Some produce a final definitive result, some improve their result gradually. Some yield results very quickly, some need more time.

The computing world still tends to prefer classic, mainframe-style processes that are one-shot and finite: no improvement over previous results, just rerun the entire batch from scratch. Even when a process is naturalistic, slow, gradual and open-ended – as in many types of machine learning – computer people often force it into the mainframeishly control-freaky framework. A more "yin-type" attitude would definitely be needed.
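
One way to express such a gradual, open-ended process is an interruptible generator that always yields its best result so far. This toy example refines an estimate of the square root of 2 by Newton's method and can be paused, resumed or abandoned whenever the result is good enough.

```
def refine_sqrt(target=2.0, guess=1.0):
    while True:
        guess = (guess + target / guess) / 2  # each step improves the result
        yield guess                           # hand back the best-so-far

process = refine_sqrt()
for step, estimate in zip(range(5), process):
    print(step, estimate)  # converges toward 1.41421356...; stop any time
```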

4. Progress

The fossil-industrial story of linear progress has made many people believe that the main driver of computer innovation is the constant increase of computing capacity. I strongly disagree. I actually think it would be more accurate to state that some innovation has been possible despite the stunting effect of rapid hardware growth (although this is not a particularly accurate statement either).

The space of technological possibilities is not a road or even a tree: new inventions do not require "going forward" or "branching on the top" but can often be made from even quite "primitive" elements. The search space is better thought of as a multidimensional rhizomatic maze: undiscovered areas can be expected to be found anywhere, not merely at the "frontier". The ability to speed "forward" on a "highway of technology" tends to make people blind to the diversity of the rhizome: the same boring ideas get reinvented with ever higher specs, and genuinely new ideas get downplayed.

The linear-progressivist idea of technological obsolescence may stem from authoritarian metaphors: there may only be one king at a time. This idea easily leads to an impoverished and monocultural view of technology where there is room for only a handful of ideas at a time.

Instead of technological "progress" (that implies constant abandoning of the old), we should consider expanding the diversity and abundance of ideas. Different kinds of technology should be seen as supporting each other rather than competing against each other for domination.

In nature, everything is interdependent, and these interdependencies tend to strengthen the whole. In technology, however, large dependency networks and "diversity of options" often make the system more fragile. Civilization should therefore try to find ways of making technological dependencies work more like those in nature, as well as ways of embracing technological diversity in fruitful ways.

5. Programming

Programmability is the core of computing and the essence of computer literacy. Therefore, users must not be deliberately distanced from it. Instead, computer systems and user cultures should make programming as relevant, useful and accessible as possible.

Any community that uses computers would have the ability to create its own software. Locally created software would address local needs better than generic "one size fits all" solutions.

Rather than huge complex "engines" that can be reconfigured for different requirements, there would be sets of building blocks that could be used to create programs that only have the features necessary to fill their given purposes.
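
A minimal illustration of the building-block idea, with invented blocks rather than a proposed standard library: small single-purpose functions are composed into exactly the program a given task needs, and nothing more.

```
def compose(*blocks):
    # Chain small blocks into one pipeline, applied left to right.
    def pipeline(data):
        for block in blocks:
            data = block(data)
        return data
    return pipeline

strip_comments = lambda lines: (l for l in lines if not l.startswith("#"))
nonempty       = lambda lines: (l for l in lines if l.strip())
count          = lambda lines: sum(1 for _ in lines)

count_code_lines = compose(strip_comments, nonempty, count)
print(count_code_lines(["# header", "x = 1", "", "y = 2"]))  # -> 2
```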

Most of today's software engineering practices and tools were invented for a "Moore's law world" where accumulation, genericity and productization are more important than simplicity and resource-sensitivity. New practices and tools will be needed for a future world that no longer tolerates wasteful use of resources.

Optimization/refactoring is vitally important and should take place on all levels of abstraction, by both human and AI codecrafters.

Ideally, it would be possible to invent and apply esoteric tricks without endangering the clarity or correctness of the main code (by separating the problem definition from implementation details, for example). It might be wise to maintain databases for problem solutions, optimization/refactoring tricks and reduction rules and develop ways to (semi)automatically find and apply them.
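
A toy sketch of what such a rule database might look like, with a few illustrative strength-reduction rules rather than anything resembling a complete rewriting system:

```
# Expressions are tuples; each rule maps a pattern to a cheaper form.
RULES = [
    (("mul", "x", 2), ("shl", "x", 1)),  # x * 2  ->  x << 1
    (("add", "x", 0), "x"),              # x + 0  ->  x
    (("mul", "x", 1), "x"),              # x * 1  ->  x
]

def reduce_expr(expr):
    # Apply rules repeatedly until no rule matches any more.
    changed = True
    while changed:
        changed = False
        for pattern, replacement in RULES:
            if expr == pattern:
                expr, changed = replacement, True
    return expr

print(reduce_expr(("mul", "x", 2)))  # -> ('shl', 'x', 1)
```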

6. Software

There are many kinds of software, and very few principles apply to all of them. Some programs are like handheld tools, some programs are like intelligent problem-solvers, some programs are like gears in an engine, and some programs are nothing like any of those.

6.1. Dumb programs

A program that is intended to be like a tool should be understandable, predictable and wieldy. It should be simple enough that a proficient user can produce an unambiguous and complete natural-language description of what it does (and how). Ideally, the actual executable program would not be larger than this description.

The ideal wieldiness may be compared to that of a musical instrument. The user would develop a muscle-memory-level grasp of the program features, which would make the program work like an extension of the user's body (regardless of the type of input hardware). There would be very few obstacles between imagination and expression.

The absolute number of features is not as important as the flexibility of combining them. Ideally, this flexibility would greatly exceed the intentions of the original author of the program.

6.2. Smart programs

In addition to what is commonly thought of as artificial intelligence, smartness is also required in tasks such as video compression and software compilation. Anybody/anything intending to perform these tasks perfectly will need to know a large variety of tricks and techniques, some of which might be hard to discover or very specific to certain conditions.

It is always a nice bonus if a smart program is comprehensible and/or uses minimal resources, but these attributes are by no means a priority. The results are the most important.

On minimal resources

One way to justify the large resource consumption of a smart program is to estimate how much its smartness saves in resources elsewhere. The largest savings could be expected in areas such as resource and ecosystem planning, so quite large artificial brains could be justified there. Brains whose task is to optimize/refactor large brains may also be large.
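
The justification is plain arithmetic. With invented figures: a smart planner is worth running when the resources it saves elsewhere exceed what it consumes itself.

```
optimizer_kwh   = 500.0  # hypothetical energy cost of one planning run
saved_kwh_daily = 20.0   # hypothetical daily savings in the planned system
horizon_days    = 365

net = saved_kwh_daily * horizon_days - optimizer_kwh
print(f"net saving over a year: {net:.0f} kWh")  # -> 6800 kWh: justified
```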

When a "dumb" tool-like program is expanded with smartness, the addition should never reduce the comprehensibility and wieldiness of the core tool. It should also be possible to switch off the smartness at any time.

6.2.1. Artificial intelligence

Artificial intellects should not be thought of as competing against humans in human-like terms. Their greatest value is that they are different from human minds and thus able to expand the intellectual diversity of the world. AIs may be able to come up with ideas, designs and solutions that are very difficult for human minds to conceive. They may also lessen the human burden in some intellectual tasks, especially the ones that are not particularly suitable for humans. Since we are currently in the middle of a global environmental crisis that demands a rapid and complete redesign of civilization, we should co-operate with AI technology as much as we can.

AI may also be important as artificial otherness. In order to avoid a kind of "anthropological singularity" where all meaning is created by human minds, we should learn to embrace any non-human otherness we can find. Wild nature is the traditional source of otherness, and a contact with extraterrestrial lifeforms would provide another. Interactions with artificial intelligence would help humans enrich their relationships with otherness in general.

6.3. Automation

Permaculture wants to develop systems where nature does most of the work, and humans mostly do things like maintenance, design and building. A good place for computerized automation would therefore be somewhere between natural processes and human labor.

Mere laziness does not justify automation: modern households are full of devices that save relatively little time but waste a lot of energy. Automation is at its best at continuous and repetitive tasks that require a lot of time and/or effort from humans but only a negligible amount of resources from a programmable device.
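
Put as invented arithmetic: compare the always-on power draw of a device against the continuous human effort it replaces. The figures below are made up for illustration.

```
def yearly_kwh(watts):
    # Energy drawn by an always-on device over one year.
    return watts * 24 * 365 / 1000

devices = [
    ("irrigation controller", 0.5, 30),  # watts, human minutes saved per day
    ("convenience gadget",   15.0,  2),
]

for name, power_w, minutes_saved in devices:
    print(f"{name}: {yearly_kwh(power_w):.1f} kWh/yr "
          f"for {minutes_saved} min/day of human effort saved")
```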

6.4. Maintenance

Many programs require long-term maintenance due to changing requirements and environments. This is an area where gardening wisdom can be useful. A major difference is that a software program is much easier to (re)create from scratch than a garden.

Most changes to a program tend to grow its size/complexity. This effect should be balanced with refactoring (which reduces size/complexity). The need for refactoring is often disregarded in today's "Moorean" world where software bloat is justified by constant hardware upgrades. In an ideal world, however, the constant maintenance of a program would be more likely to make it smaller and faster than to bloat it up.

Programs whose functionality does not change should not require maintenance other than preservation. In order to eliminate "platform rot" that would stop old software from working, there would be compatibility platforms that are unambiguously defined, completely static (frozen) and easy to emulate/virtualize.
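
As a toy illustration of how small an unambiguously defined platform can be, here is a stack machine whose complete semantics fit on a page and could be re-implemented on any future hardware. Its instruction set is invented for this example.

```
def run(program):
    # A frozen four-instruction stack machine: push, add, print, halt.
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]; pc += 1
        if op == "push":
            stack.append(program[pc]); pc += 1
        elif op == "add":
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == "print":
            print(stack.pop())
        elif op == "halt":
            break
    return stack

run(["push", 2, "push", 3, "add", "print", "halt"])  # -> 5
```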

7. Culture

7.1. Relationship with technology

Any community that uses a technology should develop a deep relationship to it. Instead of being framed for specific applications, the technology would be allowed to freely connect and grow roots to all kinds of areas of human and non-human life. Nothing is "just a tool" or "just a toy", nobody is "just a user".

Technology and the Character of Contemporary Life

There would be local understanding of each aspect of the technology. Not merely the practical use, maintenance and production but the cultural, artistic, ecological, philosophical and historical aspects as well. Each local community would make the technology locally relevant.

Appropriate technology

Each technology would have one or more "scenes" where the related skills and traditions are maintained and developed. Focal practices are practiced, cultural artifacts are created, enthusiasm is roused and channeled, inventions are made. The "scenes" would not replace formal institutions or utilitarian practices but would rather provide an undergrowth to support them.

No technology should be framed for a specific demographic segment or a specific type of people. The "scenes" should embrace and actively extend the diversity of their participants.

Theoretical and practical understanding are equally important and support one another. Even the deepest academic theorist would sometimes get their hands dirty in order to strengthen their theory, and even the most pragmatic tinkerer would deepen their practice with some theoretical wisdom.

7.2. Telecommunication

The easiest way to send a piece of information between two computers should always be the one that uses the least energy without taking too much time. The allowable time would depend on the context: in some cases, a second would be too much, while in some others, even several days would be fine. If the computers are within the same physical area, direct peer-to-peer links would be preferred.

When there are multiple simultaneous recipients for the same data, broadcast protocols would be preferred. For high-bitrate transfers (e.g. streaming video), shared broadcasts would also be culturally encouraged: it is a better idea to join a common broadcast channel than request a separate serving of the file.
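
Shared channels of this kind already exist in standard protocols. The sketch below joins an IPv4 multicast group using Python's socket API; the group address and port are arbitrary examples.

```
import socket
import struct

GROUP, PORT = "239.255.42.42", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Join the multicast group: every listener shares one transmission.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65536)
    print(f"{len(data)} bytes from {addr}")
```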

General-purpose communication platforms would not have entertainment as a design priority. The exchange of messages and information would be slow and contemplative rather than fast and reactive. In public discussion, well-thought-out and fact-based views would be the most respected and visible ones.

Communication networks may very well be global and the protocols standardized, but the individual sites (platforms, forums, interfaces, BBSes) would be primarily local. Global, proprietary social media services would not be desirable, as they enforce the same "one size fits all" monoculture everywhere.

All the most commonly needed information resources would be available at short or moderate distances. A temporary loss of intercontinental network connection would not be something most users would even notice.

It should be easy to save anything from the network into local files. "Streaming-only" or other DRM-locked media would not exist.

People would be aware of where their data is physically located and prefer to have local copies of anything they consider important.

Any computer should be usable without a network connection.

7.3. Audiovisual media

Many people prefer to consume their audiovisual media at resolutions and data rates that are as high as possible (thus consuming as much energy as possible). This is, of course, an extremely unsustainable preference.

There are countless ways, most of them still undiscovered, to make low and moderate data complexities look good — sometimes good enough that increased resolution would no longer improve them. Even compression artifacts might look so pleasant that people would actually prefer to have them.

For extreme realism, perfection, detail and sharpness, people would prefer to look at nature.

7.4. Commons

Societies should support the development of software, hardware and other technology in the same way as they support scientific research and education. The results of the public efforts would be in the public domain, freely available and freely modifiable. Black boxes, lock-ins, excessive productization and many other abominations would be marginalized.

Written by Ville-Matias "Viznut" Heikkilä.

Viznut

This work is licensed under a Creative Commons Attribution 4.0 International License.

Creative Commons Attribution 4.0 International License