
[https://cheapskatesguide.org/articles/war-on-gp-computing-farnell.html]

Why we will win the war for general-purpose computing 

7-15-21

Dr. Andy Farnell is a regular reader of the Cheapskate's Guide. When he read "Taking a Stand in the War on General-Purpose Computing" in February, he felt it was too pessimistic. So, he decided to write his own article to give his perspective. This is his article.


Recent online articles have sounded the alarm over an escalating "war" on General Purpose Computing (GPC). Tech Giants have consolidated more power during the pandemic and are flexing their muscles, locking down systems more tightly, and becoming audacious in their open disrespect for digital rights and privacy. Much of this occurs under the pretext of security or compliance. Personal computers, once tools of choice, are being recklessly pushed aside in some societies in favour of "necessary" always-on, permanently connected mobile appliances.

Here I hope to offer a different perspective and explain why, as a technological optimist [1], I don't think this will work out. In this essay we will examine why there's a problem, and make the case that advanced technological societies must always retain open, general purpose personal computers, strictly under the control of their users, and that the market for these is set to grow.

Climate change is in the process of teaching us that mono-cultures built in the service of a few powerful industries are a risk. Ongoing heatwaves, droughts, storms, wildfires, and extinctions should remind us that this is exactly what happened with road transport: we built cities and whole ways of life around cars, to the benefit of a few big firms, and now we are reaping the fruit of that "convenience". Similarly, smartphones are becoming opaque appliances, unowned by, and inscrutable to, their users, foisted upon society as passports, identities, means of payment, and access to healthcare and education. What could go wrong?

We are rushing headlong into a catastrophe. Blinded by "convenience" our horizons of real choice are closing in. Cybersecurity is in total disarray and there is no basis for a reasonable person to trust these devices or the companies that supply and maintain them. The physical provenance of devices is itself questionable, being manufactured in nations considered our political and ideological enemies. We are ignoring worrying supply, control, resilience and e-waste issues. Many unresolved serious questions remain over physical and mental health risks from smartphones. Yet daily, breathless industry shills tell us that this is all "inevitable".

As users see their smartphones weaponized against them, and find few real alternatives, some are expressing fears that Tech Giants are plotting an oblique coup in all but name, positioning to usurp national governments with their own brands of cybernetic governance. They are building control and exclusion, disinformation, private digital money and surveillance capitalism into gadgets we seem unable to step away from. I believe this threatens the Western liberal democracy fought for at such cost 80 years ago.

Firstly, let's ask, what is a General Purpose Computer, and why is that important? The idea of a Universal Machine, to most people, seems grandiose. Like something from science fiction. Perhaps "universal" overstates the case. Computers cannot provide light, energy, food, medicine or transport matter. Yet, they can help with all those things. They do so by a process known as computing, which transforms symbols, usually taken to be numbers. We can express problems as questions in these symbols, and get answers. Even though the computing devices themselves are made of silicon, and are now mostly about the size of a human hand, the machines that solve our problems are really made of invisible software, which we call "application programs", or in the vernacular "Apps".
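
To make that idea concrete, here is a toy sketch in Python (my own illustration, not anything from the history that follows): a few lines of code form a tiny "machine", and feeding that same machine different programs turns it into different tools.

    # A toy "universal machine": one interpreter, many possible programs.
    def run(program, stack=None):
        """Interpret a tiny stack language: numbers push, '+' and '*' combine."""
        stack = [] if stack is None else stack
        for token in program:
            if token == "+":
                stack.append(stack.pop() + stack.pop())
            elif token == "*":
                stack.append(stack.pop() * stack.pop())
            else:
                stack.append(token)  # any number is simply pushed
        return stack.pop()

    print(run([2, 3, "+"]))   # this program makes the machine an adder -> 5
    print(run([4, 25, "*"]))  # the same machine becomes a multiplier -> 100

The hardware never changes; only the invisible program does. That is the whole trick of general purpose computing.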

The universal computer was a theoretical creation of mathematicians, going back to Muhammad ibn Musa al-Khwarizmi, from whose name we get the word "algorithm", in the 9th century [AlKhwarizmi], and most famously Charles Babbage and then Alan Turing. Of those, only Turing lived to see a working computer created [2]. In the 1940s John von Neumann set out the stored-program design that still bears his name, and digital computers became reality.

It was Ada Lovelace who offered the first prophetic insights into the social power of programming as a language for creativity. She imagined art, music, and thinking machines built from pure code, by anyone who could understand mathematics. With a general purpose computer anybody can create an application program. That is their extraordinarily progressive, enabling power. Apps, being made of invisible words, out of pure language, are like poems or songs. Anybody can make and share a computer program, just as they can a poem, a recipe, or an idea. This is the beauty and passion at the heart of the hacker culture.

Some of the most significant wars of human history were not fought on far-away battlefields, but in our own streets and courtrooms. These were battles to secure access to information, to make laws, to be equally represented, to publish, and for individual rights to freely and privately communicate. After Johannes Gutenberg's invention of the printing press in 1440, a century of upheavals followed. Established powers tried to limit the spread of printing. The prototypes of all modern censorship, copyright, defamation and sedition laws originate in this period. De-platforming in the 15th century meant having a mob smash up your press and then publicly burn you on a pyre of your own books.

In one respect, silicon resembles the early printing presses. The manufacture of computer hardware is limited to a few factories, mainly in China, all of which are controlled by a few global mega-corporations. This wasn't always the case; in my own small country of Britain we had dozens of chip and computer manufacturers between the 1950s and 1980s like Marconi-Elliott, Plessey and Ferranti. Acorn pioneered the RISC architecture (ARM). The Inmos "transputer" led the world in multi-core parallelism. Hundreds more spin-offs from UK universities like Cambridge and Manchester led the world in innovative tech ideas.

Our government mismanaged and squandered that head-start, and acquisitions slowly drained the UK innovation fountain until today a mere ten or so low volume semiconductor companies remain and we are selling these off to the Chinese as fast as corrupt government officials can cash their bribe cheques. In the process we are pissing away national security and autonomy to unseen globalist powers. Other countries have similar stories.

In the 1970s and 1980s a great project of digital literacy swept the world. Governments did not want their citizens to be "left behind in the technology race." My generation were taught to think of programming as a life skill, indeed an essential ability for modern life. A great many of us have come to know computing and the empowering force of program code as part of our lives. General Purpose Computing has become a way of being. It is seen as a right and necessity. Programming languages and open hardware are widely considered commons. This is what created the software industry. For developers, they are the air we breathe and the ground we stand on.

Some people chose to share their programs for the good of humanity, and a few people made a great deal of money from selling their programs. Bill Gates is perhaps the most famous of those taking the commercial route, while Richard Stallman is maybe the best known of those who urged a sharing approach. So profitable was the production of software, having essentially zero reproduction cost, that eventually, giant tech corporations dominated the world. Much of what they sold was taken from the commons, from the freely shared software and ideas made by ordinary people.

Soon the Tech Giants faced a problem. They had to compete with free and openly shared application programs made by everyone else as part of their day-to-day lives. The freedom and societal wealth created at the bottom was a threat to power. Thus the mythology of the "bad hacker" was cultivated, first in terms of network intruders (see Sterling's account of the Hacker Crackdown [Sterling92]) and later as the "Pirate" (Lessig's Free Culture provides a clear account [Lessig04]). An early split emerged between those who saw computer code as a commons for broad empowerment, and those who wished to privatise it and use it as a means of control.

General purpose computers enable unparalleled freedom and opportunity to those wanting to build common resources. The story of computing after about 2000 is the tale of how commercial interests tried to sabotage the prevalence of general purpose computers. The saga is too broad and far-reaching to examine here. It is already a matter of great concern for historians of computing, and still fresh in the living memory of most developers, that a vicious and sustained attack on open computing has taken place in the past two decades.

Cynics might say that governments went too far by encouraging digital literacy and home computing, and thus gave up too much power. Though they claimed to care that citizens not be "left behind", it was really the politicians and their friends in industry who did not want to be left behind. Industry cannot advance without a workforce. In reality, a technically educated populace was a short-term means to an industrial end, not a bold ideology.

Educating a generation of computer scientists is not like importing temporary immigrant labour. Like a returning army, they do not magically disappear when unneeded. The fate of any group, whether a nation or a species, cannot be separated from the welfare of each individual member. And now the genie is out of the bottle. Billions of people live with the reality and expectation of access to technology they fully control, and the opportunity to study and pursue lawful business in that highly creative field. Computing changed culture.

Today we talk of an ongoing "war" to preserve general purpose computers against Tech Giants who would like to 'pull up the ladder' and prevent usurpers benefiting from the same privilege they were given and still enjoy. A sinister ploy is the attempt by some who hold digital power to rewrite the culture, history and narrative of computer science.

A substantial number of people actually believe that a man called Steve Jobs, once an exec at the Apple company, invented computers. Some US politicians have claimed they 'created' the internet. In truth, the great minds behind what runs our society today are unknown to ordinary people. They came from places like IBM, Bell Labs, the US ARPA project and various universities between the 1960s and 1990s. Knowledge of their names and their contributions is part of hacker culture, and it is our duty to preserve and retell the correct version of computing history. Nonetheless, a "popular" version of the "technology narrative" exists, in which great benevolent companies like Google created, as if by magic, all the riches of the modern world.

In contrast to a general purpose computer, an appliance performs a fixed function. Think of a toaster. Appliances were commonplace long before computers entered our homes in the early 1980s. Indeed, the wave of 'home computing' and 'digital literacy' rolled out in the 80s felt forced to many of my generation, who could not see much point in having a gadget with undefined function. Why have something like a computer in the home?

Part of the digital literacy programme was to explain and justify computers. To that end many creative and bizarre needs were conjured up, from home automation to cataloguing your stamp collection. Home computing didn't really stick. Unless kids got into playing or creating early computer games, most 'home computers' went into a drawer or on a shelf and sat there gathering dust.

From here two branches forked out. Word-processing was the killer application which Alan Sugar's Amstrad PCW and the Commodore PET moved in to fill. The latter was marketed as a more "general purpose" machine running BASIC, while the former pitched itself more task-specifically; in its final iteration, which sold eight million units, a custom GUI sat above the CP/M Plus operating system. It was a forerunner of the "appliance computer".

Along with the TRS-80 these were aimed at a new "Office PC" market that led to the IBM PC and to Bill Gates's great opportunity to create an operating system as a product for general purpose use. What made Gates smart here was realising that people would value general purpose machines and that an operating system could actually be sold as a product. An operating system is an app, a tool; it is just the application that facilitates all the others. How far Microsoft has moved from that position is much like the distance between Apple's "1984" advert and the reality of its paternalistic control philosophy.

Today, I have dozens of useful appliances in my house, which are all ostensibly computers, but have no use as general purpose machines. A handheld audio recorder, digital thermometer, clock, telephone, oscilloscope, and active loudspeakers are just the ones I can see from where I now sit. Appliances contain 'embedded' computers that are no longer general purpose. Appliances offer the advantages of doing one task well and efficiently, while being uncomplicated and durable.

My first Nokia 'chocolate bar' phone was an appliance that lasted me ten years. The phone in my room is a rather beautiful 1980s vintage model on a landline, still powered from a 50 volt exchange supply. When workers recently severed the power cable in our street it was the only electronic item in the house still working, having been designed for resilience. Well built single function appliances are great.

An argument persists about what kind of technology is best for the environment. A downside to single function appliances is e-waste. Having one device for each job adds up to a lot of plastic and electronics. To balance that, appliances can be built to last a long time compared to rapidly evolving computers – but in practice they are engineered to break in order to sell more.

The environmental "benefits" of smartphones as multi-functional devices are thus over-rated. Each year we throw over fifty million tonnes of e-waste, around a billion devices, into landfills, squandering irreplaceable rare-earth metals and putting dangerous heavy metals, plastics and mutagenic chemicals into the environment. Unless you are susceptible to contrived needs cooked up by marketing people, a handful of appliances will serve most households. Unless encumbered by remote kill switches, designed obsolescence or deliberate obstacles to repair, most will last for decades.

Another advantage of appliances is functional stability. Unless experiencing psychosis or the effects of recreational psychedelics, I do not expect the clock on my wall to transform into a telephone or my vacuum cleaner to start suggesting recipes and movies. Sane and legible functions of technology are important to us. A normal person does not expect their technology to transmogrify, or to deceive them. Only a decade ago, claiming that your television was spying on you might have resulted in detainment and medication. Today it is a reality for anyone who has been suckered into buying a so-called 'Smart TV'.

Another downside of single-purpose appliances is space and clutter. Despite falling populations in developed countries, we live in ever smaller spaces due to spiralling real estate prices. Before the pandemic we were also on a harmful trajectory toward gratuitous mobility, with some people travelling hundreds of miles per day between multiple jobs. So, some people eschew carrying several specialised devices.

The utility trade-off between general purpose and fixed function technology is rich and fascinating. One cannot definitively say that either is better. Consider the task of packing for a camping expedition. Folks who enjoy serious outdoor activities will praise the virtues of a separate low-tech compass, map, torch, walkie-talkies and so forth. We might additionally pack a GPS enabled tracker, or take along a smartphone even when going beyond the urban signal range, but these are considered 'backups'. Which leads us to consider the Swiss Army Knife, or the matter of multi-functionalism.

In the 1990s 'multi-tasking' was a buzzword. The ability to do ten things badly instead of one thing well was hailed as a virtue. People actually wrote with pride on their resumes that they were "good at multi-tasking" (probably as an unconscious euphemism for pliancy).

Gene Roddenberry likely had an influence on our ideas of multi-functional technology through the fictional 'Tricorder' of Star Trek lore. Shrinking electronics allowed designers to gratuitously add functions to gadgets. For a decade at the end of the last century, 'featurism' drove the electronic goods markets.

If multi-functionalism is the ability to do many related things, like a Swiss army knife, then an over-reaching attempt to do everything, or at least far too many awkwardly unrelated things might be dubbed "omni-functionalism". Although smartphones were first marketed as general purpose computers, they are no such thing now. They are awkward multi-functional appliances. A notable ancestor of the smartphone, and symbol of tragic technological poverty, is the 'wind up radio alarm clock torch and personal alarm' which retailed in the pound and dollar stores of the 1990s.

Electronic eierlegende Wollmilchsau (German for "egg-laying wool-milk-sow", a do-everything device) infested the late 1990s technology markets, and over-functional uselessness even became an ironic design quality in the Japanese art of Chindogu. Its culmination was the Tamagotchi, a piece of fast-track e-waste designed to condition a generation into developing attachment patterns with technology. The Tamagotchi's role in psychologically preparing people for the smartphone era is not widely understood or recognised.

Smartphones occupy a state between general purpose computing and appliances which could be seen as a degenerate condition. Software engineers have condemned, for half a century, the poor modularity, chaotic coupling, lack of cohesion, side effects and data leaks that are the ugly symptoms of the technological chimera. As we now know, smartphones, being neither general purpose computers over which the user has authority, nor functionally stable appliances, bring a cavalcade of security holes, opaque behaviours, backdoors and other faults typical of machinery built according to ad-hoc design and a celebration of perversely tangled complexity.

Degenerate functionalism can be approached from two sides. The first is over-reach, as a fairly well-bounded multi-functional device is over-developed. When a Swiss army knife becomes a Swiss army knife with laser and shark repellent it has crossed a line. In software this is the well-known phenomenon of 'feature creep'. Alternatively, we can end up with a dysfunctional device by taking a general purpose computer and trying to 'dumb it down', which is the present trajectory for smartphones.

Smartphones were originally designed around powerful general purpose computers. The cost of general use microprocessors had fallen so far that it was more economical to take an off-the-shelf computer and add a telephone to it than to design a sophisticated handset from scratch as an appliance (built around an ASIC, an application-specific integrated circuit). But the profit margins on hardware are small. Smartphones needed to become appliance-like platforms for selling apps and content. To do so it was necessary to cripple their general purpose capabilities, lest users retain too much control.

This crippling process occurs for several reasons, ostensibly sold as "security". Security of the user from bad hackers is one perspective. Security of the vendor and carrier from the user is the other. We have shifted from valuing the former to having the latter imposed on us. In this sense mobile cybersecurity is a zero-sum game: what the vendor gains the user loses. In order to secure the vendor's rights to extract rent, the user's freedoms must be taken away.

Recently, developer communities have been busy policing language in order to expunge the word 'slave' from software source code. Meanwhile, slavery is precisely what a lot of software is itself enabling. A slave is someone or something entirely dominated by the influence of some other person or thing. Notwithstanding the fact that devices are often built using actual child slave labour, several technical factors operate here.

The first factor is that companies now only ship provisional products. Twenty years ago, if you bought a product it would be finished and you could inspect it to assure yourself. Modern software engineering has become a scam in which the development life-cycle extends far beyond the point of purchase and the buyer cannot, even in principle, verify or examine their purchase. The maintenance phase has been redefined as an in-situ test and finishing stage.

Suppose you bought a house that came with no bathroom, or roof, but instead got a vague promise of an imminent 'update'. You would likely be disappointed. This is modern software. It invariably ships in an incomplete state, filled with known bugs, missing features and security holes. What makes this possible is the flippant, bodacious, and arrogant assumption of internet connectivity.

Under the euphemism of 'software as service', each device and application has become a satellite of its manufacturers' network, intruding into the owner's personal and digital space. Even in open source software, such as the sound editor Audacity, developers have become so entitled, lazy and unable to ship a working product that they alienated their userbase by foisting "telemetry" (a euphemism for undeclared or non-consensual data extraction and updates) on the program.

Violating many jurisdictions' consumer and trade description laws, products make no mention of their 'requirement' for internet access and simply fail when they cannot call home. They are, as such, unfit for their advertised purpose. Consent to connection, if it exists at all, is via legally precarious tacit agreement to arcane terms. Bandwidth costs and security risks fall upon the owner. At the manufacturer's arbitrary whim, if the company goes out of business, or if its servers are hacked, your product will cease to work. Your sole recourse is to return the defective product.

Good luck finding an alternative unencumbered by the same tricks and design failures. These practices are endemic. The schtick is that you are absolutely dependent on ongoing support from the company. This is because dependency is precisely what they are selling you. In a competitive area such as digital technology, with small margins, products are really sold as conduits to extract valuable personal data from you, or hooks to ensure your continued profitable dependency.

The second pillar of slavery is encrypted links that benefit the vendor. Encryption hides the meaning of communications. Normally we consider encryption to be a benefit, such as when we want to talk privately. But it can be turned to nefarious ends if the user does not hold the key. This same design is used in malware. Encryption is turned against the user when it is used to send secret messages to your phone that control or change its behaviour, and you are unable to know about them.

The manufacturer will tell you that these 'features' are for your protection. But once an encrypted link, to which you have no key, is established between your computer and a manufacturer's server, you relinquish all control over the application and most likely your whole device. There is no way for you, or a security expert, to verify its good behaviour. All trust is given over to the manufacturer. Without irony this is called Trusted Computing when it is embedded into the hardware so that you cannot even delete it from your "own" device. You have no real ownership of such devices, other than physical possession. Think of them as rented appliances.
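
A short sketch may make the asymmetry plain. This is a hypothetical illustration in Python, assuming the third-party 'cryptography' package, with an invented one-line "command" standing in for any real vendor protocol:

    # Sketch of a vendor-keyed control channel (hypothetical protocol).
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    vendor_key = Fernet.generate_key()  # held by the vendor, baked into the device

    # The vendor issues an opaque command; ciphertext is all the owner can see.
    command = Fernet(vendor_key).encrypt(b"disable: sideloading")
    print(command)  # unreadable bytes; no key, no audit, no veto

    # The device, which embeds the vendor's key, decrypts and obeys.
    print(Fernet(vendor_key).decrypt(command))  # b'disable: sideloading'
    # The owner holds no key, so there is no owner-side equivalent of decrypt().

Nothing here is exotic; it is ordinary, sound cryptography. The politics lie entirely in who holds vendor_key.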

The third mechanism for enslavement is open-ended contracts. Traditionally, a contract comprises established legal steps such as invitation, offer, acceptance and so forth. The written part of a contract that most of us think of, the part that is 'signed', is an agreement to some fixed, accepted terms of exchange. Modern technology contracts are nothing like this. For thirty years corporations have been pushing the boundaries of contract law to the point they are unrecognisable as 'contracts'.

As a non-US writer I don't need to make some ridiculous disclaimer that "I am not a lawyer". I am not, but that's beside the point because we learned this stuff at school. Like every child should. We were educated to believe we should be informed legal agents in the world, able to make choices, not docile supplicants alienated from our rights and responsibilities by shrugging abdication.

But the point here is that most lawyers don't read EULAs either! Like you and me, they click through in a hurry, "agreeing" to unseen terms. Indeed, many would not be able to understand the technical details even if they did look. So, there is a very strong argument to be made that most tech contracts are invalid because the "signatories" have no capacity. Tech companies always settle rather than allow this to be tested in court.

Like the invisible software update model which allows the function of your purchase to mutate without your knowledge, you are now required to tolerate the same deceit in a legal realm. There are no longer signatures or explicit agreements, of course. Technology has made a mockery of legal process, which you will understand if you have tried to actually sign a real contract using online tools.

Now, merely by usage, association, or just failure to opt-out, you tacitly accept rolling, arbitrary future changes that require you to actively monitor some obscure source of 'notification'. Not only do you not know what your purchase is really doing, but you will remain ignorant of any and all future revisions.

A fourth mechanism for enslaving your technology is deliberate incompatibility. The consumer electronics universe and the internet exist because of standards. If chips were not standard sizes, and did not use standard power supplies and logic levels, nothing could be built. If the TCP/IP and HTTP protocols were not created and maintained by thousands of volunteer standards committee members, nothing in our modern world would work. A brilliant lecture on this is given by Richard Buckland of the University of New South Wales, who described standards as "miracles" of human knowledge engineering.
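
The openness of those standards is easy to demonstrate. Here is a sketch in Python, using only the standard library and example.com as a placeholder host: a working HTTP client is a dozen lines, because RFC-documented protocols are just plain text that anyone may implement.

    # A from-scratch HTTP/1.1 client: possible only because the protocol
    # is an open, published standard that every conforming server speaks.
    import socket

    host = "example.com"  # placeholder host
    with socket.create_connection((host, 80)) as sock:
        sock.sendall((
            f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n"
        ).encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'

No permission, licence or proprietary toolkit was needed to write that: the specification is public, and that is the whole point.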

Big Tech companies have all this to be grateful for. Their businesses are entirely built upon the standards created to ensure interoperability and equality of opportunity. What do Big Tech firms offer in gratitude? They deliberately break standards. They interfere, infiltrate and bribe standards bodies to subvert the foundations of our internet so that common protocols are replaced by their own products. They are no less than vandals.

There are dozens more tricks, but one last ruse to ensure digital slavery is locked hardware. Let's say you bought a product that is defective in the many ways described above, and decide you want to fix it. It's your property, right? Wrong. Users increasingly discover that modern digital devices are not modifiable.

Digital restrictions management (DRM) is built into the hardware, like booby traps, to thwart any engineer trying to fix your product. If you want to remove spyware, install an alternative operating system that is more secure, or even perform simple tasks like replacing a battery, you are prevented by the manufacturer. Even if a clever engineer succeeds in fixing it for you, there are laws that actually support these rotten companies. A whole "right to repair" movement has had to arise in defence of users' most basic expectations to fix their own property.

So, let's be clear. There is much more to the definition of a General Purpose Computer than the technical ability to run code as a 'Turing Complete' universal machine. Even if a device looks and feels like, or claims to be, a General Purpose Computer, under the above conditions it is clearly not. It is an appliance with degraded functionality that you happen to have in your possession. It is owned and run by the manufacturer. This description certainly fits Android and Apple smartphones, but it increasingly applies to laptops and tablets. Here is what most people mean by the 'war on general purpose computers': the substitution of a slave device for a user-programmable device.

With this background to help us better understand what a general purpose computer and an appliance are, let's think about what this means.

There are many facets to this problem of technological domination and the abdication of personal, institutional and civic responsibility to outsourced technology. I think the nefarious motives are rather obvious. If, as users, we retain control over our devices, we can exercise market choices. If manufacturers or carriers can trick us into giving up control in the name of security or convenience, they gain a captive market of users who must pay for and operate products as they are told.

To that end, corporate technology is becoming deceitful, expansive and totalitarian. That's barely debatable by now. Plus there are many things we did not understand about the immanent nature of technology fifty years ago, like network effects, which we must reckon with today.

It is common in technology to find we have entered cul-de-sacs without realising it, and need to backtrack. In the 1960s we did not know what DDT did to animal life until Rachel Carson alerted us, and likewise we must admit stark new evidence around digital technology. Today DDT is banned and environmental awareness is a first-class concern in government policy and education.

Aside from Snowden's revelations, we have yet to experience that same rude awakening with digital technology. How will we deal with spiralling teen suicides, depression, mountains of e-waste, or nation state and corporate malinfluence? What are we going to do if it turns out microwave RF really does carry a cancer risk? Spend decades and billions of dollars trying to hide it, like the tobacco industry did? Our chips may be made from silicon dioxide, but we are literally building on sand if we don't allow alternatives and choice in technological society.

That Tech Giants have transformed from enabling engines of innovation into serious threats to freedom, democracy, and Western liberal values is an opinion less widely held. It is one that I sincerely believe, and I think that if we wish to preserve the idea of sovereign nation states, and thus the social contract as the basis of law, governments must face down the tech corporations in the coming years. Those who believe this remain a minority, and whether the threshold for cultural sea-change can be reached quickly enough to save us from actual techno-fascism is unclear. Nonetheless, a palpable awakening is stirring, and none too soon.

This question of threats to General Purpose Computing, and a reinvigorated corporate war on Software Freedom is salient and topical. It is precisely because Microsoft claims it 'Loves Linux' that we should be very worried. This claim exactly fits Microsoft's past pattern of sabotage known in the tech community as "embrace, extend, extinguish".

Many voices claim to be happy with only managed or stand-alone appliances built by large corporations. Nobody can dispute their value. I love my gadgets like digital cameras, mixing desks, dumb-phones and so on. I feel little urge to hack or improve them. In five or ten years I will likely buy updated models.

However, I do not want them to be connected to a network without my explicit say-so. I do not want their function to be deceptive. I do not want cheap gadgets that I really pay for with a covert data-tax that violates my privacy and creative space. I do not want them unrepairable. I do not want them to be effectively mandated because all other choices, including the choice to abstain, are pushed aside. In short, I want the technology I choose to let into my life to be under my control, and therefore general purpose programmable options must remain a choice. That is not negotiable.

To ask whether 'people really want' general purpose computing is a disingenuous and mischievous question, posed to deflect from a deeper inquiry about whether we all, as a society or global civilisation, need software freedom and assured access to general purpose computing. And the answer to that is an almighty Yes! Provably yes. Ever since Edward Bernays in the 1930s, what people 'really want' and what they are coerced into consenting to have rarely, if ever, intersected.

A technological society without general purpose computing and software freedom is a society headed for disaster. Like the Soviet Union in 1980, I would give it twenty years at most. When Arthur C. Clarke said that "Any sufficiently advanced technology is indistinguishable from magic", he was not condoning technological ignorance and cybernetic governance, but issuing a warning (which, like the foretellings of all visionary science fiction writers, we totally misunderstood and ignored).

Advanced civilisations that survive are able to replicate and maintain their infrastructural patterns, to educate and innovate, or they die. Right now Big Tech is distinctly Soviet. The synthesis of Maoist ideologies and Silicon Valley Cosmist cultism is occurring at the data level and hardware level, orthogonal to, or above, the visible left-right political axis. We are entering a period of "consumer communism".

Hopefully it will not endure. Look at the internal ethical tectonics within Google. Look at what the EU is planning. Look at Modi's Indian government involvement in policing tech. The writing is on the wall. Unless people take back tech while open source repositories are still available we stand a good chance of suffering a devastating regression.

Five years ago I gave an AES lecture at King's College London on the "problem of the pulled-up ladder". It is something that the Union of Concerned Scientists has put alongside other existential threats as a weakness in the resilience of human civilisation. I was inspired by Thomas Thwaites's "Toaster Project" [Thwaite08] to get my electrodynamics students to build an audio system, including a workable microphone and loudspeaker, from nothing but raw materials.

I was actually surprised by their success, but also by the unexpected obstacles we encountered. The ability of a technological society to reconstruct itself is thwarted, not just by the accidental loss of capability, but by deliberate sabotage of knowledge by those protecting their power. This overlaps with education, reproducible research and the problems with patents and copyright being used to infringe people's right to repair.

Even if accessible now, thanks to Sci-Hub, much research, even high quality peer-reviewed work, is non-reproducible. Many modern patents are frauds, containing only declarative knowledge and vague over-reaching formulations of 'business logic' that are of no use to repeat study and hence to society. For example, patents granted on woolly ideas like anti-gravity engines are as fanciful as buying a plot of land on the Moon. They disgrace the Patent Office.

Modern textbooks build on high-level abstractions for which the earlier supports have been abandoned and forgotten, to the point where it's hard to find educational resources for whole disciplines within physics and maths. For example, thermionics seems to be a largely discarded branch of electronics outside hobbyist musicians and Russian military radio operators.

Each step of human technological development is important because technology is a living thing, like a tree. The trunk contains all of its earlier layers, visible as rings in cross-section. This is important historically, of course, but also practically. Access to each of the underlying layers is required to ensure the resilient propagation of the higher ones.

Societies that surf a wave of progress but forget the history that got them there are doomed. Appliances like the iPhone only make sense, and can only be replicated, in a world where General Purpose Computers are also available and widely understood. Unless we document our engineering and keep older technicians actively employed, we lose technology, and may actually need to reverse engineer or re-invent our own work. I was told by an engineer that NASA had to reverse engineer technologies from the Apollo project, because the documentation was lost and the greybeards had retired or passed on. Without the continuity afforded by a 'computing culture' we are a cargo cult. Once the generation who control production die out, the game is over.

A question then is, what powerful interests actually defend computing freedom? I am not talking about the Free Software Foundation (FSF) or Electronic Frontier Foundation (EFF) which are specifically 'activist' groups. I am talking about the industrial thinkers, philanthropists, and real statesmen who are smart enough to recognise the need to defend the values of free innovation and civic agility as the core of free technological societies. For the majority, it is strongly in their interests to preserve open computing choice.

Against the forces of convenient enslavement, who will "force us to be free" in Rousseau's sense? I am optimistic that such entities are numerous and very powerful. These include governments, the military, academia, medicine, libraries, police forces, and all the civic organs that built our internet in the first place. They must realise that capture by a cybernetic mono-culture is the antithesis and downfall of polity and civic life.

Long before that 'Eternal September' portending the pestilence of Big Tech parasites, we were all inescapably invested in interoperability, standards, long term resilience of supply chains, economic diversity and everything that brings vitality to technological progress. That's the essence of Western liberal values.

The mass market for General Purpose Computers was contrived partly to boost the microprocessor industry and to roll out 'home computing' as a digital literacy project for economic competitive advantage. It paid off. There would be no Tech Giants if my generation had not spent the evenings of our youth tinkering in bedrooms and garages. What we now need is a follow-up project of digital heritage, to ensure that what society built on the shoulders of giants is not swiped by a handful of privateers.

Now we need a cultural revolution in technology to consolidate the actual, material revolution that has occurred. We must catch up with our own achievements to arm everyone with the understanding necessary for a participatory, democratic technological society that doesn't exclude groups - like those who choose not to carry a tracking device, prefer to carry cash to tip waiters and give to buskers, or simply make the choice that brain cancer, cognitive and eyesight decline are not health risks they wish to have forced upon them.

The pervading tech-pessimism we feel today is justified because we are in a nadir of real innovation where control has swung too far into the hands of the few. Much of what we innovate is techno-solutionist junk, and we know it. Part of Big Tech's devious project to ossify digital consumer technology is a land-grab because they know their grip is actually rather weak.

They know that an innovative or cultural paradigm shift could strip them of power rapidly. They want to pull up the ladder, to make sure the next generation don't benefit from the same privileges as we did. On the world stage, isolationist nationalism, great firewalls and splinternets are other symptoms of the same underlying collapse of confidence in a common project of beneficent technology.

As we approach the end of the first quarter of the twenty-first century, I sense the first phase of popular computing technology is reaching its end. It led to giant corporate walled-gardens like Google, Apple and Facebook that will hopefully wither and die in due course, so the present unpleasant situation will change. I realise that we are in the infancy of computing. The Harvard and von Neumann architectures, on which the past seventy years of computing and Moore's Law have ridden, were just the opening [3].

Massive technology diversification is just around the corner. Ray Kurzweil misled West Coast American thinking with what I believe is a wrong-headed conceit of a 'singularity' - a consolidated Cosmist utopia. The so-called singularity is not a moment of convergence, but an explosion. It's one in which strategic limitation of technology becomes impossible and the ability of any one group or power to define it vanishes. Of course, it is also a dangerous moment.

Anyone who sees the booming market in single board computers, hybrid neural and DSP processors, massively multi-core silicon with embedded GPUs, new FPGA technology and the rise of RISC-V would surely laugh at the idea that General Purpose Computing is dying! But there are deeper theoretical objections to that claim. Advanced general purpose computing in myriad new forms might overtake our ability to define what computers are. That is a danger too. What we really need is a balance between genuinely general purpose computing and a shared concept of interoperable computing, making computers socially useful as well as offering individual utility.

While Big Tech wastes time locking down platforms, creating dumbed-down appliances, slurping up yesterday's data, telling people what they want and must have, creating thinly-veiled surveillance engines, securing their walled-gardens and 'intellectual property', paradigm shifts will sneak up on them and leave them in the dust. Read a little history; it's always what happens.

Meanwhile, young people who continue to take a deep interest in technology, who learn programming, maths, physics and electronics, and who assert their right to tinker, repair, design, reverse engineer, build and sell new machines will hold the future. They will always be a threat to Big Tech, who will use broken patent laws and their power under copyright to censor them. But the Tech Giants cannot resist the next generation of technological optimists, who will refuse to be dumbed-down, who will insist on ownership of their technology, and who will ultimately 'take back tech'.




Andy Farnell is a British computer scientist specialising in signals and systems. He is a regular Times Higher Education writer. His latest book, Digital Vegan, is available now at https://digitalvegan.net. His much-anticipated Ethics For Hackers will be published by Routledge in 2022.




Footnotes:

[1] A person who believes not that technology can save us, but that we can save technology.

[2] Sadly, Babbage never completed his fully programmable Analytical Engine. It was not until 1991 that the Science Museum in London built a working Difference Engine No. 2 from his plans.

[3] Moore's law ended roughly in 2008, according to the analysis at https://cheapskatesguide.org/articles/moores-law-is-dead.html.


Bibliography

    [AlKhwarizmi] Abu Ja'far Muhammad ibn Musa al-Khwarizmi, Kitab al-Jabr wa-l-Muqabala (circa 820).
    [Sterling92] Bruce Sterling, The Hacker Crackdown: Law and Disorder on the Electronic Frontier, Bantam Books (1992).
    [Lessig04] Lawrence Lessig, Free Culture, Penguin Press (2004).
    [Thwaite08] Thomas Thwaites, The Toaster Project, Princeton Architectural Press (2011).