💾 Archived View for zaibatsu.circumlunar.space › ~shufei › phlog › 20240406-Tech-Pol-ButlerianCircle… captured on 2024-08-25 at 00:02:25. Gemini links have been rewritten to link to archived content
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
It’s been clear for some time that the power elites have decided to corral the global populace under a new cyberfeudal order using AI bots of various calibres. The order of the day is data peonage.
In the explicit realms of war or policing, this means surveillance and targeting solutions determined by AI. Before long the weapons of war will be under direct AI control. QED, remaining fig leaves of human oversight on automated war are already being abolished. The IDF is currently pioneering these systems to maximum collateral attrition upon the people of Gaza.
We should be under no illusion: even the more utilitarian and consumer-level deployments of AI are strategically integrated as a project for the more quotidian herding of populations. This is to say, the rôle of AI at any level is part and parcel of a coördinated project of global subjugation by the various power interests. (If only this were hyperbole. If only.)
We hence must secure media and cultural spaces maximally free of AI intrusion. It must be possible to do. Even the attempt of resistance enhances human dignity.
We might have little defense against the tactics of assassination under empire as augmented by machine learning. But I’d like here to pose a few questions and possibilities for resistance to AI at the level of quotidian community curation.
Community curation must needs first establish trust. Trust does not need thorough agreement. Indeed, trust functions most impressively when it functionally crosses lines of exact agreement. But trust needs *firm boundaries* for practicality. To any human endeavour there are tolerance limits. In the theatre of Butlerian Jihad (resistance to AI implementation), trust first must seek to establish a few basic parameters of operation. To wit:
1. Spaces free from AI (by whatever definition) are QED good. AI-free fora are desirable as optimal for human community. The widest negative application of this axiom for Butlerian Jihad is: *AI is toxic for human life.*
2. The best secure future-forward way to delineate such cultural spaces is by good faith agreement and editorial vetting by humans for humans.
3. Solutions to maximize good faith and human trust should be as elegant and widely deployable as possible. Techniques, protocols, and best practices should be available in as many media as needful to secure some basic threshold of AI-free societal interaction.
We are already behind the game. Time is short. But I see a few tools at our disposal which could be immediately applied to the problem of bolstering good faith bot-free human community. This is to say we must kludge. Now.
For digital media as crux we may resort to the tools granted us by keypair systems such as PGP. As the spec sheet for PGP states, the goal here is not merely trust in particular exchanges. The wider goal is to use keys as human end point quantum units of trust in pursuit of *collective operational circles of human centred trust.*
Signatures should be further streamlined in operation to this end for easiest deployment in any medium, mode, or protocol. This has not happened yet. Public-private key encryption is still a highly abstruse art to even digitally engaged sectors of humanity. Partly this is by design; maleficent actors benefit from this obscurity. But partly this is due to complacency amongst the wider public, a complacency likely to end in the near term. There is also a certain delight in obtuseness in the design priorities of many developers. That after 30 years better UX for keychain use hasn’t been achieved speaks volumes about the general failure of the infotech magisterium as agent of commonweal.
The solutions for wider signature use ought be simple to use and simple to implement in the best spirit of smalltech. Select text. Click a button. Enter a password. Signed.
And what can the signature mean that it hasn’t yet meant? What might it mean for our needs against AI? Perhaps several levels of things beyond “I wrote this as a human”. A signature might mean “this was interesting”; “this was likely written by a human”; “this is AI bot garbage”.
The key here is that signatures become not merely first person attestations, but third person witnesses to humanity.
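As a thought experiment, such a third-person “blip” could be a tiny signed record. The sketch below is my own invention, not any existing protocol: the flag vocabulary and record shape are assumptions, and an HMAC stands in for a real detached PGP signature purely to keep the example self-contained.

```python
import hashlib
import hmac
import json

# Hypothetical flag vocabulary, drawn from the levels of meaning above.
FLAGS = {"human", "likely-human", "interesting", "ai-garbage"}


def make_blip(url: str, flag: str, signer: str, key: bytes) -> dict:
    """Build a third-person attestation ('blip') about a URL.
    The HMAC here is a stand-in for a real detached PGP signature."""
    assert flag in FLAGS
    payload = json.dumps({"url": url, "flag": flag, "signer": signer},
                         sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"url": url, "flag": flag, "signer": signer, "sig": sig}


def verify_blip(blip: dict, key: bytes) -> bool:
    """Check that a blip's signature matches its contents."""
    payload = json.dumps({"url": blip["url"], "flag": blip["flag"],
                          "signer": blip["signer"]}, sort_keys=True)
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, blip["sig"])
```

With a real keypair system the signer’s public key, not a shared secret, would do the verifying; the point is only that the record is small enough to scatter anywhere.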
If we can still further assume we may use the signature as basic unit of trust, how might signatures better work into our current media of communication?
I can imagine on this score a way to sign *everything online* with a minimalistic system of trust flags. Just as social media was eventually lifted out of the blog, so too might the “like” be entirely divorced from the post or comment as a separate system. This would likely have salubrious effects on both forms of communication. Apropos human trust, a system of quick signatures might be made largely invisible in this model of emoji reactions. A federated list of signature blips per url, say. Clients or addons might gather these reactions by rss. It need be no more complex than that.
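A federated feed of such reactions need carry very little. As a sketch (the `<blip>` element and its attributes are invented for illustration, not any real feed standard), a client or addon could tally the reactions it gathers per URL like so:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A minimal, hypothetical feed format: one <blip> element per reaction.
SAMPLE_FEED = """<feed>
  <blip url="gemini://example.org/a" flag="likely-human" signer="alice"/>
  <blip url="gemini://example.org/a" flag="interesting" signer="bob"/>
  <blip url="gemini://example.org/b" flag="ai-garbage" signer="alice"/>
</feed>"""


def tally(feed_xml: str) -> dict:
    """Gather per-URL reaction counts from one federated feed."""
    counts: dict[str, Counter] = {}
    for blip in ET.fromstring(feed_xml).iter("blip"):
        counts.setdefault(blip.get("url"), Counter())[blip.get("flag")] += 1
    return counts
```

It really need be no more complex than that: an RSS enclosure or a flat file per site would serve equally well as transport.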
How best, then, to aggregate these tokens of human trust? Obviously I favour a federated option as most resilient to tampering. We have all seen the failure of big tech up close and personal.
More, I’m curious as to how signature tokens of human trust might aggregate. Here we get into muddy water, natch, as any time computing attention is brought to bear upon human masses. But I hazard that there might be a way for “HumaNet” web &c. clients to scrape enough peer to peer signature tokens to justify a site as “humanly inclined”.
Imagine a distributed directory or federated search engine suite something like Searx. Now overlay onto that engine the signature tokens. Sites with a critical amount of human disapproval would be removed from the search engine. Sites signed as anthroposanitary would be given higher rank. This could work as easily as any brute algorithm or even require none at all.
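The overlay could indeed be nearly algorithm-free. A sketch, with the flag names and the one-half removal threshold as arbitrary assumptions of mine:

```python
def rank_sites(sites: list, blip_counts: dict, ban_threshold: float = 0.5) -> list:
    """Order sites by human approval; drop any with a critical
    mass of 'ai-garbage' flags."""
    ranked = []
    for url in sites:
        counts = blip_counts.get(url, {})
        total = sum(counts.values())
        bad = counts.get("ai-garbage", 0)
        if total and bad / total >= ban_threshold:
            continue  # removed from the engine entirely
        good = counts.get("human", 0) + counts.get("likely-human", 0)
        ranked.append((good, url))
    # Higher human approval ranks first.
    return [url for _, url in sorted(ranked, reverse=True)]
```

Nothing here weighs, learns, or infers; it only counts signatures, which is rather the point.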
If anonymous, such a system would of course be very open to manipulation by mass bad faith actors. This is the morass crowdsourced culture has brought us. We ought have no blanket blind faith in the supposed wisdom of crowds.
But we already have tools to deal with this issue. Editorial curation of web directories is now de facto controlled by advertising/malware/spyware blockers. The same device can be applied to AI filters based on circles of human trust.
There ought be blocklists, natch, of every taste and level of credence. For every language and culture, too. This trust should resolve to local community circles. If Alice knows Bob is strict about AI filtering, she can subscribe to his lists. Charlie likes Alice and Bob, so subscribes as well.
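The Alice-Bob-Charlie scheme is, mechanically, just set union over subscribed lists. A minimal sketch, with the curator names and domains purely hypothetical:

```python
def effective_blocklist(my_subs: list, published: dict) -> set:
    """Merge the blocklists of every curator I subscribe to."""
    blocked = set()
    for curator in my_subs:
        blocked |= published.get(curator, set())
    return blocked


# Charlie trusts Alice's and Bob's judgment, so he subscribes to both.
published = {
    "alice": {"botfarm.example"},
    "bob": {"botfarm.example", "slopmill.example"},
}
charlie_blocks = effective_blocklist(["alice", "bob"], published)
```

This is exactly the subscription model adblockers already use, repurposed for circles of human trust.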
Nor need that be enough on its own. The lists themselves can be signed for human trust, evaluated by ombudlists. This editorial check on the checkers should be as diverse and as distributed as possible. Signatures on ombudlists for whitelisting as well as blocks would help balance effort.
The question in any case should be framed with a simple eye to inoculating human online community against AI. Adblocks do this job fairly well, but suffer from lag in the loop of surveillance and editorial decision. What counts as advertising? Signature feeds could go a long way toward purging the internet of advertising as well as AI.
People cannot make an informed decision about how to engage with AI, or whether to engage at all, unless they are aware which actors admit these machines. So the ombudlists should include both a client implementation and a “wiki talk” layer. Wiki front ends could catalogue in a consummate but concise way which bad actors have been lately exposed and blocked.
The wiki layer of Butlerian Jihad would be greatly important for purposes of education and news. But it’s not good enough for the critical work of widest possible inoculation against AI. We know that the internet takes care of mass information by itself, often at poor quality. What is required is deployment of editorial voice to curate digests. In other words, the nearly lost art of journalism.
Envision a wiki site for an ombudlist which carries news digests on the latest blocks and whitelists. A bad actors shame page. Basic education and FAQ’s regarding the deleterious trajectory of machine learning. This is all community movement work, natch. But education must needs cleave closely to the operations of establishing the edges of the “HumaNet” itself.
All of this is retail activism at heart. We must quickly ingrain the habit of boycotting institutions which deploy AI as deeply and widely as possible. Such organs of state and corporation as avail of machine learning should be concisely marked as pariahs from human community, extirpated from circles of human trust. The boycott must finally become a way of life if we hope to pursue a human future.
Slavers; rapists; arms merchants; oligarchs; AI devs. We all should say the list with the same breath. The goal in education should be to streamline the marking of such malefactors beyond the pale - with utmost rigor and journalistic reliability.
The PGP spec sheet mentions that keys should be verified face to face when possible. This is the sine qua non of human trust. And of course, this is what even the most fanatical of us fall down on the most. In the end, we can digitally sign here, there, and yonder; but if “meatspace” human community continues to erode, the entire edifice of human trust will continue to decline under it.
Here I register a critique against the liquid digital culture. Online culture becomes toxic in some proportion to its lack of *accountability to the offline world*. As such, the first question we ask people should switch from “what is your Facebook” or “what is your phone number” to “what is your public key?”
I have begun to suspect that a human future with computing can only be assured by a sort of permanent revolution toward Offline First. That is to say, in some sense the first praxis of Butlerian Jihad (or smoltech for that matter) must always be Offline First. The danger of not implementing such a touchstone for accountability is apparent by now in the liquid digital mob of current online culture.
I don’t think this means we must only trust those we have met face to face. Nor must we abandon the anonymous/pseudonymous internet altogether. Neither would I say meeting face to face ensures trust. The simian revolution in mendacity removed such possibilities of sacred naïveté from most of humanity.
But the ultimate signifier of trust should be “I personally deeply trust this person and know them offline”. This signature token should be highly privileged in all digital culture. From this font of *offline trust*, all the other circles will likely flow to ensure some grounding to human affairs in accountability.
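One way to privilege the offline token is simply to make trust an ordered ladder and resolve many attestations about one key to the strongest rung. The level names below are my own invention, sketched for illustration:

```python
# Hypothetical, ordered trust ladder; "met-offline" sits above every
# purely online token, per the argument above.
TRUST_LEVELS = ["unknown", "signed-online", "vouched-by-circle", "met-offline"]
_RANK = {level: i for i, level in enumerate(TRUST_LEVELS)}


def strongest_trust(attestations: list) -> str:
    """Resolve many attestations about one key to the single
    strongest level on the ladder."""
    return max(attestations, key=_RANK.__getitem__, default="unknown")
```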
The key to the error of crowdsourced culture, to my mind, lies in its denial of editorial voice. The crowdsourced culture is one in which no particular person’s specific worldview matters, because the buck stops nowhere. The buck being accountability, natch, but also the buck being *sincere personal perspective*.
There’s the rub; and the irony. It is easier to see in hindsight that an anonymous mass culture would strip us of individual recognition, care, and commonweal. And thence to our current morass of social media empty calories is not far. Withal, mendaciously manipulated by parasitic algorithms - and now by pseudosentient machine intelligence. A heck of a muddle for the species.
I suspect general trust requires the editorial imperative to the degree that the editorial imperative is based in judgment. Good judgment, good will, good faith. This is as far as I can tell as close as sentient beings become to trust: that we stand in stead for each other in these things with constancy and critical rigor. And nuance. Ye gods, give us nuance.
In the end, what we say when we say we trust is that we are willing to stand as guarantors for accountability. And the only way we can do so is by care rooted in our heartfelt perspectives. That is, we must strive more than ever to prize agreement per se less, and accountable trust standing on its own ground more.
So I would hope any reader attend to these deeper touchstones, especially if you have taken to your own heart what I’ve written. I don’t so much think the internet must be de-anonymized to be accountable as *particularized* to our deeper perspectives, which then may root us together in trust and care. How this happens, I don’t rightly know. I do hope some of the foregoing stimulates some possibilities, at the least as thought experiments.
-EOF-終-30-