One of the things that makes me super bored now is blogposts with a super self-assured reductive take on AI and what AI is and will be and mean and will mean. (And usually the post also has a tedious rant on what to call it instead of "AI", and why AI is such a misnomer.) Those have gotten so old already.
Usually the take is that it's an over-hyped nothingburger, and that take has been done to death. The opposite take is just as boring.
One of the worst ones I saw recently (I've thankfully forgotten the name) used the I-wish-I-could-say-it-was-unique schtick of mixing in threats of physical violence, like "I will kick in the teeth of the next person who hypes AI" or whatever it was. I don't remember, because I was dead from boredom, let alone all the violence.
I do have some sympathy for the fact that everyone wants to get in on day one and be the Marshall McLuhan or Timothy Leary or William Gibson or Scott McCloud or Cory Doctorow of this, first to understand our contemporary world properly and to describe it accurately, so I'm not saying you can't keep making these posts full of unfounded cocky guesses. Go ahead. Saves me money on sleeping pills.
In the fifties they at least had the good taste to throw some drugs and tits and bug-eyed monsters into their speculative fiction. That was a li'l more fun.
Over email, someone asked "whaddayamean 'day one'? neural network language models aren't new", and that's true. But when I wrote "day one", I tried to contextualize that by giving examples like Marshall McLuhan, who was writing in the 1950s even though researchers had been trying to invent television since the 1900s; similar time gaps exist for some of those other philosophers, like Timothy Leary. McLuhan wasn't writing about the technical details of transmitting cathode ray control codes, he was writing about how society was affected and about how our own minds were altered by the practical application of the new tech and the emergent consequences thereof. Which, for some quite complex inventions, ends up being nothing, a dead end (like the whole decades-long stopgap that was "optical media" like CD-ROM or, more recently, those embarrassing proof-of-work ledgers of non-fungible tokens), while other simple-seeming inventions do end up changing societies profoundly.
It's absolutely true, and needs pointing out, that hypes and tulipmanias are driven by gamblers and bagholders. But it's also true that sometimes tech, like modems or cars or TV, does end up making our day-to-day pretty different. Even for those of us who try to opt out: I can avoid television, but I still have to live in a world warped by policies set by officials mandated by an electorate swallowing televised lies.
I'm glad people are thinking about this stuff and not buying into hypes too hastily. At the same time, some of these pundit sermons ex cathedra have come across as more wishful thinking than anything else.
If ML models stay this bad, we won't need any arguments because they'll collapse by themselves in dot-com 2.0, and if they become good, the "they suck" argument isn't gonna last us far.
My whole case here is that it's premature to guess whether they're gonna become useful or whether they're gonna stay ugly and confused, so I didn't really wanna make a guess either way. Not that I can't: my own guess is that they are going to become very capable. Not Pinocchio level, just really useful and often the most convenient way to solve things, like the camera was for illustration and digital computation was for math. Which is why the very real problems of energy use and ownership concentration need to be solved sooner rather than later.
And if I'm wrong about that, if they're gonna stay as bad and useless as the we-hate-all-changes crowd of conservative xennials believes they are, then that's fine by me, too. Still need to solve those same two problems, though; a tulip by any other name still uses resources.