Archived View for idiomdrottning.org › re-civ-thoughts captured on 2024-08-31 at 14:27:13. Gemini links have been rewritten to link to archived content
⬅️ Previous capture (2024-02-05)
-=-=-=-=-=-=-
Shannon Vallor (and I can't necessarily vouch for this think tank, I was not familiar with them) wrote a piece called
The Thoughts that the Civilized Keep
I think Vallor has said/done some other stuff that has been good or more thought-through, so this isn't meant as a general slag on her either.
I def agree with the basic point that so-called "Pinocchio"-style AI that really understands and really wants to be alive is thousands of years away, if ever (unlike AI that's good enough to do a lot of stuff I do by hand today, like sorting images and folding proteins and such, which might come sooner).
But.
This argument is just 100% empty semantics:
Understanding cannot occur in an isolated computation or behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory in the world.
These words mean absolutely nothing.
Again, I'm on the same side as the arguer. Yes, GPT-3 is just a glorified Markov chain, a re-released cut-ups project, a slightly-worse version of the I Ching or Tarot. GPT-3 specifically is a very long way away from understanding.
But use real arguments please. Use better arguments than what GPT-3 would generate:
"True comprehension cannot happen in a function application because the real comprehension was the friends we made along the way. Friendship is something that the cold-hearted robots can never understand because they, unlike every single human, don't remember their personal experience of the entire Earth's history billions of years back, nor do they, unlike every single human, have true and full awareness of the inevitable hand of the reaper."
This:
"Understanding is not an act but a labor"
and
"Labor [depends on] history or trajectory in the world"
is a conflation fallacy on two separate polysemes of "labor" ("process" vs "effort").
Rather than restate her case in my own words, please let me reply to it in more detail, because I don't fully agree with her here.
More importantly, it reveals the sterility of our current thinking about thinking.
Our "current thinking about thinking" is a huge field across psychology, cognitive science, mathematics, even religion and poetry.
Dismissing all of that collective introspection as "sterility" is, well…
I don't wanna unequivocally defend the Promethean quest of replicating our own awareness.
A growing number of today's cognitive scientists, neuroscientists and philosophers are aggressively pursuing well-funded research projects devoted to revealing the underlying causal mechanisms of thought, and how they might be detected, simulated or even replicated by machines. But the purpose of thought — what thought is good for — is a question widely neglected today, or else taken to have trivial, self-evident answers.
That's not true at all. That's a core philosophical question from Zhuang Zhou to Camus to Ligotti to Bodhidharma.
I'd say the opposite is true: modern-day computational information processing is over-emphasizing behavioristic purpose and results. (Although the fact that they have their focus there suits me just fine. ♥️)
What purpose, then, does thinking hold for us other than to be continually surpassed by mindless technique and left behind?
What purpose does it hold for us even without any machines? It's the whole "sixpence none the richer" argument from Lewis, the old "Sisyphus is happy" argument from Camus, the old "chop wood, carry water" from Buddhist thought.
Our existence is precious to me because it is purposeless, because it is useless, because we may exist anyway. We're enough.
Her reference to Dreyfus' 1992 "What Computers Still Can't Do" isn't really current, since these AIP systems have moved on from structured heuristic symbol manipulation. They now work way more dreamily and intuitively. The mantra when I got my computational linguistics degree was that statistics was better than linguistics.
By symbol manipulation, we mean the early modernist view of thinking that overly conflated the map with the territory for things like "grammar—language", "Linnaean nomenclature—life", etc. Chess engines crammed full of theory and guidelines and deliberately programmed "if this then that".
This is in contrast to contemporary AIP systems, including GPT-3, as crappy as that is, because the contemporary AIP systems are more… "grown" than "constructed". Less "follow this recipe" and more "let's try to catch some sourdough".
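As a toy illustration of that contrast (the rules, corpus, and function names here are all invented by me, not anything from Vallor's piece): the "constructed" approach hard-codes the linguist's map, while the "grown" approach just tallies whatever the corpus happens to do.

```python
# "Constructed": hand-written if-this-then-that rules, the old symbol-manipulation style.
def pluralize_by_rule(noun):
    # A deliberately programmed recipe (and, like most recipes, full of holes).
    if noun.endswith(("s", "x", "ch", "sh")):
        return noun + "es"
    if noun.endswith("y") and noun[-2] not in "aeiou":
        return noun[:-1] + "ies"
    return noun + "s"

# "Grown": no grammar rules at all, just count which forms actually occur in some corpus.
from collections import Counter

def pluralize_by_corpus(noun, corpus_pairs):
    counts = Counter(plural for singular, plural in corpus_pairs if singular == noun)
    most_common = counts.most_common(1)
    return most_common[0][0] if most_common else None

# A tiny made-up corpus of (singular, plural) sightings.
corpus = [("cat", "cats"), ("child", "children"), ("child", "children"), ("child", "childs")]

print(pluralize_by_rule("child"))            # → childs (the hand-written rule overgeneralizes)
print(pluralize_by_corpus("child", corpus))  # → children (the counts get it right)
```

The recipe breaks exactly where language is irregular; the sourdough approach handles irregularity for free, but only knows what it has seen.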
Labor is entirely irrelevant to a computational model that has no history or trajectory in the world. GPT-3 endlessly simulates meaning anew from a pool of data untethered to its previous efforts.
A fundamental difference between AI and humans is that AI is like a song on a tape. You can stop, rewind, start over. (It's unlike a tape in that it can go somewhere new each time.)
Each human child starts out without a personal history of experiences, but absorbs a lot of info and processes from culture and surroundings.
When an AI app starts over, it doesn't start over completely fresh. Creating GPT-3 was a climate disaster, but that work does not have to be re-done every time.
When a human child is born, it does not remember what some other dead person has personally experienced and thought. When you're absolute beginners, the kingdom is for you. ♥️♥️
GPT-3 is constantly rewinding. It doesn't reincorporate (although there are many AIs that do do that! Although that's not necessarily what you want, for tool purposes, since it comes at the expense of predictability), but it doesn't start over either.
When GPT-3 answered Millière's question, it basically pasted together stuff from science fiction pulp stories. That's what it does. It's a cut-up, statistics-driven jumble machine.
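To make "cut-up, statistics-driven jumble machine" concrete, here is a minimal sketch of the kind of bigram Markov chain that GPT-3 is a glorified version of. The toy corpus and seed are invented for illustration; real language models are vastly bigger and work differently under the hood, but the "sample a statistically likely next word" spirit is the same.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which words follow which in the corpus."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def jumble(followers, start, length, seed=None):
    """Paste together a "new" text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        options = followers.get(out[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A tiny stand-in for the "pool of data": pulp-flavored filler.
corpus = "the robot reads the pulp story and the robot writes the pulp story again"
model = train_bigrams(corpus)
print(jumble(model, "the", 8, seed=1))
```

Each run (with a different seed) rewinds the tape and cuts up the same corpus anew: nothing carries over from one generation to the next.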
AI will in the future get so good at doing this that it'll look like it cares.
Now, to avoid falling into the same empty-semantics "it's not true labor because I say it's not" pitfall myself… what does "caring" about something mean?
AIs are conditionally rewarded on good results and extinguished on bad. Like humanity as a whole.
I almost wanna write "AI does try to deliver, and if that's what caring about what they do means…", but "try to" is too teleological.
What even is "trying"? Do humans ever "try" to do something, or is that just the word we've assigned to what it feels like when we do things with uncertain outcomes?
We care because we donât want bad outcomes (such as the people we love getting hurt).
We try because we care.
IDK, and maybe we never will.
I wrote the other day, in reference to Lewis' sixpence argument:
Pretty nice! So even if our existence would be just like a bunch of beeping, broken Tamagotchis, pre-programmed robots on strings… we could still do good if we so wish. A meaningless good is good nonetheless. ♥️
It's like… Vallor's piece is motivated by fear. Yeah, things can go wrong pretty darn quickly, I'm not disputing that. But this fear can lead us to kinda reach when it comes to arguments.
It's the God of the Gaps argument, except that it's "humanity of the gaps".
For me, when I'm confronted with a fear such as this, such as "what even is the purpose of being a thinking human? Aren't we just meat puppets with a one-way ticket to Boot Hill anyway?", I wanna face that fear openly. I want to confront it. I want to think about it and sit with it and maybe accept that some things are pretty awful and some things pretty wonderful. ♥️
It's one life, it's this life, and it's beautiful. ♥️
Sifr, who was the one who linked me to Vallor's text, wrote:
I am here for considering non-human approaches to life and communication but AI does not strike me as interesting in this direction, like at all
Oh, same! That's a really good point.
I just think she is saying a lot of nothing and a lot of unfounded things, based on just fear and hope and empty semantics.
Does that mean that the GPT-3 hype crowd is right? Absolutely not. I'm not into AI stuff.
But that's more because of how I feel about it (it strikes me as less interesting or fruitful than other forms of meditation or cognitive research) than anything I could actually logically reason for.
jmcbray writes in:
[I]t seems to me that when we say that GPT-3 lacks understanding, we mean that it's building on only a model of language, not a model of language and a model of the world, the way hew-mons do.
Yes, that's a distinction that puts GPT-3's limitations in pretty clear terms.
GPT-3 specifically is a model of un-dereferenced references.
By de-referencing, I mean how humans like to do some sorta mapping from "the pencil on my desk" to the actual pencil on my desk that I can reach out and touch, from "I like pizza" to actually thinking about some pizza, how it smells and feels.
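A crude sketch of that distinction, with every name and attribute invented by me for illustration: a pure model of language maps strings to more strings, while de-referencing maps a string to a thing with properties outside language.

```python
from dataclasses import dataclass

# A pure model of language: strings in, strings out, nothing underneath.
language_model = {"the pencil on my desk": "a writing implement"}

# A model of the world: the phrase de-references to an object with
# properties that exist outside language.
@dataclass
class Pencil:
    location: str
    length_cm: float
    sharpened: bool

    def reach_out_and_touch(self):
        return f"a {self.length_cm} cm pencil at {self.location}"

world = {"the pencil on my desk": Pencil(location="my desk", length_cm=12.0, sharpened=True)}

phrase = "the pencil on my desk"
print(language_model[phrase])               # only ever yields more words
print(world[phrase].reach_out_and_touch())  # yields an object you can poke at
```

GPT-3 only has the first dictionary; the jmcbray point is that humans carry both, and keep them mapped to each other.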
What about Watson, the glorified Wikipedia that won Jeopardy? Is that just a model of language, or also a model of the world? I'm not arguing for Watson's level of comprehension here, just the desired scope. Is it Watson's job to, solely on the language level, write Jeopardy answers, just like it's GPT-3's job to write texts?
jmcbray continues:
The most interesting, to me, grounds for suspecting GAI is not possible is the argument that humans are not a general intelligence either — that we're good at a very wide range of things, but hopelessly bad at others, and especially bad at seeing our limitations.
Yes, maybe the fear isn't so much that we'll, like Mary Shelley's Prometheus, grab the spark of fire from the heavens and create towering life out of slime and bricks.
Maybe the fear is that we'll discover our own puppet strings.
But don't worry, darling, if that were to happen. Discovery of circumstances isn't the same as changed circumstances. Everything we could do before, we could still do after. Dance, sit, chop wood, carry water, give a sixpence to God. ♥️