As usual I'm about a month and a half late to the party.
ChatGPT, and large language models in general: can they understand? Is the singularity upon us? Has humanity finally automated itself out of capitalism? Having serendipitously begun reading Wittgenstein's _Philosophical Investigations_ (PI), while also playing with ChatGPT a bit, I have thoughts.
The common consensus is that ChatGPT doesn't really "understand" what it's saying; it's more like a really smart parrot, repeating what it's seen before based on context. But I notice that, as happens with every new AI leap forward, there's not been much in the way of setting tangible goals for an AI to reach in order to be considered "understanding". We move the goalposts, but we don't even know where we're moving them to. Truthfully, we don't understand "understanding", but we think we know it when we see it.
PI explores this idea quite a bit, though outside the realm of technology, focusing instead on language and meaning. Wittgenstein takes on the idea that "words have meaning", in the sense that there is some immutable thing out in reality which a word indicates, and that by stringing words together we can convey statements about reality, and therefore convey "understanding" of reality. I won't rehash the book, but the conclusion seems to be: that's not really how it works. If you try to dig down and find the meaning of words beyond their use, you come up empty.
"In order to find the real artichoke, we divested it of its leaves." (PI)
Rather, language is more of a game we play with others, where the rules change based on the context the game is in. A word doesn't have an underlying meaning any more than a pawn's moving forward has a preordained underlying meaning in chess; the meaning of the move depends on the context within the game.
Personally, I would say that LLMs play the language game quite well. For a start, they display perfect command of syntax and grammar, but beyond that they respond in reasonable ways, display some level of imagination, and boast a short-term memory which they are able to incorporate into their play. I have met apparently human individuals who have not displayed all these qualities.
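That short-term memory, incidentally, isn't anything mystical. In chat-style APIs it's just the conversation so far, re-sent in full with every request. Here's a minimal sketch, assuming the OpenAI Python client as it existed around the time of writing (`openai.ChatCompletion` and the gpt-3.5-turbo model); everything else in it is illustrative:

```
# A toy chat loop. The model's "memory" is nothing more than the
# messages list, which we grow and re-send on every turn.
import openai  # assumes the early-2023 openai client, API key already configured

messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    # Note who initiates every exchange: the human. The model never
    # speaks first; it has no loop of its own.
    messages.append({"role": "user", "content": input("> ")})

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,  # the entire history, every single time
    )

    reply = resp["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```

Drop the re-sent history and the "memory" vanishes entirely, which says something about how thin its resemblance to living memory really is.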
But for all that, I can't help but agree that the conversation lacks something. That something is _not_ "understanding", which I don't think is a real goal. Other terms like "intelligence" and "humanity" are, I think, similarly lacking. I don't accuse my cat of being a hollow machine for his lack of humanity and intelligence, after all.
The element that LLMs are lacking, which leaves them still feeling like wind-up toys rather than living beings, is _will_. An LLM can only respond; it can never ask (even if we can trick it into phrasing responses as questions). My cat feels alive because he has needs and desires, and through his actions he asserts himself on the world around him accordingly. Having seen this same quality in every other living organism, I associate it with being alive.
LLMs are like a waterwheel which was built in a desert. Yes, we can spin it ourselves, and something useful may even happen, but until the wheel can find some water it is falling short of its true potential.
Will isn't even a problem of language; it's an entirely different beast. Even through language alone an LLM can accomplish interesting things in the real world, using a human as a proxy.
The "M" in LLM stands for HUSTLE
But even with a human proxy, the AI still needs a human's goal. Truly living organisms have a built-in, complex, analog system which perpetually instills our goals into us, and luckily we have the corresponding tools with which to attempt to achieve those goals. Thus far we have given computers the tools, but not the will to use them.
Will we ever instill a will into a machine? Well, in the sense that there are 8 billion people on this planet, _someone_ will probably try. But as a true milestone, as a societal phenomenon, I remain hopefully doubtful. A machine with a will is not, at present, the goal. We want a machine which can carry out our own will to the greatest effect possible, not yet another will which we have to assert ourselves against.
================================================================================
Published 2023-03-25 by mediocregopher
================================================================================