by pjvm
30-1-2021
antilog entry #1
The "Chinese room" is a thought experiment slash argument by analogy, meant to prove that a computer cannot really understand something.<a> The analogy is between a hypothetical computer (running a program, of course) that appears to understand the Chinese language, and a human operator to whom Chinese is a foreign language in a room with instructions that are a human-executable version of the computer's program. Chinese text is typed into the computer / thrown into the room, the computer / operator goes through the steps of the program, and at the end some Chinese text is given back that is perfectly believable as a reply a Chinese-speaking person could have given. The key observation is: the operator does not understand Chinese.
This is intended to show that the computer, aided by the program, does not actually understand Chinese, just as the human operator of the Chinese room, aided by the instructions, does not understand Chinese. Even assuming a computer can speak perfect Chinese, the "Chinese room" argument holds that it still does not have understanding of Chinese; it is only simulating understanding. Chinese is not special, of course: the same argument can be applied to basically anything that a computer might otherwise be said to understand.
A nuance that I want to add is that the computer may not only be using a program but also what I'll call "program-owned state": any information stored by the computer that is used by the program. Program and state are very similar - the computer represents the program as information, too - but here we are assuming that the exact same program is always run, whereas the program might leave behind different state for the next run than the state it started with. In the room analogy, the room can of course contain information in human-readable form; tables full of data, perhaps.
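To make the program/state distinction concrete, here is a minimal sketch of my own (not from any real system; the file name and format are made up): the instructions never change between runs, but each run picks up whatever state the previous run left behind and may leave different state for the next one.

```python
# Minimal sketch: a "program" whose behaviour depends on program-owned state
# that it loads at the start of a run and saves, possibly changed, at the end.
# The state file name and its contents are hypothetical.
import json
import os

STATE_FILE = "room_state.json"  # where the program-owned state lives

def load_state():
    # pick up the state the previous run left behind, or start empty
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"seen_phrases": {}}

def save_state(state):
    # leave (possibly different) state behind for the next run
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def run(input_text):
    # the program itself is fixed; only the state it consults and updates changes
    state = load_state()
    count = state["seen_phrases"].get(input_text, 0) + 1
    state["seen_phrases"][input_text] = count
    save_state(state)
    return f"(reply produced by consulting state; seen this input {count} time(s))"
```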
One defense against the "Chinese room" argument, the one I'll be discussing, is: "the *room* understands Chinese".<b> This might seem absurd, but consider: a computer without a program to run on it is basically useless, so for the purpose of conversing in Chinese, it makes more sense to consider the computer, the program and the program-owned state as one whole, rather than thinking of the computer as merely consulting the program. Considered as one whole, this thing is indistinguishable from something that understands Chinese, at least in terms of the output it produces. So it may seem reasonable to say it understands Chinese. However, in the room analogy, this seems rather unintuitive: the operator together with the objects in the room, considered as one thing, somehow understands Chinese? But then again, someone who tests the room finds that it acts exactly like someone who speaks Chinese.
In fact, I will go a bit further and argue that the objects in the room *alone* understand Chinese. After all, you can replace the operator and the room will still work; conversely, the operator without the room does not understand Chinese. Whether you are willing to call it understanding or not, this 'understanding' is *contained* in the program and program-owned state. As it turns out, it is often mostly the state: at least at present, "machine learning" dominates, which basically means programs building up state containing the 'understanding' over many runs.
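As an illustration of that point (my own sketch, not anything from the original argument), imagine the "instructions" as a trivial lookup loop, with all of the apparent understanding living in learned state. The table below is a made-up stand-in for trained parameters; swap out the operator function, or the person executing it, and nothing changes.

```python
# The "instructions" are a trivial loop; the behaviour is carried by the state.
# learned_state is a hypothetical stand-in for parameters built up over many runs.
learned_state = {
    "你好": "你好！",            # greeting -> greeting
    "你会下棋吗？": "会一点。",  # "Can you play chess?" -> "A little."
}

def operator(message, state):
    # The operator (or computer) just follows the instructions: look the message up.
    # Replacing this function changes nothing; the 'understanding' sits in `state`.
    return state.get(message, "……")

print(operator("你好", learned_state))
```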
Note that I'm kind of dancing around the distinction between "understands" and "contains understanding". It is intuitive to only think of active things as understanding something; rather than say that the program, which is ultimately just a set of instructions, understands Chinese, it seems more logical to say that the computer running the program understands Chinese, because the computer is doing things. If you agree that the operator with the room, considered as a unit, understands Chinese but the operator alone does not, then one could either say that the instructions and other objects contain a sort of "passive" understanding of Chinese that becomes "activated" when the operator uses them, or one could say that it's the objects that understand Chinese. Both are strange, but the latter option seems a bit more logical to me.
This 'understanding', if you want to call it that, is different from human understanding. For humans, using human language is very natural, which it will never be for computer programs. The way a program arrives at an output can be nothing at all like human thinking; it may seem illogical, chaotic. On the other hand, one can think of a hypothetical program that accurately simulates a human mind, in which case its understanding of things would presumably be very humanlike.
Ultimately, though, this comes down to whether you think understanding can only be evaluated through what someone or something is capable of. If you do, then a computer (running a program that utilises state) that produces responses completely indistinguishable from those of a person who understands Chinese does in fact understand Chinese. If you do not, then it only simulates understanding of Chinese. I find myself holding the former position: that something "acting as if it understands" simply understands, that the internals do not matter. One particular example that I have in mind is the game of chess, for which nowadays there exist computer programs that beat every human on Earth; the distinction between 'real' and 'fake' understanding seems meaningless when the 'fake' understanding can be far deeper and better than the 'real' understanding of any of us. But that does mean broadening my concept of 'understanding' so much that it can be ascribed to passive information.
<a> the "Chinese room" argument originates from John Searle
<b> not original either; the exact origin is unclear