Egnorance about the mind, meaning and the Chinese Room

The Chinese Room is a thought experiment in artificial intelligence. John Searle proposed it as a way to falsify the claim that a computer algorithm could be written that mimics the behavior of an intelligent human so precisely that we could call it an artificial intelligence. Searle asks us to imagine a log cabin (though it has been observed that it would have to be an enormous log cabin, perhaps a log aircraft carrier) in which a person sits. Around that person lie reams of paper full of rules in English, along with a story written in Chinese (or any other language the person doesn't understand) and a set of all possible answers to questions about that story. On one side of the cabin is a slot through which slips of paper come in; those slips contain questions in Chinese. On the other side of the room is another slot and a notepad.

Our hypothetical cabin-dweller's job is to take the note from the slot and use the instructions written on the reams of paper to compose a reply. Searle asks whether we would regard this arrangement as embodying some sort of intelligence if the notes passed out of the cabin in response to any note sent in are sufficiently sensible.

In Le Ton Beau De Marot, Douglas Hofstadter points out some problems with this. Because Searle puts a person into the system, we identify that person as the locus of whatever intelligence might be present. This appeals to a dualist's desire to think that intelligence must be some extrinsic force which breathes life into instructions. Searle then takes that intelligence out of the equation by stipulating that the person doesn't understand the language of the questions or answers. The point of the exercise, though, is not the person. The person in the room is an automaton and a McGuffin. The paper is where any artificial intelligence would lie, and Searle is careful to use all of a magician's arts to direct attention away from it. Since even a short story can generate an infinite number of questions, any set of instructions for answering them could not be a rote list of answers to a finite set of questions; it would have to be an adaptable set of rules for answering any question at all.

Stories also contain tremendous amounts of context. A story about people eating dinner might not specify any details at all about the table, but the reader generates those details mentally. The rules would have to explain how to describe those imagined details and the unstated conventions of the story's universe. The characters in the story are assumed to have brains and are probably around 5'9" tall (or maybe less if the story is set in China). The rules would have to include an excellent encyclopedia and a means to translate that encyclopedic knowledge into Chinese in response to questions. In short, the rules would not specify an answer for each possible question; they would specify a way to parse questions, to gather information in response, and to construct a grammatically and syntactically correct way to express the underlying ideas.
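
To make the shape of such a rule system concrete, here is a deliberately tiny sketch in Python. Everything in it (the three-stage parse/gather/compose pipeline, the fact tables, the function names) is my own toy scaffolding rather than anything Searle describes; a real rulebook would need each stage expanded beyond recognition:

```python
# A toy stand-in for the kind of rulebook the Chinese Room would need:
# parse the question, gather relevant facts, compose a grammatical answer.
STORY_FACTS = {
    "dinner": "the characters ate dinner together",
    "table": "a table is implied, though the story never describes it",
}

BACKGROUND_FACTS = {  # the "excellent encyclopedia" part of the rules
    "brains": "the characters, like any people, are assumed to have brains",
    "height": "an adult character is probably around 5'9\", or a bit less if the story is set in China",
}

def parse(question: str) -> str:
    """Crudely extract the topic the question is about."""
    for topic in list(STORY_FACTS) + list(BACKGROUND_FACTS):
        if topic in question.lower():
            return topic
    return "unknown"

def gather(topic: str) -> str:
    """Consult story details first, then fall back on background knowledge."""
    return STORY_FACTS.get(topic) or BACKGROUND_FACTS.get(topic, "the story does not say")

def compose(fact: str) -> str:
    """Wrap the gathered fact in a grammatically correct sentence."""
    return f"As far as the story goes, {fact}."

def answer(question: str) -> str:
    return compose(gather(parse(question)))

print(answer("What did the characters do at dinner?"))
# -> As far as the story goes, the characters ate dinner together.
```

Even this caricature has to separate parsing from knowledge from composition; a lookup table of question-answer pairs, by contrast, has nowhere to put the context and background knowledge the story leaves unstated.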

The more fundamental error lies in trying to subdivide meaning and intelligence. If we call this system intelligent, is the intelligence in the person, the slots, the logs of the cabin or the ink on the paper? To subdivide intelligence is to destroy it. The intelligence lies in the system of interacting parts. Hofstadter has pointed out elsewhere that "consciousness certainly seems to vanish when one mentally reduces a brain to a gigantic pile of individually meaningless chemical reactions. It is this reductio ad absurdum applying to any physical system, biological or synthetic, that forces (or ought to force) any thoughtful person to reconsider their initial judgment about both brains and computers, and to rethink what it is that seems to lead inexorably to the conclusion of an in-principle lack of consciousness 'in there,' whether the referent of 'there' is a machine or a brain." The intelligence does not lie in the inert paper or the uncomprehending person, it lies in the interactions between parts.

Michael Egnor, creationist brane serjun, is not a terribly thoughtful person, which may explain why he feels comfortable defending dualism using the Chinese room, and why he butchers the description:

Imagine that P.Z. Myers went to China and got a job. His job is this: he sits in a room, and Chinese people pass questions, written on paper in Chinese, through a slot into the room. Myers, of course, doesn't speak Chinese. Not a word. But he has a huge book, written entirely in Chinese, that contains every conceivable question, in Chinese, and a corresponding answer to each question, in Chinese. P.Z. just matches the characters in the submitted questions to the answers in the book, and passes the answers back through the slot.

In a very real sense, Myers would be just like a computer. He's the processor, the Chinese book is the program, and questions and answers are the input and the output. And he'd pass the Turing test. A Chinese person outside of the room would conclude that Myers understood the questions, because he always gave appropriate answers. But Myers understands nothing of the questions or the answers. They're in Chinese. Myers (the processor) merely had syntax, but he didn't have semantics. He didn't know the meaning of what he was doing. There's no reason to think that syntax (a computer program) can give rise to semantics (meaning), and yet insight into meaning is a prerequisite for consciousness. The Chinese Room analogy is a serious problem for the view that A.I. is possible.

The idea that every possible question and every "appropriate" answer to that question could be contained in any book, no matter how "huge," is laughably egnorant. Searle at least was smart enough to try to restrict the scope of the questioning (though any interesting story would be set in a world about which an infinite number of questions could be asked).
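
Some back-of-the-envelope arithmetic shows just how hopeless the "huge book" is. The figures below (a vocabulary of 3,000 common Chinese characters and questions of at most 20 characters) are my own illustrative assumptions, not anything Egnor or Searle specifies; the point is only the scale:

```python
# Rough upper bound on the number of possible questions: all character
# strings of length 1 to 20 drawn from a 3,000-character vocabulary.
# Only a minute fraction of these are well-formed questions, but even
# a one-in-a-trillion-trillion fraction is still absurdly large.
VOCAB = 3_000    # assumed count of Chinese characters in common use
MAX_LEN = 20     # assumed maximum question length, in characters

possible_strings = sum(VOCAB ** n for n in range(1, MAX_LEN + 1))
print(f"{float(possible_strings):.3e}")     # about 3.5e+69

ATOMS_IN_EARTH = 1e50                       # order-of-magnitude estimate
print(possible_strings / ATOMS_IN_EARTH)    # ~3.5e+19: one entry per atom of the Earth still falls short
```

However you tune the assumptions, a book that pairs every question with a canned answer runs out of universe long before it runs out of questions.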

[Photo: a duckling]

Searle's original argument was an argument from personal incredulity. Egnor multiplies that incredulity with his own ignorance, and slaps on the fallacy that anything that hasn't been done in 50 years won't happen.
Egnor's atomistic view seemingly also extends to meaning, even though that's easy to disprove using the photograph above.

I think we can agree that this digital photograph has meaning. No one is likely to object if I say that it represents a duckling, even though you, dear reader, didn't actually see me photograph this duckling. The fact that the photons entering your eyes did not actually reflect off its fluffy feathers doesn't prevent you from attaching meaning to it, nor does the fact that it is actually a 200x187 grid of colored dots.

[Photo: the same image with a quarter of its pixels removed]

It probably doesn't stop you from attaching the same meaning if I remove a quarter of the dots.

[Photos: the image with half, and with three quarters, of its pixels removed]

Or half of them or three quarters of them. It's harder to be sure what you are seeing now, but I suspect that you can still work it out.

Does that mean that the meaning of this picture resides in one of the pixels I didn't remove?

No. That can't be it: for the final picture I revealed the 25% of pixels that were hidden in the first obscured photo and hid the other 75%, so the two images share no pixels at all. If the meaning of the image resided in any single pixel, at least one of those two images would lack it and would be meaningless.
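
For anyone who wants to reproduce the pixel experiment, here is a short sketch using NumPy and Pillow. The file name duckling.jpg and the complementary-mask trick come from the post; the function, its defaults, and blanking hidden pixels to white are my own assumptions:

```python
# Hide a random fraction of an image's pixels; with complement=True, hide
# exactly the pixels that the same call would otherwise have kept.
import numpy as np
from PIL import Image

def mask_pixels(img, fraction, seed=0, complement=False):
    """Return a copy of img with roughly `fraction` of its pixels blanked out."""
    pixels = np.array(img.convert("RGB"))
    rng = np.random.default_rng(seed)          # same seed -> same mask
    hide = rng.random(pixels.shape[:2]) < fraction
    if complement:
        hide = ~hide                           # the complementary set of pixels
    pixels[hide] = 255                         # blank hidden pixels to white
    return Image.fromarray(pixels)

duck = Image.open("duckling.jpg")              # the 200x187 photo from the post
mask_pixels(duck, 0.25).save("duckling25.png")                  # a quarter hidden
mask_pixels(duck, 0.50).save("duckling50.png")                  # half hidden
mask_pixels(duck, 0.25, complement=True).save("duckling75.png") # the other three quarters hidden
```

Because the third call reuses the first call's seed, it hides exactly the pixels the first call kept, which is how the complementary final image was made.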

Meaning resides not in the individual pixels, but in the arrangement of pixels, their interactions on the screen, in the eye, the optic nerve, and the brain. Your description of the photograph draws on those interactions and adds another layer of interactions between neurons in the brain and on into the tongue, lips, lungs and cheeks. Slicing those interactions apart and expecting to cleanly separate intelligence from other aspects is as silly as thinking you can separate syntax from semantics or meaning from context.

Comments

Excellent! Absent tongue, lips, lungs, cheeks--absent a BODY--artificial intelligence will remain an uphill pull. Syntax will remain separate from semantics, meaning from context, without body knowledge and the senses used to gather it.

As one programmer famously put it, "It takes a page of instructions to tell the machine that when Mary had a little lamb, she didn't have it for lunch."

Dirk Hanson
http://www.dirkhanson.org

Oh that's great!

The gaping hole that is the "infinite instructions book" was so glaringly obvious - how could one miss it so carelessly!?

And on a side note, taking it to the opposite team's turf ;) - it's funny how in this perspective, if one assumes that there exists an all-knowing deity, one could never tell if it's actually intelligent, or just has an "infinite instructions book". :)

Of course "the opposite team" don't have a problem with the infinite instructions book, since they consider themselves to have just such a thing. And amazingly compact it is, too.

And they don't understand how other people can read the same book and get such different conclusions.

It is important to remember that what Searle was attacking in this example was not the idea that computers could be intelligent--whatever that means--but that they could actually UNDERSTAND Chinese (or anything at all). Even if you argue that the instruction book would have to be infinitely large (which I don't agree with--suppose the person in the room were to just memorize the relational elements that would produce a "reasonable" answer, just as a computer would), that doesn't detract from the central point of the experiment. It doesn't help anything to ask "where does the intelligence lie," because that isn't what Searle cares about; he cares about understanding, and that doesn't lie anywhere.

Jon, if 'understanding' doesn't lie ANYWHERE, then it clearly doesn't lie in people either, rendering the thought experiment meaningless. If, however, the thought experiment is trying to demonstrate that the 'Chinese room' lacks some quality that people 'obviously' have, it fails for the reasons Josh describes above.

By mapollyon (not verified) on 28 Jun 2007 #permalink

Re "understanding"--Searle himself writes:

"The reason the man [in the Chinese Room] does not understand Chinese is that he does not have any way to get from the symbols, the syntax, to what the symbols mean, the semantics."

Which is, I think, part of what Josh was saying.

Jon, I think the distinction between intelligence and understanding that you draw is nonexistent, and Searle would be drawing a false dichotomy.

Josh,

I disagree--the difference is a difference of kind, even. "Intelligence" measures sheer informational processing capabilities, the sorts of things that computers are already quite good at doing. Any system, given sufficient complexity, can be classified as intelligent--i.e. any system can generate solutions to problems based on symbol manipulation. Understanding, however, is another matter entirely, and requires an inherent semantic level on top of the symbolic processing--it is this to which Searle would object.

By this definition we already have AI, right? My laptop, or your kid's PS3, would be intelligent if we use sheer computational muscle as the relevant metric, which suggests that there has to be more to it. The general problem in emergence is figuring out what more is needed than sheer volume to generate interesting emergent properties and a meaningful separation of new hierarchical levels. There are a lot of particles on a beach, but they don't show emergent behavior. Neurons in the brain do. Something in that emergence constitutes the intelligence.

I think it's probably an error to draw sharp lines between symbol manipulation, understanding, semantic, syntax, etc. To manipulate symbols in the real world requires the ability not just to manipulate them, but to properly assign symbols to conceptual categories, recognizing that the same object can fulfill many conceptual functions under varying circumstances. Based on context and other factors, an intelligence has to choose the appropriate concept and apply the right manipulations, often putting the concept into new categories in various stages of the mental process.

Humor is an obvious case where this happens, but if you watch your own behavior, you see this taking place all the time. Syntax and semantics are different ways of talking about the same thing. They are useful concepts, but it is a mistake to treat them as if they had genuine independent existences, that one could have a syntax without semantics or vice versa.

Searle's argument is an appeal to intuition, with no logical justification. It ultimately boils down to Searle's statement: "Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?"

The obvious rejoinder is that a perfect simulation of understanding must have all of the properties of understanding, so it makes no sense to say that it can't understand.

He also commits the fallacy of division, which Josh alludes to. Referring to the fact that computers can be made of various materials, Searle says, "Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place..." But neurons don't have any more intentionality or understanding than water pipes do.

By secondclass (not verified) on 10 Jul 2007 #permalink