Cracking the Fortune Cookie

A Response to John Searle’s Chinese Room Analogy

John Searle, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, vol. 3 (1980), pp. 417–424.

In a famous 1980 paper titled “Minds, Brains, and Programs”, the philosopher John Searle proposed a now-notorious thought experiment, known as the Chinese Room, concerning the possibility of artificial intelligence. Searle has no objection to “weak AI”, the claim that a properly programmed computer can help teach us about the mind; but he rejects “strong AI”, the claim that a properly programmed computer actually would be a mind, with cognitive states just like those of humans, and in this thought experiment he purports to prove that such a thing is impossible.

The Chinese Room is described as follows: Imagine that a person is locked in a room with a slot in the door. At regular intervals, a slip of paper covered with indecipherable squiggles comes through the slot. The person in the room looks up these squiggles in a book they possess, which instructs them to write a different set of squiggles on the paper and send it back out through the slot. As far as the person knows, they are just processing meaningless symbols; but unknown to them, the squiggles are Chinese characters, and they are actually carrying on a conversation with a Chinese speaker outside the room. The point of this analogy is that the person inside the room is acting just as a computer acts, processing symbols according to a set of rules. But this person does not understand what they are doing, and therefore, Searle argues, a computer could never understand either. He concludes that a computer, even one that we could carry on a normal conversation with (i.e., a computer that could pass a Turing test), could never be conscious, could never understand, in the way that a human being does.

However, I do not agree with this analysis. I have just one request for Searle and his supporters: I want to see this marvelous book.

Even if we disregard the question of how unimaginably huge such a book would have to be, there are several categories of questions to which it would seem no book, regardless of how much effort went into its creation, could give correct and convincingly human-like answers. For example, one could ask the same question multiple times; a human being would either rephrase the answer or become frustrated, or both. Also, one could ask a question whose answer depends on contextual information (for example: “Would you please estimate how much time has passed since the beginning of our conversation?” or “Could you please rephrase the last question I asked?”).

If, as postulated by Searle, a Chinese Room can pass a Turing test, then it would have to be able to answer repetitive and context-dependent questions correctly. But if the Chinese Room works in the way Searle describes, this is not possible. A book containing a static list of questions and answers – in effect, a list of rules reading “If you see X, do Y” – will unfailingly advise Y every time it is confronted with X. Therefore, a Chinese Room could easily be unmasked by asking it the same question repeatedly and observing that it gives the same answer every time. And it would be utterly helpless to answer context-dependent questions in a convincing way; it could only make vague, general statements which would be easily recognized as such. Either way, a Chinese Room masquerading as a conscious person could easily be detected, and thus could not pass a Turing test. It would neither be conscious nor seem to be conscious, and hence would tell us nothing at all about the feasibility of true artificial intelligence. That is why I ask Searle and his supporters: what does this book look like? How does it advise responding to queries such as these?
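
To make the problem concrete, here is a minimal sketch of the Chinese Room as Searle describes it: nothing but a static table of “if you see X, do Y” rules. The rulebook entries and the Python rendering are my own illustrative assumptions, of course; the point is only that a memoryless table must give the identical reply every time a question is repeated.

    # A stand-in for Searle's book: a fixed table of question-to-answer rules.
    RULEBOOK = {
        "How are you today?": "I am doing well, thank you.",
        "What is your favorite food?": "I am very fond of dumplings.",
    }

    def chinese_room(question):
        # No memory and no state: the same input always produces the same output.
        return RULEBOOK.get(question, "I am not sure what you mean.")

    # Repeating a question unmasks the room: the answer never varies.
    for _ in range(3):
        print(chinese_room("How are you today?"))
    # Prints the identical sentence three times, which no human would do.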

What if we modified the Chinese Room so that it could pass this test? What changes would we have to make?

In light of the above challenge, the first change is obvious. The book in our Modified Chinese Room (MCR) could no longer be just a simple lookup table – in other words, it could no longer be memoryless. It would have to store some kind of state, some information describing the questions it has seen and answers it has given so far. But note, also, that memory is a necessary component of consciousness. Consciousness requires some minimal continuity of experience; an agent with absolutely no memory, whatever its intellectual capabilities, could not be said to be conscious.

But the mere maintenance of that state would be useless if it could not affect the answers that the MCR gives. Therefore, the MCR could no longer be a static list of responses; it would have to perform some kind of computation, combining its background lexical knowledge with the state information already stored, to come up with answers to questions.
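
As a rough sketch of what those two changes amount to, the following toy program keeps a record of when the conversation began and of which questions it has already been asked, and uses that state to vary its replies to repeats and to answer the elapsed-time question from the earlier example. Every name and canned reply in it is my own assumption, not anything Searle proposed.

    import time

    class ModifiedChineseRoom:
        def __init__(self, rulebook):
            self.rulebook = rulebook
            self.started = time.time()   # remembers when the conversation began
            self.times_seen = {}         # question -> how many times it has been asked

        def answer(self, question):
            if question == "How much time has passed since the beginning of our conversation?":
                minutes = (time.time() - self.started) / 60
                return "Roughly {:.0f} minutes, I would say.".format(minutes)
            count = self.times_seen.get(question, 0)
            self.times_seen[question] = count + 1
            if count == 1:
                return "As I said before: " + self.rulebook.get(question, "I do not know.")
            if count >= 2:
                return "You keep asking me that. Why?"
            return self.rulebook.get(question, "I am not sure what you mean.")

    mcr = ModifiedChineseRoom({"How are you today?": "I am doing well, thank you."})
    print(mcr.answer("How are you today?"))   # ordinary reply
    print(mcr.answer("How are you today?"))   # rephrased reply
    print(mcr.answer("How are you today?"))   # visible frustration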

With these two new tools at its disposal, it would seem that the MCR could pass a Turing test including repeated and context-sensitive questions. But are we still certain that this system is not actually conscious? After all, it answers questions put to it by extracting relevant information from the question, adding this information to its remembered state, and processing both the state information and its own background knowledge to produce a coherent reply. This seems very much like what human beings do in the same circumstance. Indeed, how could the MCR ever “pick out” the relevant information from a query unless it, in some way, understood what was being said to it? Though it might still be argued that such a system would not be conscious, it is no longer obvious that it could not be conscious, and that is all I seek to establish.

The Chinese Room belongs to a class of philosophical thought experiments that Daniel Dennett calls “intuition pumps”: analogies designed to elicit an intuitive conclusion in a simple realm and then transfer that conclusion to a more complex domain. While intuition pumps are an appealing tool, they are frequently used to misdirect; very often, the conclusion drawn in the simple problem is not straightforwardly transferable to the more complicated one. This is especially true in the domain of the mind, where our understanding is still so limited that “intuitions” about how such a system could or could not possibly work are as perilous as they are common. The true lesson of the Chinese Room is that we should not use our limited imaginations to set bounds on reality.

About Adam Lee

Adam Lee is an atheist writer and speaker living in New York City. His new novel, Broken Ring, is available in paperback and e-book.

  • Quath

    Great article. A lot of these types of problems come up. For example, I was debating with a person about “Free Will.” I wanted a definition to work with. So they defined free will as having the ability to choose. So I wanted a definition of a choice. All I got was a circular definition. So I went a different route.

    I gave examples and asked if it was an example of a choice or free will. I mentioned things like a computer turning on its fan when it got too hot. Or a virus choosing to invade a cell. Or a boulder choosing to bounce to the left. Or a worm choosing to wiggle.

    It was very similar to this article in that the grey areas needed to be brought back out.

  • Void

    It may interest you to know that programs containing memory have been available for ages, in the form of variables. It would be rather easy to construct a program that gets frustrated at the same question after being asked it a number of times. Could you explain context-sensitive questions in greater depth?

  • BlackWizardMagus

    But would it be a set number of times, like the fifth repeat elicits another automatic response? What if you rephrase the question?

    And I think the context-related questions are ones that would require more abstraction. Off the top of my head, I’m thinking of something like “you”; could you ever ask the guy in the room “How are you doing?”? Not really. You COULD ask, physically, but the response would be either completely vague or meaningless, if you even got one. It has no sense of self, a sort of context. Or time, like mentioned. You could ask an advanced computer to save your internet history for exactly thirty days, but you couldn’t ask it “What kind of sites did I look at a while ago?”. It doesn’t know what “a while” is, as it’s an abstraction. And if you phrased it more like “I keep getting these month-late credit bills…what sites did I look at a while back?”, we would know that if the bills are a month late, and they usually come at the end of the month, we should go back one to two months; but a computer would be baffled. As Adam said, eventually we could design a computer to do this, but it just doesn’t apply to the Chinese Room.

  • http://www.patheos.com/blogs/daylightatheism/ Ebonmuse

    I’m well aware that computers can store previous results in memory and incorporate them into calculations, even in a probabilistic way. But the Chinese Room, at least as Searle originally described it, doesn’t work like that. The Chinese Room is a set of simple, deterministic situation-action rules, instructing the person inside to do X whenever they see Y, and as I said, this sort of thing could easily be detected by repeating questions. A consciousness-simulator that took previous results into account would not be a Chinese Room but a Blockhead, which I plan to discuss in a later post.

    BlackWizardMagus did a fine job of explaining context-sensitive questions; they are ones whose answer depends on the prior exchanges in the conversation. “Can you define the word ‘hydrological’?” is not a context-sensitive question. “Could the word ‘hydrological’ be used to describe the subject of my last question?” is. Or, another example: “Based on what I’ve said so far, what kind of person do you think I am?” I can’t see how a Chinese Room could answer questions like this in a way that could not easily be detected and distinguished from the answers a human would give.
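
    To make that concrete, here is a small, purely hypothetical sketch of the minimum such a question demands: a running log of the conversation. A static rulebook has no such log, so it cannot even locate “my last question”, let alone describe what kind of person the questioner seems to be. The ConversationLog class below is an illustration, not a model of any real program.

        class ConversationLog:
            """Keeps the history a context-sensitive answer would have to draw on."""

            def __init__(self):
                self.questions = []

            def ask(self, question):
                reply = self.respond(question)
                self.questions.append(question)
                return reply

            def respond(self, question):
                # A drastically simplified context-sensitive case: recalling the
                # previous question. Anything subtler needs this history too.
                if question == "What was the last question I asked you?":
                    if not self.questions:
                        return "You have not asked me anything yet."
                    return 'You asked: "{}"'.format(self.questions[-1])
                return "Let me think about that."

        log = ConversationLog()
        log.ask("Can you define the word 'hydrological'?")
        print(log.ask("What was the last question I asked you?"))
        # -> You asked: "Can you define the word 'hydrological'?"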

  • Archi Medez

    Adam, heady stuff!

    You are quite right that Searle’s example is limited. The technology has advanced quite a lot since the late 1970s, but even then he could have added the type of “context”-dependent subsystems you suggest, e.g., robotic “sensory” systems could measure the passage of time, and this information could be accessed by a central system which processed the request, giving an update with each request. (And of course, like sci-fi writers, he could have easily imagined computers with plausible capabilities). Really, the appropriate analogy for Searle would have been a super-duper android with all the capabilities of exhibiting, externally, the behaviours that we take as indicative of those of a conscious agent. This is the Turing test.

    One significant problem is that the Turing test, even with the best of technology, is inherently limited. Turing was referring to external inputs and outputs, only. This was during the behaviourist era. (Many cognitivists’ thinking is still shaped quite a bit by the behaviorist tradition–Dennett, Pylyshyn, etc.). We now know a great deal about how the brain works through direct study of the human brain in action during cognitive tasks, and we can infer a great deal from the study of non-human animals. We aren’t just looking at inputs and outputs of a black box. We are looking at the actual operations of the brain as they unfold in real time. So the question of processing, rather than just assessment of inputs and outputs, is important. It appears that certain types of processing are associated with conscious states, and others are associated with peripheral and/or non-conscious states. In addition, certain areas of the brain appear to be important for different aspects of consciousness. Taking both the processing and the neural structures and organization into account, the activation of certain neural assemblies working in concert in certain ways is necessary for consciousness. Of course, Hebb figured out all of this by 1949, at least schematically, but lots of the cognitive psychologists and philosophers weren’t familiar with his work.

    Another limitation of the Turing test is that, even if we extended it to include mimicry of key aspects of processing (i.e., the android’s neural net does the critical things and has the critical components associated with consciousness that humans’ neural nets have), we are still looking at this, as scientists, from the outside. Is first-person experience necessary to absolutely affirm the existence of consciousness? That is, would the entire question of whether our android had consciousness be ultimately only answerable by our android to itself? (The “problem of other minds” is a vexing one. Practically, and scientifically, it is not a major problem; we have ways of ascertaining to an acceptable level of certainty whether some human is conscious. We can even assess more precisely the limitations of their consciousness. But this is inference, not experience. Ultimately, I would have to somehow experience your consciousness directly. Now, that sounds ridiculously fantastic, but when it comes to assessing consciousness in an android, we may need to devise some way to do that–experience the android’s consciousness directly–if it is at all possible). BTW, do I think androids will ever be built that are conscious? I think it’s possible. And I’ve been met with disdain and disbelief for expressing that view, but my hunch is that it’s only a matter of time, scientific development, and technology. Whether we can be absolutely 100% certain that such an android is conscious is another matter. However, if we can get beyond knowing the correlates of consciousness and can develop a proper scientific theory of the mechanisms which cause consciousness, and test and prove it, then we will have made significant progress toward answering the question.

    One thing I will add, in defence of Searle, is that there are other lessons to be learned from his imperfect analogy. He was not merely addressing the issue of consciousness. He was also addressing what is now known as the symbol grounding problem. Actually, there are at least three interrelated aspects of it: grounding, transduction, and differentiation. Take the word pomme. It is a French noun that refers to the fruit which in English we call apple. But if you do not know French, you would have no idea what that meant. You would need someone to show you the referent (or find out somehow), establish a memory of that referent, attach that memory to a memory of the appropriate word (whether in French, English, or both). Each time you hear the word, and process it meaningfully, the referent (i.e., neural instantiation of it) to some extent becomes activated. But if you didn’t know the word when you first heard it, you wouldn’t know what “pomme” referred to, i.e., there would not be an appropriate neural connection set up between the pomme and referent assemblies. Searle’s main point in this regard is that traditional symbol processing systems, as proposed in cognitive theory at the time, didn’t deal with the problem of the representation of the referent. These theories dealt with cognitive representations as arbitrary word-like units connected to other arbitrary word-like units. This would be like having a dictionary without any escape route; i.e., the words lead you to other words, but not to what the words refer to.
    Now, the solution to this seems simple enough: Just connect word-like symbols to representations that capture “multi-media” information from experience that is not word-like, similar to a multi-media encyclopedia with the relevant sensory, motor, and emotional and bodily state information, as well as the ability to generate contexts to interpret word usages, and so on. Surprisingly, though, one camp of mainstream cognitive psychologists and scientists was unwilling to do this. They would not allow any symbols that were not arbitrary, word-like, and amodal into the cognitive system. (This bias was probably traceable to the linguistic and anti-empiricist bent of philosophy through much of the mid-20th century). The perceptual, motor, and emotional systems of the brain were assumed to be fully non-cognitive and separate from the cognitive system, which supposedly used a kind of predicate calculus (a good example of this view is expressed as late as 1988, by Fodor and Pylyshyn). This separation only aggravated the symbol grounding problem, which was aggravated further by cognitive psychologists’ and cognitive scientists’ general ignorance, lack of interest, or dismissal of neural science. This problem was solved at least in principle by proposals that focussed on the neural instantiation of mental representations, linguistic or non-linguistic (e.g., see Barsalou, BBS, 1999). Dennett responded to this theoretical proposal by saying, in effect, I’ll believe it when I see a computer model…and since then there have been computer (i.e., robotic) models that show the necessary type of grounding (word apple–image apple). (It does not, however, answer the more difficult question of consciousness). More importantly, we know that when people think of certain objects, when only processing the names of those objects, appropriate perceptual and motor areas of the brain become significantly activated.

    Searle’s Chinese Room thought experiment was probably good enough to address the symbol grounding problem (i.e., to illustrate the nature of the problem to his audience, who were heavily influenced by 1970s AI, behaviourism, and linguistic philosophy), and thus bring it to psychologists’ attention, but it did not adequately address the related problem of consciousness.

  • Archi Medez

    Just to clarify what I mean by “appropriate” perceptual and motor areas of the brain being activated: when people read and understand words for hand-held tools, the areas of the brain that deal with manipulation become significantly activated. When people read the words hand, foot, tongue, etc., the primary and sensory-motor areas of the brain become significantly active in the subregions that deal with the hand, foot, tongue, etc., respectively. Pulvermuller and colleagues have done work on this in the past few years.

  • BlackWizardMagus

    http://www.titane.ca/concordia/dfar251/igod/main.html

    Here is actually a decent example of this; sometimes, it looks like it’s a real person, but certain trends come out after a few minutes.

  • http://www.applecidercheesefudge.blogspot.com Dr Pretorius

    I’ve replied to this post here. Briefly, though, I think you somewhat misrepresent how Searle describes the Chinese Room and the conclusion he draws from it, and I’m not sure how your objection at the end differs in practice from the Systems reply.

  • http://www.patheos.com/blogs/daylightatheism/ Ebonmuse

    Hello Dr Pretorius,

    Thanks for your comment. If I read your post correctly, we’re not disagreeing about whether a simple list of if-X-then-Y rules could simulate something that could pass a Turing test; we’re only disagreeing about whether the Chinese Room works that way. Based on Searle’s original paper, though, I still maintain that this is what he seemed to have in mind. Consider the following excerpt:

    Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a “script,” they call the second batch a “story,” and they call the third batch “questions.” Furthermore, they call the symbols I give them back in response to the third batch “answers to the questions,” and the set of rules in English that they gave me, they call the “program.”

    If you read the original paper, you’ll see that Searle was primarily writing about lexical-analyzer programs that could extract pertinent information from a story they were given in order to correctly answer questions about that story – a function that would not require taking repetitive questions or contextual information into account, although a system that could pass a full Turing test would have to do that. You’ll notice that, in the above excerpt, Searle speaks specifically of rules which allow him to correlate information from the symbol batches he was given, and that does sound to me very much like the situation-action rules I wrote about. No mention is made of the sorts of complexities that would be required for a full Turing equivalence, such as memory of past questions and answers, or self-modifying rules. Perhaps Searle just didn’t think to include things like this, but I can only respond to what he’s actually proposed.

    I also have a comment about something from your post:

    Memory in the sense that the Chinese room system would require, or in the sense that a computer has memory, is a very different sort of thing than what we speak of when we talk about someone remembering something (or if it is not, the claim that it is not is as yet deeply unjustified). After all, in the sense that a computer remembers certain values, it is also true that my chest of drawers remembers how many clean shirts I own: it is a useful metaphor, but it is not a statement of fact to say that some computer remembers some piece of data.

    I don’t think it would be accurate to say a chest of drawers “remembers” anything, but I do think a computer does (although the term might perhaps be attacked for excessive anthropomorphizing). The difference is that a computer stores information in symbolic form, but a bureau does not. If my chest of drawers had an LED readout on the front that displayed how many pairs of clean socks it contained, then I do think it would be accurate, in a sense, to say that it “remembers” that information.

    In general, I agree that the systems reply is an accurate characterization of my post. That has always struck me as the most sensible reply to the Chinese Room; the fact that the person in the room does not understand Chinese seems no more relevant to me than the fact that no single neuron in my brain understands English.

  • http://gadianton2.tripod.com SidW

    The point of Searle’s thought experiment here is to distinguish between syntax and semantics. It doesn’t matter how fast or sophisticated the computer’s memory is; the architecture is still serial symbol manipulation that (tacitly) understands syntax, and there is, intuitively, no semantic content. There are various solutions that might give it semantic content, but none of them are knock-down responses to Searle. His thought experiment is an important contribution. According to the SEP, it has been discussed by cognitive scientists more in the last 25 years than any other idea.

    Dennett, by the way, doesn’t believe there is a solution along the lines of the responses here, i.e., that a computer might explicitly get semantics if it were designed right. His “solution” is to say that all semantic content is derived. That is, “meaning” is something we attribute via interpretation to computers or people.

