Searle's Chinese Room

A fun video illustrating the classic analogy:

Your Thoughts?

  • http://aceofsevens.wordpress.com Ace of Sevens

    I never got this argument. The book would have to be either infinitely large or always give the same response to a given note, without regard for the context of the rest of the conversation.

    • Kevin

      Similarly, what if someone asks a question that relates to time, such as “what day is it?” The answer will change, and the person will not have the tools to answer it. This calls into question whether the thought experiment is even logically possible in its original formulation. If the instructions tell him to look up the day of the week and respond with character X, he knows what the answer to the question is and he learns what the character means. I’m not sure how much he could learn, so the point remains that the intelligence he portrays via the book exceeds his actual intelligence. However, doesn’t that sound like most people when they have access to Google or quantitative software? The kicker is that in order for the book to prepare the individual with such a variety of responses, it would have to be more vast than Google.
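
      To make the two cases concrete, here is a minimal sketch (the phrasebook entries, rule, and responses are hypothetical, invented only for illustration): a context-free lookup table can never answer “what day is it?”, while a rule that tells the operator to consult today’s date both answers it and, as noted above, leaks some of the question’s meaning to him:

      # A fixed lookup table versus a rule that consults an outside observation.
      import datetime

      # Pure lookup: the same note always gets the same canned reply.
      PHRASEBOOK = {
          "你好": "你好！",                 # "hello" -> "hello!"
          "你叫什么名字？": "我没有名字。",  # "what is your name?" -> "I have no name."
      }

      WEEKDAYS = ["星期一", "星期二", "星期三", "星期四", "星期五", "星期六", "星期日"]

      def respond(note: str) -> str:
          # A rule that refers to an observation: to answer "what day is it?"
          # the operator must look up today's date, at which point the rule has
          # effectively told him what the question means.
          if note == "今天星期几？":
              return WEEKDAYS[datetime.date.today().weekday()]
          # Otherwise fall back to the context-free table, which repeats itself
          # and fails on anything not anticipated in advance.
          return PHRASEBOOK.get(note, "请再说一遍。")  # "please say that again"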

    • http://verbosestoic.wordpress.com Verbose Stoic

      Well, you have to remember that what the thought experiment was trying to show was that pure symbol manipulation misses the semantic, and therefore the real meaning, component of interpreting language. So, in that light:

      1) If the book would have to be infinitely large for a mechanism that is only doing symbol manipulation, then we couldn’t do it that way either, since we clearly have only a finite brain. So we don’t have an infinite “book” and yet we don’t have only limited answers; therefore a purely symbol-manipulating system isn’t going to be able to understand the way we understand.

      2) If context is required, then again it isn’t merely the case that we get in a symbol, translate it, and spit out an answer. We have to understand what the symbol MEANS. And Searle’s comment is that that sort of meaning is the semantic component that AI can’t do.

      So, either the machine can do it, and so a merely syntactical machine could produce the same behaviour, or it can’t do it, and so a merely syntactical machine can’t actually do language to the level that we can. If the latter, then Searle’s argument against purely symbolic AI systems is made. If the former, then the experiment is interesting … but then it doesn’t look like anything like what we’d consider understanding is going on in there.

    • Kevin

      “We have to understand what the symbol MEANS.”

      Correct me if I’m wrong, but I think that understanding what a symbol means entails that the individual is able to properly classify future objects using the correct syntax. If someone knows the difference between a car and a truck and you give them a picture of an automobile, then if they understand the concepts of car and truck, they should be able to properly classify the picture. I think that some computers are able to do this.
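
      That operational criterion (correctly classifying new instances) can be sketched with a toy nearest-neighbour classifier; the features (length in metres, gross weight in tonnes) and the numbers are made up purely for illustration, and real systems of course use far richer features:

      # Toy sketch: "understanding" car vs. truck as the ability to classify
      # new examples correctly, using 1-nearest-neighbour on made-up features.
      LABELLED_EXAMPLES = [
          ((4.5, 1.5), "car"),
          ((4.2, 1.3), "car"),
          ((7.5, 12.0), "truck"),
          ((9.0, 18.0), "truck"),
      ]

      def classify(features):
          # Pick the label of the closest known example (squared distance).
          def dist(a, b):
              return sum((x - y) ** 2 for x, y in zip(a, b))
          return min(LABELLED_EXAMPLES, key=lambda ex: dist(ex[0], features))[1]

      print(classify((4.8, 1.6)))   # -> "car"
      print(classify((8.0, 15.0)))  # -> "truck"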

    • Caravelle

      Humans have a staggering amount of knowledge about the world, about objects in it, about the properties of those objects, which of those properties are specific to one object and which ones are common to a class that object belongs to, how those objects relate to each other in time and space and function and causality and similarity… When we learn about a new word or object or property, I’m not sure how you can articulate what they mean other than by relating them to your pre-existing web of knowledge. And that’s manipulating symbols. Isn’t it?

    • andrewmahone

      If I recall correctly, the man in the room can also take notes. The system of man and rulebook is meant to represent a computing device, not simply a mapping of queries to responses. The rulebook would be an artificial intelligence program, which the man executes by hand (far too slowly, of course). It could be made to seem that the man understands Chinese himself by having rules that refer to his observations, or to entirely fictional ones. The rules could, for example, direct actions based on his observations that lead to his writing in Chinese that he is in a room, following rules in a book to produce responses.

  • DaveL

    I think the standard response from the AI community is that, while the philosopher may not understand Chinese, the system composed of the philosopher and the room together does.

    • http://verbosestoic.wordpress.com Verbose Stoic

      The weakness of this, though, is that we have absolutely no reason to think that any of the components understands, and no reason to think that simply sticking them together miraculously gets understanding, especially since this system is clearly not an intentional one and there is nothing in here that itself has meaning (by definition, since this is only manipulating syntax). So you need to show what it is in this system that makes understanding and meaning present, without simply assuming your conclusion by saying that if it looks like it understands, it must understand.

    • Caravelle

      @Verbose Stoic: I think the issue with that is what we mean by “understand”. If we’re referring to qualia, the subjective feeling we have when we understand something, then it’s easy to accept that machines couldn’t replicate that feeling, but it’s also hard to prove that a machine that appears to understand in every visible way doesn’t have that feeling.

      If we use a more operational definition of “understand”, so that this quality can be recognized from outward behavior or internal workings, then almost by definition a machine can theoretically “understand” – since we can always imagine a machine with that internal working or outward behavior.

      Searle seems to do that, by defining “understanding” as “always giving the appropriate response to inputs”. That seems like a reasonable definition of “understanding” to me, and I don’t exactly see by what standard the person in the room – or the room as a whole, whatever – doesn’t understand Chinese. They don’t have the brain “clicking” effect of understanding, sure, but what reason do we have to think that the brain “clicking” feeling is anything more than the effect of said brain integrating all the inputs to produce an appropriate output?

    • Andrew G.

      @ Verbose Stoic:

      The weakness of this, though, is that we have absolutely no reason to think that any of the components understands, and no reason to think that simply sticking them together miraculously gets understanding,

      I want a big rubber stamp labelled “Fallacy of Composition” to apply firmly to the forehead of people who spout this kind of nonsense.

      We have no a priori reason to think that a Life cell or a Rule 110 cell has any ability to do universal computing, and no a priori reason to think that sticking a lot of them together “miraculously” gets us universal computing – but in both cases it does. (Anything you can compute on a Turing machine can be computed on a sufficiently large Life field or line of Rule 110 cells.)

      What’s more, of the 87 other elementary cellular automata in the same family as Rule 110, we have absolutely no idea whether any of them can do universal computation. (We think that most don’t, but that’s based on nothing more than eyeballing the patterns they generate and classifying them as “too simple” or “too random”.)
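
      For concreteness, a minimal sketch of Rule 110 itself (an elementary cellular automaton: each cell’s next state depends only on itself and its two neighbours); the grid width and number of generations are arbitrary choices for illustration:

      # Rule 110: the fixed update table below is just 110 written in binary
      # (01101110), read against the eight possible three-cell neighbourhoods.
      RULE_110 = {
          (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
      }

      def step(cells):
          # Cells beyond the edges are treated as 0.
          padded = [0] + cells + [0]
          return [RULE_110[tuple(padded[i - 1:i + 2])] for i in range(1, len(padded) - 1)]

      # Start from a single live cell and print a few generations.
      row = [0] * 31 + [1] + [0] * 31
      for _ in range(16):
          print("".join("#" if c else "." for c in row))
          row = step(row)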

      Another example: if we consider the system known as Langton’s Ant as being two parts, a field of black and white cells, and an ant that walks over them flipping the colours according to one of the most trivial rules possible (turn left if standing on a black square, right if standing on a white square, flip the colour of the square as you step off it), we have no reason to believe that putting the two together gives us a universal computer – but it turns out that it does.
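
      And a minimal sketch of Langton’s Ant under exactly that rule (turn left on black, turn right on white, flip the square, step forward); the step count is arbitrary, and “left”/“right” assume screen-style coordinates with y increasing downwards:

      def langtons_ant(steps=11000):
          black = set()                 # coordinates of black squares; all start white
          x = y = 0                     # the ant's position
          dx, dy = 0, -1                # facing "up"
          for _ in range(steps):
              if (x, y) in black:
                  dx, dy = dy, -dx      # on a black square: turn left...
                  black.remove((x, y))  # ...and flip it to white
              else:
                  dx, dy = -dy, dx      # on a white square: turn right...
                  black.add((x, y))     # ...and flip it to black
              x, y = x + dx, y + dy     # step forward onto the next square
          return black

      # After roughly 10,000 chaotic steps the ant settles into the repeating
      # "highway" pattern, even though nothing in the rule mentions highways.
      print(len(langtons_ant()), "squares currently black")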

  • Robert B.

    I always wondered – how did Searle think humans understood languages? Brain magic? The Platonic Form of literacy? More seriously, was he a Cartesian dualist?

    Barring such magical thinking, clearly there is some physical process that can understand language. If we knew what it was, why couldn’t we program a computer to copy it?

    Also, not for nothing, there’s no such language as “Chinese.”

    • http://verbosestoic.wordpress.com Verbose Stoic

      Searle is not, in fact, a dualist. The point here was that we aren’t simply doing symbol manipulation, or simply reading syntax: humans also do semantics when understanding language. It was a reaction against the forms of AI at the time that basically said you could build a truly understanding AI simply by having lots and lots of look-up tables that took in symbols and spat out answers based on those tables. Searle thought that was absurd.

    • Robert B.

      Then Searle was quite right. But in that case, the video misrepresents him, with “no matter how well you program a computer, it doesn’t understand Chinese.” I’ve actually read Searle’s original essay, but it was so long ago, all the different ideas about what the thought experiment means got mixed up in my head and I don’t know what came from where. I think a student in that class actually argued that the Chinese Room proved dualism.

    • http://verbosestoic.wordpress.com Verbose Stoic

      Well, you have to recall that Searle was reacting against Turing machines, which only do symbol manipulation. Then you get that statement, and it isn’t unfair when we consider how computers do work, but since the time Searle wrote this, programming has advanced past the simplistic models that were used then, although it is still debatable if they get past simple symbol manipulation.

    • Caravelle

      Fide Wikipedia, Searle thought there was something about brains and biological systems that causes consciousness and that computers can’t emulate.

    • Richard Wein

      I think Verbose Stoic is mistaken, and the video does not misrepresent Searle in this respect. As far as I can see, Searle does not put any limits on the complexity of the program represented by the book. And he presents his argument as an argument against the possibility of Strong AI in general, not just against one type of proposed AI program.

      http://plato.stanford.edu/entries/chinese-room/

      I vaguely recall reading elsewhere that Searle suggests there must be something about the organic physical nature of the human brain that makes it capable of understanding, and that it’s not just a matter of executing the right algorithm.

      He seems to accept as a premise of the argument that the Chinese Room can respond correctly to complex natural language questions. But he seems to think this doesn’t necessarily count as “understanding” the questions. I don’t think he addresses the question of what “understanding” means. He just seems to rely on an intuitive feeling that “understanding” isn’t the sort of thing that a computer or Chinese Room can do.

    • http://verbosestoic.wordpress.com Verbose Stoic

      Richard,

      Your source says this:

      The second premise is supported by the Chinese Room thought experiment. The conclusion of this narrow argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation).

      Which, then, is pretty much what I’m saying: for Searle, programs could only do formal symbol manipulation as per the Turing Machine arguments — all algorithms can be reproduced on a Turing Machine with infinite tape — and so they can’t get understanding because they only know syntax and not semantics. All of the major forms of AI at the time were reducible to formal symbol manipulation and so none of them could do it. However, connectionist systems which don’t manipulate symbols at all might, in fact, be able to do it, and I was careful to say that while the AI systems have moved past the simplistic forms — like into connectionism — it is still debatable if they move past formal symbol manipulation. If they do, then Searle’s Chinese Room argument cannot apply to them.

      So, think of it this way: does a connectionist system work anything at all like a Chinese Room? No. Well, then it isn’t a case where the Chinese Room experiment can directly cast doubt on its method of understanding language; and a good thing, too, since connectionist systems at least attempt to work the way the brain actually does, and Searle is not a dualist.

    • Andrew G.

      but since the time Searle wrote this, programming has advanced past the simplistic models that were used then

      Not true in any applicable sense; everything in programming is still reducible to Turing machines (yes, even in quantum computing – anything computable in any current model of quantum computing can be computed on a Turing machine in finite time).

    • Andrew G.

      and so they can’t get understanding because they only know syntax and not semantics.

      But there’s no reason whatsoever to believe that semantics is not also a matter of symbol manipulation.

      In a sense this is an argument from incredulity – “I don’t see how symbolic manipulations can represent meaning, so obviously they can’t” – which is obviously an invalid argument.

    • Robert B.

      Referring to a Turing machine’s function as “symbol manipulation” is misrepresenting the comp sci. A Turing machine at its most basic level isn’t doing syntax, like the Chinese Room. It’s doing math. And the fact that a Turing machine manipulates bits is no more relevant to the question of whether understanding is going on than the fact that the philosopher’s book in the Chinese Room is written on paper rather than papyrus.

      Imagine if the Chinese Room also contained a very large chalkboard with drawings (not symbols) on it. Unbeknownst to the philosopher, this is a map, a reasonably accurate model of the world (or part of the world) centered on the Chinese Room itself. When he gets a note under the door, the philosopher’s instructions in the book involve altering the map – for one thing, of course, there now needs to be a drawing of someone standing outside putting notes under the door – and the responses change to reflect what’s in there. If the map shows a bowl of oranges nearby, for example, and someone passes the room a note saying “I’m hungry,” the philosopher might be told to write “Would you like an orange? They’re in a bowl to your left.” (Or maybe not – maybe it says “Well, you can’t have any of my oranges. I need them to feed my philosopher.”)

      The philosopher still does not understand the language. By my hypothesis, he doesn’t even understand the map on the chalkboard. But now I would say that the room as a system does understand the language. Now, as you put it, the room is doing semantics and not just syntax.
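
      A minimal sketch of the amended room (the model entries, rules, and phrases below are invented purely for illustration): the answers are driven by reading and updating an internal model of the world rather than by the input string alone, which is the sense in which the room as a system is doing semantics and not just syntax:

      # The "chalkboard map": an internal model of the world that the rules
      # consult and update. The rulebook never maps input strings straight
      # to output strings.
      world_model = {
          "orange_count": 3,        # what the map currently shows nearby
          "visitor_outside": False,
      }

      def handle_note(note: str) -> str:
          # Rule: receiving a note means the map must now show someone outside.
          world_model["visitor_outside"] = True
          if note == "我饿了":                          # "I'm hungry"
              if world_model["orange_count"] > 0:
                  world_model["orange_count"] -= 1      # the map changes...
                  return "要不要吃橙子？就在你左边的碗里。"  # "Would you like an orange? In the bowl to your left."
              return "橙子吃完了。"                      # "The oranges are all gone."
          return "我不明白。"                            # "I don't understand."

      # Ask four times: the fourth answer differs because the model, not the
      # input, has changed.
      for _ in range(4):
          print(handle_note("我饿了"))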

    • Richard Wein

      But there’s no reason whatsoever to believe that semantics is not also a matter of symbol manipulation.

      And not only that, but Searle’s own CR scenario adopts the premise that the CR can produce appropriate responses to natural language input, and I think he’s even allowed that it can pass a Turing Test. He apparently thinks it can do this purely by “syntax” without “semantics”. But unless he’s appealing to magic, the CR’s program must contain and utilise information about the meanings of the words in its input and output. How else would it be able to produce appropriate responses? And that’s semantics.

      Searle seems to have made up his mind that there’s some ineffable thing that only organic beings can have, and he uses words like “understanding” and “semantics” to denote that thing, without stopping to wonder what those words actually mean. So, no matter how sophisticated and human-like a computer’s responses may be, it will not be “understanding” or doing anything involving “semantics”. Even if a computer responds to a philosophical question with an original, sophisticated and detailed response, it still won’t have “understood” the question or its own response!

    • Andrew G.

      Searle seems to have made up his mind that there’s some ineffable thing that only organic beings can have, and he uses words like “understanding” and “semantics” to denote that thing, without stopping to wonder what those words actually mean.

      Which isn’t conceptually any different to, say, Plantinga, who calls the ineffable thing “intention” and then applies the fallacy of composition to claim that physical brains can’t have it because their constituent neurons don’t have it, therefore dualism.

    • http://verbosestoic.wordpress.com Verbose Stoic

      Andrew G.,

      Not true in any applicable sense; everything in programming is still reducible to Turing machines (yes, even in quantum computing – anything computable in any current model of quantum computing can be computed on a Turing machine in finite time).

      Let’s look at the full sentence that you’re replying to here:

      Then you get that statement, and it isn’t unfair when we consider how computers do work, but since the time Searle wrote this, programming has advanced past the simplistic models that were used then, although it is still debatable if they get past simple symbol manipulation.

      The first part says that his charge that they are all reducible to symbol manipulation is actually a fair charge when we look at how computers work, while the last part concedes that it is debatable whether the new, less simplistic models actually get past simple symbol manipulation. So your reply that they all still do symbol manipulation, like Turing machines, seems to be saying that my statement is untrue because of things that I explicitly conceded and commented on. That leaves you objecting only to my saying that we have advanced past those simplistic models, and if you don’t think that connectionist systems, or systems that reason over representations rather than simple inference rules, are advances over the initial simplistic model of inference engines, then I have no idea what you think AI is currently doing [grin].

      But there’s no reason whatsoever to believe that semantics is not also a matter of symbol manipulation.

      In a sense this is an argument from incredulity – “I don’t see how symbolic manipulations can represent meaning, so obviously they can’t” – which is obviously an invalid argument.

      Well, we actually DO have a few reasons to think that semantics and syntax aren’t the same thing. First, linguistics for some reason continues to separate the two. Second, we have many examples where things can indeed manipulate syntax and yet it seems unreasonable to think that there is anything semantic going on. For example, we can do it ourselves: blindly apply rules without having any idea what’s going on, and so without understanding what they mean. Also, calculators can easily manipulate the syntax of mathematics and do that symbol manipulation, but it’s certainly a bit odd to suggest that they have the semantics and really understand mathematics. And there are oodles of other examples. So, no, we have really good reasons to think that JUST manipulating symbols doesn’t give you meaning, and you have given no reason to think that it is just manipulating symbols that gives us human understanding, or understanding at all. Especially since connectionist systems, aimed at working like the brain does, technically don’t do symbolic manipulation at all (as nothing is a symbol to them, really).

    • Andrew G.

      You’re equivocating between two senses of “more powerful” here.

      A high-level programming language is more powerful than a low-level one in one sense: you can write programs faster and more compactly and conveniently.

      It is not more powerful in another sense: there is nothing which is computable in one and not in the other.

      Almost all progress in computing (including in AI) has consisted of making things more powerful in the first sense. In the second sense, the Turing machine is still the most powerful physically realizable computing device we know of. (If we weaken the requirement that the machine eventually halt, it becomes more powerful in terms of what problems can be solved, at the cost of not being able to tell if we have the right answer; but this doesn’t require any change to the basic mechanics and therefore doesn’t affect the argument.)

      It is certainly true that the early AI researchers massively underestimated the difficulty of the problem, but that has little to do with philosophical arguments about mind, consciousness or understanding.

      Well, we actually DO have a few reasons to think that semantics and syntax aren’t the same thing.

      We don’t treat physics and chemistry as the same thing, even though they obviously are.

      The fact that we can choose to manipulate symbols without reference to meaning (whether in studying language syntax or formal systems or whatever) doesn’t automatically imply that “meaning” is inaccessible via more complex manipulations.

      Of course this argument is all moot since there is no usable definition of “meaning” or “understanding” in play, other than “the room passes the Turing test”, which is already conceded.

  • Cuttlefish

    In one class I taught, two students were particularly frustrated. One swore up and down that xe absolutely “got it”, absolutely understood the concepts completely, really knew this stuff…. but could not pass a test on it to save xir life. The other one aced every test, answered any question, could do both problem and explanation… but claimed “I just don’t get it.” Xe claimed not to understand what xe was doing, while doing it perfectly.

    Which one understood?

    We learn the word “understand” from people who have no access to our feelings. What “understanding” means, as the referent for the word, is the publicly accessible ability to do the task. Without sensory neurons available to “feel” our thinking, we have only the byproducts of our actual thinking available to us, so we do not and cannot have our actual thought process available to us as the referent for “understanding”. It is hidden from us, and more importantly it is hidden from those who taught us the meaning of the word.

    Searle’s Room does understand. What he demonstrates is that there is no requirement for a sentient homunculus. Which is fortunate, since we do not have one.

    • http://verbosestoic.wordpress.com Verbose Stoic

      Well, I think that the point of thought experiments and examples like yours is to tease out whether it is indeed the case that the publicly accessible ability to do the task counts as understanding. First, I think a lot of people would take the person’s word that they don’t really get it despite their acing all the tests. Second, we all know of cases where you can do a task by following all the steps but don’t know why you do each step, or why you do A first and then B, or whether you can do B first and then A, and all sorts of things like that. We seem to think that understanding incorporates more of that sort of reasoning.

      There is a difference between “I know how to do X” and “I understand how to do X”. I may claim to know how to add fieldsets to HTML code, but argue that I don’t understand how to do that because I did it by copying it from somewhere else and it worked. I actually think I moved from “know how” to “understand how” this weekend with HTML, by copying something, having it not work, and then copying more, and then having to change it because it wasn’t working right, and then finally understanding what it was actually doing and then being able to tweak it to my own needs. So, in my case, I could indeed have been able to build a fully-functioning HTML GUI page and yet still say that I didn’t get how to do it … but it works, so who cares?

  • http://nwrickert.wordpress.com/ Neil Rickert

    I thought the video a bit too simplistic. It doesn’t get at the subtleties that Searle is trying to discuss. But then, I never found Searle’s argument to be persuasive either.

  • http://qpr.ca/blogs Alan Cooper

    I have always felt that the fact that Searle’s “Chinese Room argument” is taken seriously (as a real argument rather than just a pedagogical device to prompt discussion) brings the entire subject of philosophy into disrepute. The “Fallacy of Composition” identified in Andrew G’s response to Verbose Stoic was glaringly apparent in Searle’s own discussion in Sci Am years ago, and no subsequent defense of the “argument” has ever avoided it.

    • http://aceofsevens.wordpress.com Ace of Sevens

      Whether it’s a valid model of how language is used is the central question of the metaphor. I would think this is all relevant. Objections like the amount of time or size of the book (if finite) are distractions, but the rest seems fair.

  • stevegerrard

    You must ask the question: why have a person in the room? If you have developed the algorithm for processing questions in Chinese, surely you can set up a scanner for inputting the questions and a printer for outputting the result. The human in the room is just a clerk.

    The human is present to distract you, to make you notice that the human doesn’t understand Chinese, and to focus on that, instead of the capabilities of the language algorithm. In fact the human has nothing to do with it, and is dispensable.

    As for the chemistry, it is worth noting that a lump of granite and a human being are both entirely composed of protons, neutrons, and electrons. The difference is all in how those particles are arranged.

    • Richard Wein

      The human is present to distract you, to make you notice that the human doesn’t understand Chinese, and to focus on that, instead of the capabilities of the language algorithm. In fact the human has nothing to do with it, and is dispensable.

      Good point. I think that if Searle’s argument has any merit at all, it’s for exploring the intuition that there’s nowhere in the room for consciousness to be located. This intuition can be explored more easily once we remove the misleading man from the room.

      But you would need more than a scanner and printer. You also need a computer/robot to read the instruction book and execute the instructions. It would need enough intelligence for optical character recognition and interpreting the language the book is written in, as well as a device for turning the pages. Also bear in mind that the man in the room needs a notepad for working data storage. If the computer is strictly replacing the man, it would need a printer/scanner arrangement for recording its working data on paper. But putting a computer/robot in the room (albeit one far simpler than a man), could still cause problems. I would suggest an alternative.

      First, what’s gained by running our AI program in the Chinese Room rather than on an electronic computer? Why can’t we apply our intuition (that there’s nowhere for consciousness to be located) equally well to an electronic computer as to the Chinese Room? I think the main reasons are that (a) an electronic computer seems more brain-like, and (b) we can’t see anything happening when we look inside a computer. These reasons make it easier for our intuition to accept that a computer is like a brain, and so equally capable of consciousness. The Chinese Room doesn’t seem much like a brain, and its working is much more visible to us, so it’s easier to see that there’s no particular place for consciousness to reside. If we want to retain these features of the Chinese Room while getting rid of the misleading man, I suggest imagining that we run our AI program on a giant mechanical computer instead of an electronic one, replacing all electronic components with moving parts, including punched cards for memory. We make it big enough to go inside and see all the individual moving parts.

      Another difference between the Chinese Room and a brain or electronic computer is that it’s vastly slower. It would probably take millions of years to answer one question. That difference could be significant to our intuition, and some people seem to think it’s actually relevant to the question of consciousness. A mechanical computer would also be very slow, but perhaps for the purpose of our thought experiment we can imagine it having a speed control that allows it to run at unlimited speed (even ignoring physical constraints like the speed of light if necessary), so it can answer questions at human speed. But we can dial down the speed when we want to see what’s happening inside.

      Note that I’ve replaced “understanding” with “consciousness”, because I think the real issue here is what Chalmers calls “the hard problem of consciousness”. Searle’s ill-considered talk of “understanding”, “syntax” and “semantics” just gets in the way of clear thinking.

      With these changes, I think my Chinese-room-replacement could be a useful tool for thinking about consciousness. If functionalism (or something of that sort) is true, then it seems my mechanical AI should be conscious in the same way as a human brain (or at least a brain in a vat). But my intuition objects to the idea of a device being conscious when I can see that it’s just a collection of metal and cardboard moving parts. The question is whether to accept that intuition.

  • http://verbosestoic.wordpress.com Verbose Stoic

    Robert B.,

    Imagine if the Chinese Room also contained a very large chalkboard with drawings (not symbols) on it. Unbeknownst to the philosopher, this is a map, a reasonably accurate model of the world (or part of the world) centered on the Chinese Room itself. When he gets a note under the door, the philosopher’s instructions in the book involve altering the map – for one thing, of course, there now needs to be a drawing of someone standing outside putting notes under the door – and the responses change to reflect what’s in there. If the map shows a bowl of oranges nearby, for example, and someone passes the room a note saying “I’m hungry,” the philosopher might be told to write “Would you like an orange? They’re in a bowl to your left.” (Or maybe not – maybe it says “Well, you can’t have any of my oranges. I need them to feed my philosopher.”)

    Well, adding more things, like a representation of the world — and an ACTUAL representation of the world — are decent replies to the Chinese Room. For me, though, it comes down to the rules themselves. If something external to the system writes the rules and the system just follows them, then there is no reason to think it understands no matter how much context is required for it to do so. For me, what understands is what wrote the rules, not the system. So it would seem that I’m going to require the system to write the rules itself that it then follows — internally, not the rules of the language as a whole — and so at a minimum it will have to in some way “learn” the language. Then we can start thinking about whether it really understands it or not.

    Which leads to my reply to Richard:

    And not only that, but Searle’s own CR scenario adopts the premise that the CR can produce appropriate responses to natural language input, and I think he’s even allowed that it can pass a Turing Test. He apparently thinks it can do this purely by “syntax” without “semantics”. But unless he’s appealing to magic, the CR’s program must contain and utilise information about the meanings of the words in its input and output. How else would it be able to produce appropriate responses? And that’s semantics.

    Sure, it passes the Turing Test, but only because it has access to a list of every possible input and the output for it. Here, you seem to be tripping over the fact that, say, writing a massive dictionary of all possible inputs and outputs would be really, really hard, but this is a thought experiment, so all we need to do is think about how it would be if we COULD do that, and include all possible contexts and facts of the world in the look-up table. The system wouldn’t need to in any way know any of the meanings, because all the system is doing is accessing an externally defined table to give its responses, and in any case where all we do is access a table or list to give our responses we all know that understanding — and therefore, in this case, semantics — is not required.

    • Richard Wein

      Verbose:

      Sure, it passes the Turing Test, but only because it has access to a list of every possible input and the output for it.

      I see now that there’s a major difference between how you interpret the CR and how most people interpret it. You think the book contains a single (but vast) table, listing every possible Chinese question that might be asked, together with a corresponding answer. Most people take the book to contain a complex program, representative of AI programs.

      Here’s Searle’s description of the CR (from the SEP article):

      Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

      Note that he talks about “following the instructions in the program”. Searle’s express intention was to argue for a limitation on computer AI. Your interpretation is uncharitable to him, since it makes the CR an obvious straw man.

      I’ll grant you that Searle seems to have some simplistic ideas about AI, and sometimes describes the CR in misleading terms, so I can understand how you might have reached your conclusion. Still, if we accept your interpretation for the sake of argument, we can simply say that Searle is addressing a straw man, and we can move on to something more useful. Either way, the CR argument is a bad one, and at least your interpretation has the merit of making it more obviously bad (given the conclusion it’s supposed to deliver), so we can move on more quickly.

  • http://verbosestoic.wordpress.com Verbose Stoic

    So, on the “Fallacy of Composition” argument, here is how I see the debate as going:

    - Searle gives an example that we do think intuitively does not include understanding, and uses this to challenge various assumptions, including the one that if something gives responses like it understands it must understand (Caravelle, Searle doesn’t, in fact, assume that if it acts like it understands it must. That’s one of the things he’s challenging here, as he is explicit in the thought experiment that the behaviour is indistinguishable from the outside and so it acts like it understands, and yet it doesn’t).

    - The System Reply says that while the individual does not understand the system does.

    - My reply to that is that there is no reason to think that the system understands, because we don’t have any reason to think that putting those parts together is what creates understanding and that if we try to appeal to the fact that it gives the right answers you would be assuming your conclusion.

    - The “Fallacy of Composition” response is given.

    To which I now reply that you still have given absolutely no reason to think that this system understands without assuming the conclusion that if it acts — meaning, gives responses — like it understands, it must do so, which is one of the things under debate here. So, no, not that fallacy at all. Unless you are claiming that understanding is emergent behaviour, we really should be able to tell even after we’ve tried it why sticking these parts together gives us understanding and meaning, and we can’t here. Note that in all of the cases Andrew G. cites we can, indeed, show why doing that produces a system that, properly interpreted, does universal computing. We can’t do that here, at least not yet. Thus, the System Response, it seems to me, relies entirely on setting up test cases based on external behaviour and claiming that satisfying those is enough to demonstrate understanding, just like it is in the universal computing cases. Searle denies this, and it’s a view that I share, certainly for consciousness and possibly for intelligence as well.

    • Andrew G.

      Note that in all of the cases Andrew G. cites we can, indeed, show why doing that produces a system that, properly interpreted, does universal computing.

      Only after the fact, and only in the most limited sense of “if you do this, then the system emulates the behaviour of some other system which we’ve already proven to be universal”. Which is not really an answer to “why” at all. (And the only means we have to prove that a system is not universal is to prove that we can solve its halting problem.)

      Anyway, the point is that all arguments of the form “A can’t do X, B can’t do X, therefore A+B can’t do X” are invalid and illegitimate; you need to provide an explicit proof that A+B can’t do X. If we don’t have any reason to believe (based on the properties of A and B) that A+B can do X, then we have to admit ignorance rather than jump to the unsupported conclusion that it necessarily can’t.

  • Robert B.

    The point I was trying to make is that the Chinese Room seems to refute only one conceivable form an AI program might have – the kind that gets an input string and looks up the response. The Chinese Room doesn’t seem like it understands, because important properties of understanding have been left out of the thought experiment, not necessarily because they are impossible for computers. (Again, if Searle knew that, then fine, but that video is not the first time I’ve seen the Chinese Room presented as an argument against the idea that any computer program could have understanding.)

    Anyway, as far as I can tell, the only reason that natural language is such a big deal for AI is that Turing put it in his test. After all, a human who spoke oddly, or didn’t speak at all, might still be intelligent and understand things. (One remembers the Far Side cartoon where scientists are studying dolphin vocalizations by counting the frequency of certain sounds. Unbeknownst to the researchers, the dolphins are speaking Spanish.) Passing a Turing Test might be a sufficient condition for strong AI, but that doesn’t mean that designing a program to pass a Turing Test is a good way to get a true AI. Designing a program to, say, solve novel problems would seem to get at the nature of intelligence more truly than designing it to carry on a conversation. It seems like the whole debate, of which the Chinese Room idea was a part, is focusing on non-essentials.

    • John Morales

      Robert B.,

      The Chinese Room doesn’t seem like it understands, because important properties of understanding have been left out of the thought experiment, not necessarily because they are impossible for computers.

      Searle’s entire point was that if some process exists the output of which can not be distinguished from cognition under certain circumstances, then under those circumstances the distinction between cognition and that process is moot.

      (It’s a thought-experiment, and valid within its domain.)

    • John Morales

      [erratum]

      the output of which can not be distinguished from cognition under certain circumstances

      the output of which can not be distinguished from the output of cognition under certain circumstances

      Bah.

    • Richard Wein

      John,

      Searle’s entire point was that if some process exists the output of which can not be distinguished from the output of cognition under certain circumstances, then under those circumstances the distinction between cognition and that process is moot.

      [erratum incorporated]

      Are you sure you don’t need another erratum? Because at the moment you seem to be contradicting Searle. By hypothesis the output of the CR cannot be distinguished from that of a human. And the point of the CR argument is to show that, unlike a human, the CR is not capable of cognition (or understanding or consciousness). He would hardly make such a big deal of arguing that point if he thought the distinction was moot!

      If you doubt my interpretation, I refer you to these sources:

      (a) The SEP article on the subject: http://plato.stanford.edu/entries/chinese-room/
      (b) Two interviews with Searle: http://machineslikeus.com/interviews/machines-us-interviews-john-searle-0/page/0/2 , http://globetrotter.berkeley.edu/people/Searle/searle-con4.html
      (c) The abstract of Searle’s original 1980 paper: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6573580&fulltextType=RA&fileId=S0140525X00005756 (I’m too cheap to pay for access to the paper itself.)

      From the first interview:

      ‘Years ago I baptized the view that computation by itself is sufficient for cognition as “Strong Artificial Intelligence” (Strong AI for short). … The simulation of cognition on a computer stands to real cognition in exactly the same relation that the computational simulation of a rainstorm stands to a real rainstorm or the computational simulation of digestion stands to real digestion.’

      Far from the distinction being moot, he thinks that only one is “real” cognition.

    • John Morales

      Richard, same thing, different emphasis.

