Blockbuster

A Response to Ned Block’s “Blockhead”

In a classic 1981 paper titled “Psychologism and Behaviorism”, the philosopher Ned Block proposed a thought experiment that has been dubbed “Blockhead” in his honor. Block’s experiment has to do with the Turing test, itself a classic proposal on how to test for the presence of intelligence in a machine (or some other suitable non-human agent). The Turing test consists of a human, the judge, conversing via computer terminal with two agents. One of the agents is another human being; the other is a machine. If the judge cannot reliably tell which is which, the Turing test tells us that the machine should be considered to have the same intelligence as a human.

Block’s proposal was to build a computer program that would store within its memory banks a precomputed reply to every possible question it might conceivably be asked. Everything: from “What are your fondest memories of my Uncle Morris?” to “Do you prefer the smell of vanilla extract to gasoline?” to “Could you summarize the plot of Romeo and Juliet in a hundred words or less?” To many nonsensical queries, the program could be instructed to deliver an answer expressing confusion or bewilderment. Its meaningful answers might be written to consistently express a single perspective or outlook, thus simulating not just a person but a personality. Block’s assertion is that such a program, despite not being intelligent, could pass a Turing test.

Blockhead is one specific example from a family of philosophical thought experiments I like to call “lookup table consciousness”: imaginary constructs that simulate consciousness by maintaining a massive list of actions to take in response to every imaginable circumstance. The Chinese Room is another, although the two differ in that the Chinese Room is usually said to possess some rule set or program that transforms input into output, whereas a Blockhead stores every possible output explicitly. Proponents of the Chinese Room such as John Searle argue that no computer program or machine, however well-constructed, could ever be conscious in the same way a human being is. Block, however, makes only the weaker claim that some structures could simulate consciousness without actually being conscious.

I have previously discussed reasons why the Chinese Room could not pass a Turing test, and therefore would not cast doubt on the claims to intelligence of any machine that could. Namely, because it is said to rely on a stateless set of condition-action rules, the Chinese Room could not adequately respond to repetitious or context-dependent questions. However, we could imagine a Blockhead that could do this. Rather than a simple list of queries and answers (which would be unmasked by the same stratagem), we could imagine a Blockhead that stores every possible conversation in a form analogous to a branching tree, where each question and answer represents a decision point that branches out into an innumerable array of possibilities for the next query and reply. In such a scenario, every reply that is given depends on what has come before, and so there is a realistic possibility of answering context-sensitive questions correctly.
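To make the branching-tree idea concrete, here is a minimal sketch in Python. The node structure, function names, and the microscopic sample tree are all my own illustrative inventions, not anything from Block’s paper; the point is only that each reply is selected by the entire conversational path so far, with no analysis of the input beyond a lookup.

```python
# A toy sketch of the branching-tree Blockhead: every node maps a
# possible question to a precomputed reply plus the subtree of
# possible follow-up conversations.

def make_node(reply, followups=None):
    return {"reply": reply, "followups": followups or {}}

# A microscopic fragment of what would be an astronomically large tree.
TREE = {
    "What is your name?": make_node(
        "I'm called Blockhead.",
        {
            "What did I just ask you?": make_node(
                "You asked me what my name is."
            ),
        },
    ),
}

def converse(tree, questions):
    """Answer each question by pure lookup -- no analysis of the input,
    just a walk down the precomputed tree of conversations."""
    replies = []
    node_map = tree
    for q in questions:
        node = node_map.get(q)
        if node is None:
            replies.append("I don't follow you.")
            node_map = {}  # off the precomputed tree; no branches remain
            continue
        replies.append(node["reply"])
        node_map = node["followups"]
    return replies
```

Because the lookup key at each step is implicitly the whole path taken so far, the repeated-question trick that unmasks a stateless rule set fails here: “What did I just ask you?” gets the contextually correct answer, despite no comprehension occurring anywhere.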

It should be clear just what a staggeringly impossible and pointless endeavor this is. Even if every atom of the visible universe were used for storage (if the entire cosmos were turned into computronium), we would still not have even a fraction of a fraction of the capacity required to build a Blockhead, to say nothing of the unimaginable time it would take to compile a list of answers to every syntactically valid question. But of course, we are doing philosophy here, and it is logical possibility, not practicality, that is relevant in that field. Even though a Blockhead will never be built, if one were, what would that tell us?

To more clearly illuminate the principle at work, consider a different thought experiment: the Chance Conversation Machine. The CCM is another program designed to participate in a Turing test, one that takes input from a keyboard and sends output to a terminal. But the CCM makes no effort to create an intelligible response to its interrogator’s queries. No matter what input data it receives, it discards that data, generates a random stream of bits and outputs them to the screen.

Of course, the vast majority of the time this will result in total gibberish. But if the CCM’s output is truly random, all possible outcomes are guaranteed to occur eventually. Once in a great while, its random output will fall into the patterns that code for English characters. Once in an even greater while, these characters will form meaningful words. And once in an unimaginably enormous while, the CCM will apparently respond meaningfully to the most recent thing its interrogator said. It may send a response that dazzles us with penetrating insight, provoke gales of laughter at its razor wit, or respond to our troubles with understanding and sympathy. It may even seem to be aware that its output is purely random, and express its apparent regret that its next reply probably will not be so erudite.
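The CCM itself is almost trivially simple to sketch, which is part of the point. The following Python fragment is my own illustration (the function names are invented): the machine discards its input entirely and emits uniformly random bytes, and a crude filter shows how rarely those bytes would even decode to printable English text, let alone a relevant reply.

```python
import random
import string

def ccm_reply(interrogator_input, n_bytes=64, seed=None):
    """Chance Conversation Machine: ignore the input entirely and
    emit a uniformly random stream of bytes."""
    del interrogator_input  # discarded, per the thought experiment
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n_bytes))

def looks_like_english_text(raw):
    """Crude first filter: did the random bytes happen to decode to
    printable ASCII? For 64 random bytes this succeeds with probability
    roughly (100/256)**64 -- vanishingly small, and meaningful prose
    is rarer still by many further orders of magnitude."""
    try:
        text = raw.decode("ascii")
    except UnicodeDecodeError:
        return False
    return all(c in string.printable for c in text)
```

Even this generous filter, which accepts any printable gibberish, almost never fires; a filter for grammatical, contextually apt English would make the wait astronomically longer, which is exactly the “unimaginably enormous while” described above.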

Plainly, the CCM is not conscious, though it might occasionally seem to be so. Consciousness by definition requires genuine understanding of one’s situation, not merely meaningful output in response to it. The CCM has the latter, but not the former. For the same reason, a Blockhead is not conscious either. Neither of these imaginary constructs performs any actual analysis of its sensory data, and without analysis, there can be no genuine comprehension. The capacity for analysis is not a sufficient condition for consciousness, but it is clearly a necessary one.

Where a Blockhead simulates consciousness using unrealistically enormous amounts of space, the CCM simulates consciousness using similarly unrealistically enormous amounts of time. Although logically we must account for bizarre possibilities like these, they are not realistic possibilities. It entails no self-contradiction to imagine them existing, but they could never actually be built. In particular, a Blockhead could not exist in our universe: there is not enough matter in the cosmos to store all its possible actions, and even if there were, the finite speed of light sets a horizon on communication, making it impossible to query the multibillion-light-year-distant memory banks that would need to be consulted whenever the consciousness simulator was presented with a new challenge to react to.

Far more reasonable, given the evidence available to us and our knowledge of the underlying laws of physics, is that there are and can be no such things as Blockheads. Far more reasonable is that any agent, whether organic or mechanical, that can pass a Turing test can do so because it performs some process analogous to thinking and understanding. This does not provide a logically airtight proof that an agent that succeeds at a Turing test must be intelligent, but then again, when do we ever have that impossible degree of certainty about anything?

As I said, genuine understanding and not merely meaningful response is a necessary condition for consciousness, and the two are not logically required to go together. In the strictest sense, this is true. But in our imperfect, inductive world, we can depend on the latter to be a reliable indicator of the former. To the degree we believe our worldview is not the product of a Cartesian demon, concocting illusions to deceive us, we should similarly believe that anything that passes a Turing test is truly conscious and not merely a cunning simulation.

About Adam Lee

Adam Lee is an atheist writer and speaker living in New York City. His new novel, Broken Ring, is available in paperback and e-book. Read his full bio, or follow him on Twitter.

  • valhar2000

    Very good, very good indeed. It always seemed to me that these thought experiments made some very basic mistake, but I could never quite put my finger on it (though, to be honest, I never really tried).

  • Alex Weaver

    Of course, the vast majority of the time this will result in total gibberish. But if the CCM’s output is truly random, all possible outcomes are guaranteed to occur eventually. Once in a great while, its random output will fall into the patterns that code for English characters. Once in an even greater while, these characters will form meaningful words. And once in an unimaginably enormous while, the CCM will apparently respond meaningfully to the most recent thing its interrogator said. It may send a response that dazzles us with penetrating insight, provoke gales of laughter at its razor wit, or respond to our troubles with understanding and sympathy. It may even seem to be aware that its output is purely random, and express its apparent regret that its next reply probably will not be so erudite.

    I can’t shake the feeling I’ve argued with a few of these… ^.^

  • valhar2000

    I can’t shake the feeling I’ve argued with a few of these…

Once, I downloaded a freeware chat program, and used it to chat with other people on IRC, by pasting the program’s replies into IRC and the chatters’ replies into the program. I got a girl’s phone number that way…

In “Darwin’s Dangerous Idea” Dennett actually mentions the unreliability of Turing’s Test for this very reason.

  • http://importreason.wordpress.com Simen

    We could imagine a Blockhead that stores every possible conversation in a form analogous to a branching tree, where each question and answer represents a decision point that branches out into an innumerable array of possibilities for the next query and reply.

    Such a machine would meet this criterion:

Neither of these imaginary constructs performs any actual analysis of its sensory data, and without analysis, there can be no genuine comprehension.

    Because it would be branching based on previous data, it must be analyzing some of its data.

  • andrea

    Hmm, I know that some “humans” would fail a Turing test. Actual understanding is quite rare. And a “meaningful response” isn’t that common either.

  • http://www.patheos.com/blog/daylightatheism/ Ebonmuse

In “Darwin’s Dangerous Idea” Dennett actually mentions the unreliability of Turing’s Test for this very reason.

    The usual setup of a Turing test is that the person is explicitly told that one of the participants they are chatting with is a machine, and has to figure out which one that is. An uninformed person can be surprisingly easy to fool; I believe Carl Sagan mentioned in one of his books that the extremely simplistic “Eliza” chat program could not only fool naive people but even caused some of them to develop strong emotional attachments to it. But generally, when a person is alert to the possibility, machines simulating intelligence are fairly easy to detect.

    Because it would be branching based on previous data, it must be analyzing some of its data.

    Analyzing in the sense I was referring to means performing some sort of transformation or processing on a statement to extract the meaning – analysis in the literal sense of “breaking apart”. A Blockhead doesn’t analyze any of its incoming data, but just does a bit-by-bit comparison between its input and an entry from its data banks.

  • George Jelliss

    I get the impression that creationists are very often like blockheads. Ask them a question and they look up the answer in “Answers in Genesis”. This is why they keep repeating old arguments that have long been refuted.

As an alternative to the two schemes described, how about constructing an “ironist” or a “contrarian” who twists whatever you say into a paradox or an opposite view? This would require a knowledge (database) of the meanings of words and of grammatical structures, but would just twist the grammatical structures in reply. Well, it’s an idea; I wonder if it’s been considered.

  • Alex Weaver

Once, I downloaded a freeware chat program, and used it to chat with other people on IRC, by pasting the program’s replies into IRC and the chatters’ replies into the program. I got a girl’s phone number that way

    Nothing terribly surprising about that; I’ve certainly known (well, been aware of) plenty of girls who’ve given their numbers to blockheads… ;/

    As for George’s proposal, if this “contrarian” does anything more sophisticated than simply adding verbal “nots” to the sentence and repeating it, I should think it would of necessity need either a blockhead construction or else intelligent analytical capacity.

  • http://inthenuts.blogspot.com King Aardvark

    Damn, Andrea, you beat me to it.

  • consilium

    Very, very late to the conversation here, but for future latecomers, as a computer science major, I see something very important to point out:
    Without getting into the details of an Abstract Syntax Tree (see http://en.wikipedia.org/wiki/Abstract_syntax_tree for more about that if you’re curious), Blockhead actually /does/ perform computations – specifically, precached computations: a human (presumably) built the database of all /reasonable/ answers to the queries Blockhead might ever encounter. That computation happened in a human brain, ahead of time, but it was computation done nonetheless.
This rather means that the human architect(s) are a /part/ of Blockhead, which parallels one of the main refutations of the Chinese Room – the Chinese Room contains the /work/ and computation of at least one sentient being (even leaving out the poor drudge stuck blindly translating Chinese by rote), in its construction of answers to human querents.
    As an aside, the Chance Conversation Machine could be argued to /never/ produce meaningful responses – its responses are all random, and only the happenstance, ignorance, or wishful thinking of human conversants would lead anyone to believe otherwise. The extremely rare cases where it produces even one ‘meaningful’ response illustrate that fact. The human conversants merely read meaning into random output.