Bermúdez’ Cognitive Science (a review)

I’ve long had people tell me I should take a class on cognitive science, from my undergrad philosophy of mind prof to, more recently, various MIRI/LessWrong folks. But I’m out of school and very likely will never take a cognitive science class, so I decided to do the next best thing and pick up a textbook on the subject, specifically José Luis Bermúdez’ Cognitive Science.

Part of my problem going in is that I wasn’t really sure what “cognitive science” was. I had a decent academic background in philosophy of mind and neuroscience and even a couple psychology courses; I had read my Steven Pinker and other evo psych, and lately I’d been learning more about AI, but I wasn’t sure if any of that added up to “knowing cognitive science.”

Because of this, I was quite happy when I read Bermúdez’ description of his goals in the preface, where he says, “The challenge is to give students a broad enough base while at the same time bringing home that cognitive science is a field in its own right, separate and distinct from the disciplines on which it draws.” That gave me the hope that Bermúdez would show how to integrate those disciplines (later listed as psychology, philosophy, linguistics, anthropology, neuroscience, and AI).

However, reading the book, I got the feeling that Bermúdez’ commitment to making each of these disciplines an equal partner in the enterprise sometimes led to overselling certain results. For example, chapter 2 is dedicated to three “milestones” in the development of cognitive science as a field, the first of which is (according to Bermúdez) SHRDLU.

After describing SHRDLU’s limitations, Bermúdez writes:

But to criticize SHRDLU for neglecting pragmatics, or for steering clear of some complex linguistic constructions such as counterfactuals (statements about what would have happened, had things been different) is to miss what is genuinely pathbreaking about it. SHRDLU illustrates a view of linguistic understanding as resulting from the interaction of many, independently specifiable cognitive processes. Each cognitive process does a particular job – the job of identifying noun phrases, for example. We make sense of the complex process underlying a sentence by seeing how it is performed by the interaction of many simpler processes (or procedures). These cognitive processes are themselves understood algorithmically (although this is not something that Winograd himself stresses). They involve processing inputs according to rules. Winograd’s procedures are sets of instructions that can be followed mechanically, just as in the classical model of computation.

Well, okay, SHRDLU may illustrate a particular view of linguistic understanding, but I’m not sure it gives us much reason to think that that view is the correct one – or indeed, that it tells us anything positive about the human mind.
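To make the view Bermúdez describes concrete: the idea is that understanding a sentence decomposes into many independently specifiable procedures, each following mechanical rules. Here is a minimal, hypothetical sketch of that decomposition – a toy made up for illustration, not SHRDLU’s actual code, and nowhere near its sophistication:

```python
# Toy illustration of "linguistic understanding as interacting procedures":
# each function is an independently specifiable process that follows
# mechanical rules, and a complex task is their composition.
# (Vocabulary and rules are invented for this example.)

DETERMINERS = {"the", "a"}
NOUNS = {"block", "pyramid", "box"}
VERBS = {"pick", "put", "move"}

def tokenize(sentence):
    """One simple procedure: split the input into word tokens."""
    return sentence.lower().split()

def find_noun_phrase(tokens):
    """Another procedure: rule-based noun-phrase identification
    (here, just 'determiner followed by noun')."""
    for i, tok in enumerate(tokens):
        if tok in DETERMINERS and i + 1 < len(tokens) and tokens[i + 1] in NOUNS:
            return tokens[i:i + 2]
    return None

def parse_command(sentence):
    """The 'complex' process, performed by composing simpler ones."""
    tokens = tokenize(sentence)
    verb = next((t for t in tokens if t in VERBS), None)
    return {"verb": verb, "object": find_noun_phrase(tokens)}

print(parse_command("pick up the block"))
# → {'verb': 'pick', 'object': ['the', 'block']}
```

Whether a system built this way tells us anything about how humans understand language is, of course, exactly the question at issue.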

It’s worth contrasting this with Steven Pinker’s presentation of the significance of AI and robotics in his book How the Mind Works. Pinker is open about the fact that not all of the disciplines that make up cognitive science have progressed equally far. In his account, the value of AI (as it’s existed so far) for understanding the mind is mainly negative; the failures of AI tell us what the human mind is not.

In particular, the difficulty of making a robot do many things that seem easy for humans (like play with blocks, or speak English as well as a five-year-old) tells us that evolution has done something quite difficult in enabling humans to do these things.

Pinker’s approach might be more accurately described as telling us not “how the mind works” but “what the mind does,” and I don’t say this as a criticism. He’s honest about the fact that science has managed to tell us a lot about what the mind does in an evolutionary and functional sense, while how the mind does these things (both in scientific terms, and in terms of the sort of precise algorithms you could program into an AI) remains largely an unsolved puzzle (though not, Pinker emphasizes, a mystery).

  • Verbose Stoic

    It strikes me that you are making the exact mistake that Bermudez warns against: criticizing SHRDLU for not being perfect without realizing that it made an advance and started down a path that is almost certainly correct: that linguistic processing is the result of a number of smaller cognitive processes all working in parallel as opposed to one separate “language module” that does all the work. So, no, it doesn’t get it right, but that doesn’t mean that it isn’t progress.

    One of the issues with AI is that you actually have two completely different approaches to it, one of which is actually useful for Cognitive Science but isn’t useful for building practical AIs, and the other that produces great and powerful results but does it in a way that makes it useless for Cognitive Science. The second approach is the one commonly taken in Computer Science departments, where they try to build an AI to solve or automate certain problems or tasks, and so where the focus is on getting human level COMPETENCE but not human level COGNITION. So whatever it takes to get the problem done is what’s used, and if it makes fewer mistakes than humans, avoiding some of our cognitive and psychological errors, then that’s exactly what we want. But if you want to treat AI as a way to MODEL human cognition — which is what Cognitive Science wants — then the fact that it doesn’t make the same mistakes we do suggests that it’s actually a bad model; if it really was modelling what we do, it would make the same mistakes, so if it doesn’t, it isn’t a good model.

    The AI that gets attention is the latter. The AI that Cognitive Science needs is the former.

    Also, I went through the introductory courses in Cognitive Science as part of my undergrad, and don’t recall mention of SHRDLU as being one of the really important ones …

