I’ve long had people tell me I should take a class on cognitive science, from my undergrad philosophy of mind prof to, more recently, various MIRI/LessWrong folks. But I’m out of school and very likely will never take a cognitive science class, so I decided to do the next best thing and pick up a textbook on the subject, specifically José Luis Bermúdez’ Cognitive Science.
Part of my problem going in is that I wasn’t really sure what “cognitive science” was. I had a decent academic background in philosophy of mind and neuroscience and even a couple psychology courses; I had read my Steven Pinker and other evo psych, and lately I’d been learning more about AI, but I wasn’t sure if any of that added up to “knowing cognitive science.”
Because of this, I was quite happy when I read Bermúdez’ description of his goals in the preface, where he says, “The challenge is to give students a broad enough base while at the same time bringing home that cognitive science is a field in its own right, separate and distinct from the disciplines on which it draws.” That gave me the hope that Bermúdez would show how to integrate those disciplines (later listed as psychology, philosophy, linguistics, anthropology, neuroscience, and AI).
However, reading the book, I got the feeling that Bermúdez’ commitment to making each of these disciplines an equal partner in the enterprise sometimes led to overselling certain results. For example, chapter 2 is dedicated to three “milestones” in the development of cognitive science as a field, the first of which is (according to Bermúdez) SHRDLU.
After describing SHRDLU’s limitations, Bermúdez writes:
But to criticize SHRDLU for neglecting pragmatics, or for steering clear of some complex linguistic constructions such as counterfactuals (statements about what would have happened, had things been different) is to miss what is genuinely pathbreaking about it. SHRDLU illustrates a view of linguistic understanding as resulting from the interaction of many, independently specifiable cognitive processes. Each cognitive process does a particular job – the job of identifying noun phrases, for example. We make sense of the complex process underlying a sentence by seeing how it is performed by the interaction of many simpler processes (or procedures). These cognitive processes are themselves understood algorithmically (although this is not something that Winograd himself stresses). They involve processing inputs according to rules. Winograd’s procedures are sets of instructions that can be followed mechanically, just as in the classical model of computation.
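To make the idea concrete, here is a toy sketch of that procedural decomposition. This is not Winograd’s actual code or grammar – SHRDLU was written in Lisp and MICRO-PLANNER, and was far more sophisticated – just a minimal illustration of “understanding” a block-world command emerging from the interaction of simple, independently specifiable procedures, each following rules mechanically:

```python
# Toy illustration (not SHRDLU itself): sentence understanding built from
# simple procedures that each do one job and can be followed mechanically.

def tokenize(sentence):
    """Procedure 1: split a sentence into lowercase word tokens."""
    return sentence.lower().rstrip(".!").split()

def find_noun_phrase(tokens):
    """Procedure 2: identify a determiner + adjective* + noun sequence."""
    determiners = {"the", "a"}
    adjectives = {"red", "green", "blue", "big", "small"}  # toy lexicon
    for i, tok in enumerate(tokens):
        if tok in determiners:
            j = i + 1
            while j < len(tokens) and tokens[j] in adjectives:
                j += 1
            if j < len(tokens):
                return tokens[i : j + 1]  # e.g. ['the', 'red', 'block']
    return None

def interpret(sentence, world):
    """Procedure 3: combine the simpler procedures to parse a
    'pick up ...' command and check the object against the block world."""
    tokens = tokenize(sentence)
    if tokens[:2] != ["pick", "up"]:
        return "command not understood"
    np = find_noun_phrase(tokens)
    obj = " ".join(np[1:]) if np else None
    if obj in world:
        return f"picking up the {obj}"
    return f"no {obj} in the scene"

world = {"red block", "green pyramid"}
print(interpret("Pick up the red block.", world))
```

The point of the sketch is architectural, not linguistic: none of the individual procedures “understands” anything, yet their interaction produces behavior we’d loosely describe as understanding a command – which is exactly the view Bermúdez says SHRDLU illustrates.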
Well, okay, SHRDLU may illustrate a particular view of linguistic understanding, but I’m not sure it gives us much reason to think that that view is the correct one – or indeed, that it tells us anything positive about the human mind.
It’s worth contrasting this with Steven Pinker’s presentation of the significance of AI and robotics in his book How the Mind Works. Pinker is open about the fact that not all of the disciplines that make up cognitive science have progressed equally far. In his account, the value of AI (as it’s existed so far) for understanding the mind is mainly negative; the failures of AI tell us what the human mind is not.
In particular, the difficulty of making a robot do many things that seem easy for humans (like play with blocks, or speak English as well as a five-year-old) tells us that evolution has done something quite difficult in enabling humans to do these things.
Pinker’s approach might be more accurately described as telling us not “how the mind works” but “what the mind does,” and I don’t say this as a criticism. He’s honest about the fact that science has managed to tell us a lot about what the mind does in an evolutionary and functional sense, while how the mind does these things (both in scientific terms, and in terms of the sort of precise algorithms you could program into an AI) remains largely an unsolved puzzle (though not, Pinker emphasizes, a mystery).