You’ve read the clickbait headlines: “Sex: Stone Age Mind in a Modern-Day World”; “Five Ways our Caveman Instincts Get the Best of Us”; and even “Why Has America Elected a President Adapted to a Stone Age Way of Life?” The core claim of pop evolutionary psychology is that our brains evolved to solve problems in our ancestral environment, leaving them mismatched to our new, modern lifestyles. This mismatch leads us to do stupid things like eat too many sweets because our Stone Age minds evolved to go gaga over high-quality energy sources. In technical language, evolutionary psychologists claim that our minds consist of cognitive modules – discrete mechanisms that solved particular problems in our ancestral environment. But in a new book and article, a leading psychologist calls this picture of how the mind works into question. Instead, she argues that the mechanisms that make the human mind unique are cognitive gadgets, forged anew every generation by cultural evolution and learning.
The book’s author, cognitive psychologist Cecilia Heyes, is one of the most well-respected and formidable researchers working in the behavioral sciences today. She’s made a career arguing that many of the remarkable abilities that distinguish humans from other animals aren’t genetically hard-wired, but rather emerge naturally from social interactions. Heyes calls these learned abilities “cognitive gadgets” – the name of her book.*
Take imitation, for instance.
As I’ve written here quite a lot recently, humans have an uncanny ability to mimic one another’s bodily postures, gestures, facial expressions, and actions. Chimpanzees, our closest relatives, can emulate one another, but they don’t really do much imitation. That is, they can copy the basic outcome of an action – sticking a twig into a nest to gather up a tasty snack of termites, for example – but they don’t carefully copy every step or pay close attention to the chimps they learn from. For their part, older chimps don’t take pains to model their termite-catching techniques for younger pupils, the way a human father might demonstrate how to carve out a dugout canoe for his son.
Many researchers have argued that humans’ dazzling imitation and teaching skills emerge directly from their genetic endowment, a hardwired capacity that comes online in infants with no outside input needed. Others have argued that, while imitation is genetically hardwired, it depends on external social cues to get rolling. This view sees imitation as similar to language, which is supposedly based on genetic coding but needs the “trigger” of hearing other people speaking a language to get off the ground. If children don’t get exposed to language within an early developmental window, the opportunity for learning it passes and they’ll never be able to master human speech.
Imitation as Social Learning
But Heyes believes that imitation isn’t a genetically programmed ability at all. Instead, she argues that it emerges from patterned interactions between children and their social environments, particularly their parents. Endowed with remarkable associative learning abilities, human infants are able to quickly build mental and practical associations between their own actions and those that they observe in others. These generalist cognitive abilities enable infants to solve the tricky problem of how to map their own movements onto the actions they perceive in others – a hand gripping a tool in a particular way, for example.
This “correspondence problem” is a thorny one: scientists have no agreed-upon explanation for how humans know which muscles to activate to accurately copy someone else’s actions. To solve this mystery, many evolutionary psychologists posit that our brains house an “imitation mechanism” that accomplishes this one-to-one mapping. But it’s not clear how it does this.
Rather than recruiting a prepackaged evolutionary module, Heyes thinks we actively learn which muscles correspond to the actions we see through trial, error, and social feedback. For instance, parents often mimic their children’s facial expressions in an exaggerated way. This playful mimicry helps infants learn what it feels like to make the faces they see their parents making. Over time, they come to build a database of “vertical associations” between efferent (active or outgoing) and afferent (passive or incoming) sensory and motor data. According to Heyes, generalized mimicry skills are built from these core building blocks of learned sensorimotor connections, social feedback, and trial and error.
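The learning process Heyes describes can be sketched as a toy simulation: an agent with no innate mapping between seen actions and its own muscles acquires one purely from correlated experience, because a caregiver mirrors the infant's actions back at it. Everything here – the action names, the learning rate, the delta-rule update – is an illustrative assumption of mine, not Heyes's actual model:

```python
import random

# Toy sketch of "vertical associations": no innate mapping is assumed
# between observed actions and one's own motor commands; the mapping is
# learned entirely from mirrored interactions with a caregiver.

ACTIONS = ["smile", "frown", "tongue_out"]

# Association strengths from (seen action) -> (own motor command),
# all starting at zero.
weights = {seen: {motor: 0.0 for motor in ACTIONS} for seen in ACTIONS}

LEARNING_RATE = 0.2

def mirrored_interaction(motor_command):
    """The infant acts; the caregiver mirrors the action back, so the
    seen action matches the infant's own motor command (contingent mimicry)."""
    return motor_command

def learn(trials=500, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        motor = rng.choice(ACTIONS)          # infant spontaneously acts
        seen = mirrored_interaction(motor)   # caregiver mirrors it
        # Error-correcting (delta-rule) update: strengthen the
        # co-occurring pair, leave the alternatives at zero.
        for m in ACTIONS:
            target = 1.0 if m == motor else 0.0
            weights[seen][m] += LEARNING_RATE * (target - weights[seen][m])

def imitate(seen_action):
    """After learning, produce the motor command most strongly
    associated with the observed action."""
    return max(weights[seen_action], key=weights[seen_action].get)

learn()
```

After training, `imitate("smile")` returns `"smile"`: the correspondence problem is solved without any prepackaged mapping, just as Heyes's account requires – the associations do all the work.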
One of Heyes’s big complaints about standard evolutionary psychology is her claim that it’s “cognition blind.” That is, the cognitive modules or mechanisms that evolutionary psychologists claim underlie much of our behavior are essentially black boxes. They supposedly produce effects and run automatic computations that affect our behavior, but evolutionary psychologists can’t say how they work internally.
Heyes believes that computationalism – the approach to cognitive science that models the brain as a biological computer – provides resources for mapping out those processes. In order to understand how the brain solves complicated problems like imitation or the acquisition of language, we need to identify the step-by-step computations the brain needs to go through. This means black boxes aren’t allowed. And when we break down problems like imitation or language acquisition, “visibly” laying out their component parts like tools on a workbench, Heyes believes we find that domain-general cognitive processes and learning are all that’s needed to accomplish the most uniquely human forms of cognition.
This doesn’t mean humans aren’t innately different from other animals, though. Heyes argues that humans come with a unique “starter kit” of three unusual traits: (1) high sociability and tolerance; (2) biased attention toward other people; and (3) highly developed associative reasoning and learning. Together, these traits are enough to get complex cultural learning off the ground.
Cognitive Modules vs. the Cultural Starter Kit
Let’s briefly return to language. As I mentioned above, traditional cognitive scientists, cognitive linguists, and evolutionary psychologists have argued that uniquely human cognitive modules are what make the difference between humans and our close genetic relatives. For example, the linguist Noam Chomsky famously argued that human brains come equipped with a “language acquisition device” that provides innate knowledge of an abstract, universal grammar of which all real-world languages are simply versions. Other animals lack this cognitive module, which parsimoniously explains why chimps, dolphins, and parrots can occasionally learn a few dozen words, but seem incapable of learning complex grammar or true language.
Chomsky and other advocates for psychological nativism believe that language is so impossibly complex and orderly that we could never learn it from simple trial and error. They observe that we start using grammar and inferring complex meanings very early in life – long before we’ve had enough exposure to real language to have learned all of its rules. This ability to learn language despite the poverty of the stimulus – that is, despite a relative lack of concrete information and feedback about how grammar should work – suggests that grammar preexists in the human mind, so that hearing real language simply unleashes our ability to reconstruct it.
In this nativist view, language is essentially an instinct (as reflected in the title of linguist Steven Pinker’s famous book). In biological terms, instincts are hardwired behavioral responses that are activated, or “released,” by appropriate stimuli. An excellent human example? Latching onto and sucking a nipple. Infants know how to do this automatically, from the very first minutes after birth. The complex motor sequences for attaching the mouth to the mother’s nipple, sucking, and drinking are automatically activated, with no conscious planning or prior learning, when the baby is placed near the mother’s breast.
By contrast, while Heyes acknowledges that humans do possess many low-level instincts (including for breastfeeding, presumably), she thinks that language and motor mimicry are something else entirely. These cognitive “gadgets” emerge from social interactions scaffolded by the three ingredients of the cultural starter kit:
- High sociability and tolerance means that we have many more non-aggressive interactions with our peers and group members than other animals do.
- Thanks to our strong bias for paying attention to faces and voices, we voraciously learn from the people we encounter in the midst of this highly social context.
- And our highly developed causal reasoning and associative learning skills mean that we recognize and reconstruct patterns with unprecedented speed and accuracy.
Put these things together, and we can build complex associations between words and meanings, between actions and muscle activations, and between grammatical rules and applications in a way that can seem almost miraculous. But it isn’t miraculous at all. It’s the natural outcome of highly social animals living together in groups, paying very close attention to each other’s movements and vocalizations, and extracting patterns from what they see and hear. By comparison, it’s the black-box cognitive modules of evolutionary psychology that look (to Heyes) like magic.
Cultural Evolutionary Psychology
Heyes goes further, arguing that the cultural starter kit of sociability, biased attention, and associative learning provides the foundations for cultural evolution writ large – that is, the Darwinian selection of cultural practices and habits. In turn, this Darwinian cultural evolution gives rise to many of the remarkable characteristics of human societies, including language and technology.
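The Darwinian selection of cultural practices can be made concrete with a standard toy model from the cultural-evolution literature: learners copy demonstrators with a bias toward more effective variants, with occasional innovation playing the role of mutation. The variant names and payoffs below are illustrative assumptions, not data from Heyes's book:

```python
import random

# Toy payoff-biased cultural transmission: fire-making variants spread
# in proportion to how well they work, plus rare innovation.
# All payoffs and rates are made-up illustrations.

PRACTICES = {"hand_drill": 0.2, "bow_drill": 0.5, "fire_plough": 0.3}

def next_generation(population, rng, innovation_rate=0.01):
    """Each learner copies a demonstrator chosen in proportion to the
    payoff of the demonstrator's practice; a few innovate at random."""
    variants = list(PRACTICES)
    weights = [PRACTICES[p] for p in population]
    new_pop = rng.choices(population, weights=weights, k=len(population))
    return [rng.choice(variants) if rng.random() < innovation_rate else p
            for p in new_pop]

rng = random.Random(1)
population = [rng.choice(list(PRACTICES)) for _ in range(200)]
for _ in range(50):
    population = next_generation(population, rng)
```

After a few dozen generations the highest-payoff variant comes to dominate the population – selection acting on practices rather than genes, which is all "Darwinian cultural evolution" means here.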
But the fact that we depend on cultural evolution for so many of our key behaviors comes with a scary possibility: we can lose them. A catastrophic event, such as a worldwide pandemic or a nuclear war, wouldn’t just destroy our current technologies. It would potentially destroy our very ability to recover them. In a précis (summary) of Heyes’s book in the influential journal Behavioral and Brain Sciences, she writes that
cultural evolutionary psychology implies that human minds are more agile, but also more fragile, than previously thought. We are not stuck in the Pleistocene past with Stone Age minds, and new technologies – social media, robotics, virtual reality – provide the stimulus for further cultural evolution of the human mind. However, we have more to lose. Wars and epidemics can wipe out not just know-how, but also the means to acquire that know-how.
This kind of cultural evolutionary psychology gives us a different picture of human nature than classical evolutionary psychology. Evolutionary psychology sees an immutable human nature defined by the many innate capabilities and cognitive modules that we’re born with. Cultural input can modify how those capabilities express themselves, but it can’t change the baseline package of problem-solving mechanisms in the brain.
By contrast, Heyes’s cultural evolutionary psychology depicts human nature as far more flexible, since it’s made up of the culturally inherited “gadgets” that we learn from social interaction:
The primary implication of evolutionary causal essentialism is that human nature is labile; it changes over historical rather than geological time.
But if this is true, how are we not just back to the “blank slate” view of human nature that Steven Pinker attacked in another of his straightforwardly named books? After all, Heyes emphasizes that her model of cultural cognition rests heavily on “general-purpose mechanisms of learning.” Well, Heyes would reply that the human mind isn’t a blank slate because it comes pre-equipped with the three ingredients of the cultural starter kit. Other animals lack these ingredients, which means they can’t build up the complex cultures that humans have. Ergo, we’re not blank slates – even though nearly all of the concrete, practical abilities we have come from cultural learning, not genes.
Not everyone agrees, of course. In the peer commentaries for the Behavioral and Brain Sciences précis, evolutionary psychologist Marco Del Giudice complains that Heyes is making an “almost-blank slate argument.” Another group of commentators claims Heyes “tends to neglect the fundamental role of biology in shaping our cultural worlds,” even as they agree that the main source of evolutionary selection pressure on humans comes from our social environment, not just our physical one.
What do I think? Heyes’s critiques of standard evolutionary psychology are, in many ways, very welcome. The most egregiously oversimplified versions of evo-psych theorizing imply that the human mind stopped evolving 150,000 years ago, and that most of the problems it evolved specialized modules to solve had to do with brute survival, not social learning. Since then, we’ve been stuck in a kind of time warp, our bodies housing Stone Age minds in modern worlds. For this reason – and I’m not the first to say this – low-quality evo psych sometimes suffers from a sort of myopic individualism, making the tacit assumption that brains mostly evolved to grapple one-on-one with the physical environment, not a complex cultural one.
However, many more sophisticated versions of evolutionary psychology don’t make these mistakes. And, to my mind, Heyes’s theory of cognitive gadgets doesn’t really account for the fact that imitation and language are truly cultural universals – found everywhere, in all societies, without exception. If they really were simply reconstructed and relearned anew in each generation, we would likely see more variation between societies. Yet as the anthropologist Terrence Deacon has pointed out, there aren’t even any intermediate or “simple” versions of language to suggest that some societies have developed further linguistically than others. The smallest, most isolated tribes often have the most complex and intricate grammars, and there’s no such thing as a language without a complete grammar – that is, a coherent set of rules that all speakers implicitly understand and use. The universality and completeness of language and imitation suggest that something about them may really be genetically hardwired.
By contrast, another human cultural near-universal that probably is not genetically hardwired is the controlled use of fire. Humans have been teaching and passing down the skills needed to master fire for longer than we’ve been Homo sapiens. But we very clearly don’t have a fire-making instinct – a set of coordinated motor actions that come prepackaged in our brains, allowing us to strike a spark on tinder automatically. Instead, we have to effortfully and consciously learn how to make and control fire in each new generation. Accordingly, while nearly all societies do have means of making fire, there are several that appear to have lost it. It really is possible to just forget how to make fire, yet to still live in coherent societies that have language and other human hallmarks. Controlled fire, then, is a very good candidate for a “cognitive gadget.” But language and imitation? I’m going to need a lot more convincing.
Cognitive Gadgets and Religion
This debate is directly relevant to the study of religion. Why? Because the standard model in the cognitive science of religion emerges from evolutionary psychology. For more than two decades, psychologists and cognitive scientists have argued that religious beliefs and behaviors are natural byproducts of our heavily modular minds. For instance, one famous hypothesis claims that our cognitive modules for agency detection are hyperactive, leading us to perceive invisible actors everywhere around us – which, in turn, become the ghosts, gods, and demons of folk and institutional religion alike. So if our brains turn out not to be riddled with overactive evolutionary modules after all, then a lot of standard theories in the cognitive science of religion might be in trouble.
Another emerging trend in the scientific study of religion might stand a better chance of fitting with the theory of cognitive gadgets: predictive processing models of the brain. Heyes’s theory explicitly posits that the brain is a Bayesian prediction machine, actively learning from every experience as it builds a rich, updatable internal model of the world. This predictive processing framework lends itself easily to general-purpose models of cognition. In fact, predictive processing models of the mind often overlap with (but aren’t the same as) embodied cognition theories, which depart from computationalism in favor of a more phenomenological, non-dualistic understanding of the brain. In many theories of embodied cognition, the brain doesn’t trade in symbols or computational operations at all. Instead, physical processes link the body to the brain, and these processes themselves are cognition.
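The "Bayesian prediction machine" framing can be illustrated with a toy update, using the agency-detection example from above: a prior over hypotheses (was that rustle in the grass an agent, or just wind?) is revised by Bayes' rule each time evidence arrives. The hypotheses and probabilities are illustrative assumptions of mine, not anyone's actual model:

```python
# Minimal Bayesian belief update: prior beliefs over hypotheses are
# revised by Bayes' rule as observations come in. Numbers are made up.

def bayes_update(prior, likelihoods, observation):
    """Return posterior P(hypothesis | observation) via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two toy hypotheses about a rustle in the grass:
prior = {"agent": 0.1, "wind": 0.9}

# How likely each hypothesis makes each possible observation:
likelihoods = {
    "agent": {"rustle": 0.8, "silence": 0.2},
    "wind":  {"rustle": 0.3, "silence": 0.7},
}

# Each posterior becomes the prior for the next observation:
belief = prior
for obs in ["rustle", "rustle"]:
    belief = bayes_update(belief, likelihoods, obs)
```

Each posterior feeding back as the next prior is the sense in which the internal model is continually "updatable"; repeated ambiguous evidence gradually shifts belief toward the agent hypothesis, which is roughly what the hyperactive agency detection hypothesis describes.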
But Heyes, true to form, refuses to fit into either of these molds, upholding both predictive processing and computational models of the brain. In fact, she argues that only computationalism can enable us to precisely map out the granular predictive processes of the mind. This position puts her out of step with both classic evolutionary psychology and with the most wild-and-woolly approaches to embodied cognition. While Heyes might dismiss the concept of evolved cognitive modules too swiftly, her work stands to shake up some of the most established doctrines in early 21st-century cognitive science – and, by extension, in the scientific study of religion itself.
* Full disclosure: This post isn’t a book review, but a summary of some of Heyes’s key ideas. I haven’t read the book itself. I have read the detailed précis in Behavioral and Brain Sciences along with most of the peer commentaries and Heyes’s replies. I’m also pretty familiar with Heyes’s work from my own dissertation research, so the cognitive gadget argument is well-trodden territory.