Supernatural punishment and the evolution of cooperation

July 14, 2017

For the past year and a half or so, I’ve been working on a project that uses computer models to study religion. (You can read about this project here, here, here, and here.) Such a project might sound absurd – how could you use computer simulations to study something as intangible and subjective as religion? Well, it just so happens that my colleague on the project, Justin Lane, recently published a paper that offers a great answer to this question. Using a computer model of a classic economics game, he tested the predictions of an important recent theory in the scientific study of religion: the supernatural punishment hypothesis.

Justin’s article appeared in the journal Religion, Brain & Behavior and wasn’t, technically, an article. Instead, it was a commentary, part of a book symposium in which experts from across disciplines discussed, critiqued, and responded to the book God Is Watching You: How the Fear of God Makes Us Human, by Oxford political scientist Dominic Johnson. Johnson’s book argues that large-scale human cooperation – which makes us different from chimpanzees, bonobos, or any other mammal – evolved in part because of “supernatural punishment,” or the belief that God or gods would enact retribution on anyone who didn’t obey society’s rules.

The key question the supernatural punishment hypothesis tries to answer is this: how could reliable human cooperation have evolved, given that there’s always an incentive for individuals to cheat – that is, to benefit from others’ cooperation without cooperating in return? The public goods game offers a good illustration of this free rider problem. Say there are five people with $5 each. Each turn, everyone chooses either to contribute money to a collective pot or to keep it for themselves. The pot is then doubled, and the total is redistributed equally among all five players. So if everybody puts in all their money, the pot will be $25. Doubled and distributed equally, that works out to $10 each. Not a bad return.

But what if you – and only you – decide sneakily not to contribute? That would leave four people contributing $5 each, for a pot of $20. Doubled to $40 and split five ways, that works out to $8 apiece at the end of the turn. Since you kept your initial $5 to yourself, you end up with $13, compared with a measly $8 for everyone else! That’s why it’s nearly always tempting to cheat in a public goods scenario.
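
To make the arithmetic concrete, here’s a minimal Python sketch of a single round; the names and structure are mine, purely for illustration:

```python
# One round of the five-player public goods game described above.
ENDOWMENT = 5
MULTIPLIER = 2

def payoffs(contributions):
    """Each player keeps what they didn't contribute,
    plus an equal share of the doubled pot."""
    pot = sum(contributions) * MULTIPLIER
    share = pot / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([5, 5, 5, 5, 5]))  # everyone cooperates -> [10.0, 10.0, 10.0, 10.0, 10.0]
print(payoffs([0, 5, 5, 5, 5]))  # one free rider     -> [13.0, 8.0, 8.0, 8.0, 8.0]
```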

The trouble is, as soon as the others realize that someone hasn’t been contributing their fair share, they’ll start withholding their own contributions in return. In a famous paper by economists Ernst Fehr and Simon Gächter, actual lab subjects playing the public goods game reduced their contributions to the shared pot to almost nothing over only a few rounds. Lacking confidence that others would contribute their fair share, the subjects simply refused to invest in the public good – which then collapsed.

But here’s the twist: when subjects were given the opportunity to “punish” one another by paying a small amount to eliminate some of another player’s earnings, something remarkable happened. Contributions to the collective good skyrocketed. Fearful of being punished, participants didn’t want others to see them as free riders. Costly punishment stabilized cooperation.
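
The punishment stage can be sketched the same way. In the version I’m assuming here, punishing costs the punisher one unit and deducts three from the target, a fee-to-fine ratio common in this experimental literature; the exact numbers are illustrative:

```python
PUNISH_FEE = 1   # cost paid by the punisher
PUNISH_FINE = 3  # amount deducted from the punished player

def apply_punishment(earnings, punishments):
    """Apply a list of (punisher, target) index pairs to one round's earnings."""
    out = list(earnings)
    for punisher, target in punishments:
        out[punisher] -= PUNISH_FEE   # punishing is costly for the punisher...
        out[target] -= PUNISH_FINE    # ...and costlier still for the target
    return out

# Two cooperators punishing the free rider from the earlier example is
# enough to make free riding unprofitable:
print(apply_punishment([13, 8, 8, 8, 8], [(1, 0), (2, 0)]))  # -> [7, 7, 7, 8, 8]
```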

But how does punishment itself remain stable, given that it’s costly for the punishers? The question applies especially to anonymous interactions, where there’s no guarantee that punishing another player will ever pay off for the punisher. The question of who punishes free riders becomes a second-order free rider problem: how does the group get people to punish those nasty non-cooperators, when punishment benefits the whole group but its costs fall only on the individual punishers?

This is where the supernatural punishment hypothesis comes in. In his book, Johnson argues that the belief in supernatural punishment essentially “outsources” punishment to posited gods or spirits. When everybody in the community believes that God (or Allah, or the ancestors, or what have you) will punish them for moral lapses, then the community receives most of the benefits of costly punishment without actually incurring the full costs. To be sure, sometimes people need to be punished anyway, in order to maintain the credibility of punishment. But overall, Johnson argues, supernatural punishment makes cooperation more sustainable, because it circumvents the substantial costs of actually punishing free riders. According to Johnson, a society of believers wouldn’t suffer from (many) free riders.

Justin’s Model of Supernatural Punishment

Normally, in a book symposium, commentators simply reflect on, critique, or offer challenges to the book. But Justin did something different, offering novel results from a new study. Taking the computer code from a well-known cooperation game, a Prisoner’s Dilemma tournament, Justin amended the rules to simulate Johnson’s predictions about how supernatural punishment should affect cooperation in a social group. The Prisoner’s Dilemma is conceptually similar to the public goods game I described above, but instead of groups, individuals meet one-on-one. At every turn, players can choose to either cooperate or defect. The payoffs depend on what both players choose to do (a code sketch of the payoff table follows the list):

  • Defect against a cooperator (be a free rider): Very high payout.
  • Cooperate with a cooperator: Moderately high payout.
  • Defect when the other guy defects: Low payout.
  • Cooperate with a defector, like a sucker: Lowest payout.
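
Here is that payoff table in Python. These are the conventional Prisoner’s Dilemma values (temptation > reward > punishment > sucker’s payoff); Justin’s model may use different numbers:

```python
C, D = "cooperate", "defect"

# (my_payoff, opponent_payoff) for each combination of moves.
PAYOFFS = {
    (D, C): (5, 0),  # temptation: free ride on a cooperator
    (C, C): (3, 3),  # reward: mutual cooperation
    (D, D): (1, 1),  # punishment: mutual defection
    (C, D): (0, 5),  # sucker's payoff: cooperate with a defector
}
```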

In the Prisoner’s Dilemma tournament, different strategies compete against each other. In Justin’s model, the strategies ran from “always cooperate” to “always defect,” with intermediate strategies in between. One of those intermediate strategies, “tit for tat,” is famous for winning the well-known Prisoner’s Dilemma tournaments conducted by political scientist Robert Axelrod in the early 1980s. Tit for tat starts out by cooperating, and then simply copies whatever the other player did on the last round. In the model, each strategy was matched up with every other one for 100-round games, and at the end of each set the strategy with fewer points was probabilistically replaced by a copy of the winner. In this way, strategies that performed better spread through the population over the course of a simulation, emulating Darwinian evolution.
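
Continuing the sketch, tit for tat and a 100-round game might look like the following. This is a minimal reconstruction of the general setup, not Justin’s actual code; it reuses the C, D, and PAYOFFS definitions from the previous block:

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return their_history[-1] if their_history else C

def play_game(strategy_a, strategy_b, rounds=100):
    """Play one game and return both strategies' total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_game(tit_for_tat, tit_for_tat))  # two tit-for-tatters -> (300, 300)
```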

Justin also added a “supernatural punishment” belief variable to the agents in his model, which had one key effect: it made agents less likely to defect. This simulated the “fear of God” that Johnson argues was a crucial ingredient in stabilizing human cooperation. When agents were “afraid” that not cooperating with other agents would lead to divine punishment, they tended to cooperate. The belief in divine punishment could take three forms: agents could believe that individual non-cooperators would be punished, that the entire group would be punished if enough agents behaved badly, or that the gods/God never actually punished anybody.
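
One plausible way to implement such a belief variable is to scale down an agent’s probability of defecting. This is a guess at the mechanism, not the model’s actual code, and FEAR_FACTOR is a hypothetical parameter (C and D come from the earlier sketch):

```python
import random

FEAR_FACTOR = 0.5  # hypothetical: how strongly belief suppresses the urge to defect

def choose_move(base_defect_prob, believes_in_punishment):
    """Believers defect less often than their base strategy would dictate."""
    p_defect = base_defect_prob
    if believes_in_punishment:
        p_defect *= FEAR_FACTOR  # fear of divine punishment tilts the agent toward cooperating
    return D if random.random() < p_defect else C
```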

Belief in divine punishment was treated as a heritable binary trait. At the end of each interaction, agents with more points would generally be replicated, while those with fewer points would disappear – complete with their trait of supernatural belief or unbelief. That way, if supernatural belief tended to be associated with more successful agents, it would gradually spread through the population. (Importantly, evolution in the model was probabilistic, not deterministic – the more successful agent in each interaction wasn’t always replicated, just most of the time.)
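
The selection step might be sketched like this. The agent structure and the 0.9 replacement probability are illustrative assumptions, not values from the paper:

```python
import random

def select(winner, loser, replace_prob=0.9):
    """The loser is usually, but not always, replaced by a copy of the winner."""
    if random.random() < replace_prob:
        loser = dict(winner)  # the copy inherits the strategy AND the belief trait
    return winner, loser

believer = {"strategy": "tit_for_tat", "believer": True}
skeptic = {"strategy": "always_defect", "believer": False}
winner, loser = select(believer, skeptic)
```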

To test the model, Justin let the simulation run under a wide variety of parameter combinations. Parameters are the settings for a model: for example, one parameter determined how many agents there were in a given simulation, while another determined how dramatically supernatural belief modified the tendency to cooperate. Overall, the model ran 22,500 simulations across the range of possible parameters.
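
A sweep like this is usually just a loop over every combination of settings. The parameter names and values below are invented for illustration; only the total of 22,500 runs comes from the study:

```python
from itertools import product

grid = {
    "n_agents": [50, 100, 200],
    "fear_factor": [0.25, 0.5, 0.75],
    "belief_type": ["individual", "group", "none"],
}

# Run one batch of simulations for every combination of parameter values.
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    # run_simulation(params)  # hypothetical entry point for one batch of runs
```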

Unexpected Results

Results from the experiments showed that the belief that God punishes individuals, not groups, led to the highest percentage of believers in the population. Interestingly, in Justin’s model, tit for tat (the winning strategy in Axelrod’s original tournaments) was only stable when agents were programmed to believe in individual, not group, divine punishment. The dominant strategy under other kinds of belief was “always defect” – the least friendly and most uncooperative type of agent.

In fact, “always defect” dominated the model overall. However, supernatural belief also dominated – in more than half of all 22,500 model runs, all the agents in the simulation ended up believing in God’s wrath.

In a challenge to Johnson’s theory, the percentage of agents that “believed” in supernatural punishment was positively correlated with defection as a dominant strategy. In other words, supernatural punishment came to spread across the majority of simulated populations, but, oddly, it didn’t seem to have any population-level benefit for cooperation.

In his response to Justin’s commentary, Johnson praised the model for articulating previously unrecognized implications and predictions of supernatural punishment theory. But he criticized Justin’s write-up for not reporting the individual-level and within-population correlations between supernatural punishment and cooperation. Across the entire swathe of simulations, supernatural belief was negatively correlated with cooperation, but that doesn’t necessarily tell us whether individual agents who believed in divine punishment were more likely to cooperate. That information, Johnson claims, would be crucial for understanding whether the model really contradicts or corroborates the predictions of supernatural punishment theory.

Johnson also suggested that the model would be improved if agents could interact via assortment, rather than randomly. Many game-theory models of the evolution of human cooperation – including those Axelrod describes in his pop-science book The Evolution of Cooperation – depend on the ability of cooperators to preferentially assort with other cooperators. If you can exclude non-cooperators and rogues, then cooperation becomes a stable strategy. But if your entire population is randomly mixed, cooperation might never get the spark it needs to come to life.

Finally, Johnson also pointed out that the Prisoner’s Dilemma isn’t the same as a public goods game (such as the five-player game I described above), and that supernatural punishment might very well have very different effects in a different type of game. In other words, the Prisoner’s Dilemma tournament might not be the best model for human cooperation in groups – the context in which Johnson argues belief in punitive gods and spirits evolved.

Despite Johnson’s criticisms, Justin’s model is a great example of what modeling and simulation can do for the study of big questions in religion and human evolution. The sheer extent of cooperativeness in human societies is an evolutionary conundrum. How did we become so good at contributing to shared goods in groups? We can’t run experiments on early hominid populations, because time travel isn’t a thing (yet). But we can build simulations that reflect what we believe are the likely conditions of our early evolution, play with the settings, and see whether our best theories are reflected in the results. This can’t answer every question, but it can at least help us understand the questions more clearly – and argue with each other more effectively.

