Morality as Evolved; What about God?

April 22, 2019

I am first going to furnish you with an excerpt from Robert Sapolsky’s Behave before discussing the content within a theological context. Of course, what Sapolsky has to say is nothing new or groundbreaking, but it serves as a good introduction to the topic.


Much as infants demonstrate the rudiments of hierarchical and Us/Them thinking, they possess building blocks of moral reasoning as well. For starters, infants have the bias concerning commission versus omission. In one clever study, six-month-olds watched a scene containing two of the same objects, one blue and one red; repeatedly, the scene would show a person picking the blue object. Then, one time, the red one is picked. The kid becomes interested, looks more, breathes faster, showing that this seems discrepant. Now, the scene shows two of the same objects, one blue, one a different color. In each repetition of the scene, a person picks the one that is not blue (its color changes with each repetition). Suddenly, the blue one is picked. The kid isn’t particularly interested. “He always picks the blue one” is easier to comprehend than “He never picks the blue one.” Commission is weightier.

Infants and toddlers also have hints of a sense of justice, as shown by Kiley Hamlin of the University of British Columbia, and Paul Bloom and Karen Wynn of Yale. Six- to twelve-month-olds watch a circle moving up a hill. A nice triangle helps to push it. A mean square blocks it. Afterward the infants can reach for a triangle or a square. They choose the triangle. Do infants prefer nice beings, or shun mean ones? Both. Nice triangles were preferred over neutral shapes, which were preferred over mean squares.

Such infants advocate punishing bad acts. A kid watches puppets, one good, one bad (sharing versus not). The child is then presented with the puppets, each sitting on a pile of sweets. Who should lose a sweet? The bad puppet. Who should gain one? The good puppet.

Remarkably, toddlers even assess secondary punishment. The good and bad puppets then interact with two additional puppets, who can be nice or bad. And whom did kids prefer of those second-layer puppets? Those who were nice to nice puppets and those who punished mean ones.

Other primates also show the beginnings of moral judgments. Things started with a superb 2003 paper by Frans de Waal and Sarah Brosnan. Capuchin monkeys were trained in a task: A human gives them a mildly interesting small object—a pebble. The human then extends her hand palm up, a capuchin begging gesture. If the monkey puts the pebble in her hand, there’s a food reward. In other words, the animals learned how to buy food.

Now there are two capuchins, side by side. Each gets a pebble. Each gives it to the human. Each gets a grape, very rewarding.

Now change things. Both monkeys pay their pebble. Monkey 1 gets a grape. But monkey 2 gets some cucumber, which blows compared with grapes—capuchins prefer grapes to cucumber 90 percent of the time. Monkey 2 was shortchanged.

And monkey 2 would then typically fling the cucumber at the human or bash around in frustration. Most consistently, they wouldn’t give the pebble the next time. As the Nature paper was entitled, “Monkeys reject unequal pay.”

This response has since been demonstrated in various macaque monkey species, crows, ravens, and dogs (where the dog’s “work” would be shaking her paw).

Subsequent work by Brosnan, de Waal, and others fleshed out this phenomenon further:

• One criticism of the original study was that maybe capuchins refused to work for cucumbers because grapes were visible, regardless of whether the other guy was getting paid in grapes. But no—the phenomenon required unfair payment.

• Both animals are getting grapes, then one gets switched to cucumber. What’s key—that the other guy is still getting grapes, or that I no longer am? The former—if doing the study with a single monkey, switching from grapes to cucumbers would not evoke refusal. Nor would it if both monkeys got cucumbers.

• Across the various species, males were more likely than females to reject “lower pay”; dominant animals were more likely than subordinates to reject.

• It’s about the work—give one monkey a free grape, the other free cucumber, and the latter doesn’t get pissed.

• The closer in proximity the two animals are, the more likely the one getting cucumber is to go on strike.

• Finally, rejection of unfair pay isn’t seen in species that are solitary (e.g., orangutans) or have minimal social cooperation (e.g., owl monkeys).

Okay, very impressive—other social species show hints of a sense of justice, reacting negatively to unequal reward. But this is worlds away from juries awarding money to plaintiffs harmed by employers. Instead it’s self-interest—“This isn’t fair; I’m getting screwed.”

How about evidence of a sense of fairness in the treatment of another individual? Two studies have examined this in a chimp version of the Ultimatum Game. Recall the human version—in repeated rounds, player 1 in a pair decides how money is divided between the two of them. Player 2 is powerless in the decision making but, if unhappy with the split, can refuse, and no one gets any money. In other words, player 2 can forgo immediate reward to punish selfish player 1. As we saw in chapter 10, Player 2s tend to accept 60:40 splits.

In the chimp version, chimp 1, the proposer, has two tokens. One indicates that each chimp gets two grapes. The other indicates that the proposer gets three grapes, the partner only one. The proposer chooses a token and passes it to chimp 2, who then decides whether to pass the token to the human grape dispenser. In other words, if chimp 2 thinks chimp 1 is being unfair, no one gets grapes.
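To make the incentive structure concrete, here is a minimal sketch of the payoffs in the chimp Ultimatum Game described above. The function names and the simple "veto anything unequal" rule are my own illustrative assumptions, not the actual experimental protocol:

```python
# Minimal sketch of the chimp Ultimatum Game payoffs (illustrative only;
# names and the veto rule below are assumptions, not the real protocol).

EQUAL = (2, 2)    # token 1: two grapes for each chimp
UNFAIR = (3, 1)   # token 2: three grapes for the proposer, one for the partner

def play(token, partner_accepts):
    """Return (proposer_grapes, partner_grapes) for one round.

    If the partner withholds the token from the human grape dispenser,
    no one gets any grapes.
    """
    if not partner_accepts(token):
        return (0, 0)
    return token

# A partner who vetoes the unfair split:
fair_minded = lambda token: token == EQUAL

print(play(EQUAL, fair_minded))   # equal split goes through: (2, 2)
print(play(UNFAIR, fair_minded))  # veto, so no one gets grapes: (0, 0)
```

The point of the setup is visible in the second call: vetoing an unfair split is costly to the partner as well, which is exactly what makes acceptance or rejection informative about a sense of fairness rather than simple greed.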

In one such study, Michael Tomasello (a frequent critic of de Waal—stay tuned) at the Max Planck Institutes in Germany found no evidence of chimp fairness—the proposer always chose, and the partner always accepted unfair splits. De Waal and Brosnan did the study in more ethologically valid conditions and reported something different: proposer chimps tended toward equitable splits, but if they could give the token directly to the human (robbing chimp 2 of veto power), they’d favor unfair splits. So chimps will opt for fairer splits—but only when there is a downside to being unfair.

Sometimes other primates are fair when it’s at no cost to themselves. Back to capuchin monkeys. Monkey 1 chooses whether both he and the other guy get marshmallows or it’s a marshmallow for him and yucky celery for the other guy. Monkeys tended to choose marshmallows for the other guy. Similar “other-regarding preference” was shown with marmoset monkeys, where the first individual got nothing and merely chose whether the other guy got a cricket to eat (of note, a number of studies have failed to find other-regarding preference in chimps).

Really interesting evidence for a nonhuman sense of justice comes in a small side study in a Brosnan/de Waal paper. Back to the two monkeys getting cucumbers for work. Suddenly one guy gets shifted to grapes. As we saw, the one still getting the cucumber refuses to work. Fascinatingly, the grape mogul often refuses as well.

What is this? Solidarity? “I’m no strike-breaking scab”? Self-interest, but with an atypically long view about the possible consequences of the cucumber victim’s resentment? Scratch an altruistic capuchin and a hypocritical one bleeds? In other words, all the questions raised by human altruism.

Given the relatively limited reasoning capacities of monkeys, these findings support the importance of social intuitionism. De Waal perceives even deeper implications—the roots of human morality are older than our cultural institutions, than our laws and sermons. Rather than human morality being spiritually transcendent (enter deities, stage right), it transcends our species boundaries. [my emphasis]


What do these things tell us? Within the context of primates and other social animals, the foundations of morality are not only present and plainly observable, but evolved. Their functionality for sociality and social cohesion is clear. And this is just the tip of the iceberg as far as the evolution of morality, and morality in other species, is concerned.

But what about God?

This is where you can see why so many theists have an issue with evolution. Evolution gets rejected because so many things that have theological purchase, things like morality, can be seen to have clearly evolved for their functional purpose, and they become problematic for the theist for these very reasons.

The theist really has two options. Firstly, they can reject evolution outright. Secondly, they can be theistic evolutionists. Obviously, from a naturalistic and atheistic point of view, the theistic evolutionist is the far preferable kind of theist. At least they are somewhat reasonable and open to logic, science and data. And yet, still, one wonders how they deal with the subject of evolved personality traits. For example, Steven Pinker’s superb How the Mind Works, a great synopsis of the evolutionary underpinnings and processes behind everything to do with our personalities and minds, must still be a very difficult book for a theistic evolutionist to read.

Of course, if you deny evolution, none of this is a problem. The real problem, however, is that if you deny evolution, all of this data and science still exists. It doesn’t suddenly disappear. Burying your head in the sand doesn’t make the rest of the world disappear; it just means you only get to see granules of sand in front of your eyes. And nothing else. The world becomes a very one-dimensional place when you do this; there is no benefit to the person doing this in terms of understanding the world around them.

God is the Foundation of Objective Morality

Goodness me, I’ve written enough on the fact that the term “objective” doesn’t make any sense, particularly in the context of conceptual nominalism. Indeed, what we have here is a two-pronged attack on God, whether the battleground is objective morality or any of the other similarly evolved human traits that also have theological importance.

If morality evolved due to its functionality and usefulness, then the grounds of morality are… their functionality and their usefulness. Morality isn’t underwritten by God; God has no explanatory use here, either in terms of causality or in terms of moral philosophy.

As humans have built on these foundations of morality, both the intuitive and the reasoned aspects, using the many parts of the brain that Sapolsky spells out in minute detail throughout his book, we have created an intricate framework of morality that gets woven into our culture and our relationships with each other, and even with other species. But the core foundation of this morality is its evolved functionality within parts of the brain.

As Sapolsky continues:

Many moral philosophers believe not only that moral judgment is built on reasoning but also that it should be. This is obvious to fans of Mr. Spock, since the emotional component of moral intuitionism just introduces sentimentality, self-interest, and parochial biases. But one remarkable finding counters this.

Relatives are special. Chapter 10 attests to that. Any social organism would tell you so. Joseph Stalin thought so concerning Pavlik Morozov ratting out his father. As do most American courts, where there is either de facto or de jure resistance to making someone testify against their own parent or child. Relatives are special. But not to people lacking social intuitionism. As noted, people with vmPFC damage make extraordinarily practical, unemotional moral decisions. And in the process they do something that everyone, from clonal yeast to Uncle Joe to the Texas Rules of Criminal Evidence considers morally suspect: they advocate harming kin as readily as strangers in an “Is it okay to sacrifice one person to save five?” scenario.

Emotion and social intuition are not some primordial ooze that gums up that human specialty of moral reasoning. Instead, they anchor some of the few moral judgments that most humans agree upon.

Which is to say that both reasoning and intuition are important, yes. Anyone who has read around the subject, from Kahneman to Damasio, knows this. What I find important in the context of this piece is that there are functional reasons why intuitive morality evolved – for example, it preferentially benefits the survival of kin over non-kin (think the selfish gene). This is hardcoded into the brain, and if you damage those intuitionist parts of the brain, the subject then makes moral decisions that, purely rationally, do not favour kin over non-kin.

The brain, and the way it has evolved, is key, then, to understanding morality. And that is a truth many philosophers don’t like. And a truth that theologians really struggle with.

My next piece will be on how theistic evolutionists deal with the above. Or how they don’t, really.

