James Davison Hunter and Paul Nedelisky, in their new book called Science and the Good: The Tragic Quest for the Foundations of Morality, sketch the results of 500 years of the “scientific” study of morality.
Science can achieve Level One but is it overreaching to claim Level Two and Three ?
“Level One” results would provide specific moral commands or claims about what is genuinely valuable. They would demonstrate with empirical confidence what, in fact, is good and bad, right and wrong, or how we should live.
“Level Two” findings, while falling short of demonstrating some moral doctrine, would still give evidence for or against some moral claim or theory.
“Level Three” findings would provide scientifically based descriptions of, say, the origins of morality, or the specific way our capacity for moral judgment is physically embodied in our neural architecture, or whether human beings tend to behave in ways we consider moral. Evidence for these sorts of views doesn’t tell us anything about the content of morality—what is right and wrong—but it speaks to the human capacity for morality and in that sense is interesting.
Here is one such overreaching claim, which they quote:
As we come to a scientific understanding of morality, society is not going to descend into anarchy. Instead, we may be able to shape our moral thinking towards nobler ends. Which norms of fairness foster economic prosperity? What are the appropriate limits on assisting a patient’s end-of-life decisions? By recognizing morality as a property of the mind, we gain a magical power of control over its future.
This, the authors claim, is overreach. They probe three areas: (1) philosophical and methodological limitations, (2) pop science’s “moral molecule,” and (3) the blurred boundary between “is” and “ought.”
They take aim at Joshua Greene’s well-known and highly acclaimed study with these words:
He presents these as universal claims about human moral thought. But the experimental subjects whose brain activity is taken to be representative of all human moral thought are in fact just a handful of college students at elite, northeastern American universities. … Yet the moral impulses of these few Ivy League students are meant to tell us about the nature of moral thought for all of humanity: for elderly female Muslim subsistence farmers in northern India, indigenous hunter-gatherers in Papua New Guinea, and ambitious young atheistic businessmen in Shanghai. We have no good reason to think Greene has told us much about the moral thought of humanity, when all he has studied is the moral thought of a few Princeton students, especially given the likely influence of factors such as age, culture, and class on moral thought.
Nedelisky gets into the weeds with some of Greene’s theories and claims, and concludes that Greene has overreached his evidence; much of what Greene argues was already known. The authors then turn to primatology and the sympathy and empathy of monkeys (Frans de Waal).
De Waal is right that human morality often involves feelings of sympathy and empathy, and the actions that these feelings motivate. He may also be right that the development of this sort of altruistic behavior in nonhuman primates tells us something about how our own capacity for altruism developed.
However, there is still a significant gap between behavior being altruistic—even in the so-called moral sense—and the behavior being moral.
But is helping your neighbor with her rent merely on the basis of your sympathy for and empathy with her enough for your action to be moral? And if it is morally altruistic, is that enough to actually make it the right thing to do?
Certainly, de Waal has shown us a plausible account of the development and psychological functioning of certain tools we draw upon in moral judgment and action. Empathy and sympathy often form part of the basis for our moral behavior. What remains unclear, however, is how his theory of these capacities tells us anything about the nature of prescriptive morality, or about how to live.
You can act sympathetically yet immorally. With this in mind, calling sympathy the “centerpiece of human morality” is, at best, quite a stretch.
Now to pop-science theories of the “moral molecule,” oxytocin, and its supposed role in Paul Zak’s Trust Game experiments:
What Zak observed is that those who had trusted more and shared had higher levels of oxytocin, and those who shared more tended to come out of the game with more money than those who shared less. He tried to control for personality differences, couldn’t find any, and so concluded that the oxytocin is the key difference-maker here. Zak’s takeaway? People whose brains release more oxytocin are more trusting and wind up benefiting more in Trust Game scenarios.
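For readers unfamiliar with how a Trust Game pays out, here is a minimal sketch of the payoff arithmetic, assuming the standard Berg–Dickhaut–McCabe design (an endowment, a tripled transfer, and a voluntary return); Zak’s laboratory variant may differ in its exact parameters.

```python
# Minimal sketch of Trust Game payoffs, assuming the standard
# Berg-Dickhaut-McCabe design: the investor sends part of an
# endowment, the experimenter triples it, and the trustee chooses
# how much of the tripled pot to return.

def trust_game_payoffs(endowment, sent, returned, multiplier=3):
    """Return (investor_payoff, trustee_payoff) for one round."""
    pot = multiplier * sent
    if not (0 <= sent <= endowment and 0 <= returned <= pot):
        raise ValueError("transfers out of range")
    investor = endowment - sent + returned
    trustee = pot - returned
    return investor, trustee

# Example: with a $10 endowment, full trust plus an even split of the
# tripled pot leaves both players with $15, versus $10 and $0 when
# nothing is sent at all.
print(trust_game_payoffs(10, 10, 15))  # (15, 15)
print(trust_game_payoffs(10, 0, 0))    # (10, 0)
```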
[Zak:] Am I actually saying that a single molecule—and, by the way, a chemical substance that scientists like me can manipulate in the lab—accounts for why some people give freely of themselves and others are coldhearted bastards, why some people cheat and steal and others you can trust with your life, why some husbands are more faithful than others, and by the way, why women tend to be more generous—and nicer—than men? In a word, yes.
Without being able to show how oxytocin figures in the broader explanatory framework of moral thought and action, he is in the position of someone trying to tell us that what explains drunk driving accidents is the presence of alcohol in drivers’ blood. Certainly this is an important, even necessary factor. But focusing only on blood chemistry neglects many other explanatory elements, such as human responsibility in choosing to drive drunk or in choosing to become drunk, or the cultural, genetic, and psychological factors that bear on such decisions.
The news gets worse: not only has the basic science of the “moral molecule” not been replicated, but additional research has suggested that under some conditions oxytocin promotes aggression and defensiveness, emotions directly opposed to the cuddly ones Zak describes.
They come finally to this very significant observation: “This overreaching depends on obscuring the distinction between ‘Is’ and ‘Ought,’ between description and prescription. By fudging that line, one may give the impression that practical moral implications emanate from the science; that a special moral authority derives from scientific expertise. This tendency is pervasive, even among the brightest lights of the new moral science.”