Moral Psychologist Joshua D. Greene and Experimental Philosopher Joshua Knobe

Below is a great dialogue between Harvard psychologist Joshua Greene and Yale “experimental philosopher” Joshua Knobe laying out some of the basics of moral psychology. I took notes as I watched the video, summarizing the major points for myself and for your use, dear blogreader.  It will be easier to just watch the video, of course, and since what I offer is almost always mere summary with little analysis, reading my jottings may be entirely superfluous. But if you’d rather skim or read my summarizations and my occasional replies than watch a video that will take an hour or so to finish, by all means feel free to do that instead. Or also!

During the video they allude to information which can be found in the following links:

De Brigard, Mandelbaum, and Ripley’s paper “Responsibility and the Brain Sciences”
Josh K. on De Brigard, Mandelbaum, and Ripley’s paper
Inbar, Pizarro and Bloom’s paper “Disgust sensitivity predicts intuitive disapproval of gays”
Science Daily on Inbar, Pizarro and Bloom’s work
Josh G. (et al.), “An fMRI Investigation of Emotional Engagement in Moral Judgment”

Greene’s work is primarily on the psychological sources of our moral intuitions.  What is interesting to note is that he is not arguing from the “is” of how we form our intuitions to an “ought” that we should form them as we do, but is using psychology to do the opposite: to show us that some of our moral intuitions have undermining psychological causes.  For example, we have disgust responses that we take to be moral responses which, upon cognitive reflection, we would recognize are not morally justified.  Recognizing the ways in which we systematically make unjustified psychological leaps from disgust to moral judgment alerts us to this sort of error and helps us avoid being swayed by it.

Why not consider all moral intuitions debunked once explained?

Greene proposes a thought experiment: you can create one of two species.  One is just like us, interested only in a small set of people closely related to them, using their resources only for those close to them while others farther away suffer.  The other spreads its love around, values all the members of its species equally, and distributes its resources equally.  Greene thinks the former is what we are like and the latter is what we are not like.  And he argues that the latter is the species we would create if we were ourselves going to create one of the two.  He thinks that would be a better world.  But he argues that it is not worth trying to change the deeply rooted psychological tendencies toward parochial preferences which we have in this world.  Curiously, he is therefore sketching a moral ideal which he does not actually want us to implement.  It is a moral ideal which, ironically, does not have the normative force to morally command us to see it implemented, out of recognition of the limits of our psychological potential.  In this way, for Greene, ought does not imply can.

Knobe then proffers a most interesting challenge.  What if Greene’s own parents were offered two pills, one to make them Greene’s omni-benevolent altruists and one to make them normal humans with more concentrated love for their son?  What would Greene say his own parents should do: love him less while fulfilling his ideal, or love him with partiality instead?  Greene expresses all the normal psychological responses, admitting it would be strange, weird, and painful, but follows through with the logical conclusion of his consequentialism: if his parents’ global benevolence made the overall world a much better place, he’d have to say that that is the better choice and that they morally should take it (even at his personal expense as a child who, being a normal child, expects to be preferred by his parents as most human children are).  It “bristles against” his intuitions for such parents to “neglect” their child by our ordinary standards, but he won’t take that to mean it is immoral that they do so, just “bizarre.”

Greene thinks that, given the way our brains are, there will be no way for us to stop being locally preferring humans.  That would only happen with radical changes in our physiology.  What we can hope for is people who will at least decide to, say, buy a $200 stroller when they could have bought an $800 one, and give the $600 difference to charity.  He thinks that if our culture and educational system instilled these values, such that we did not see it as so strange to care as much for those around the world as for those close to us, we would get closer to this ideal.  In this way, being educated about evolutionary psychology and the quirky ways we form our moral intuitions, if done systematically, could nudge our intuitions toward a more morally universalist mindset.

At 17:22 Greene talks about punishment.

Two views.  The forward-looking, consequentialist take: deterrence, rehabilitation, and incapacitation.  “Punishing in hopes of making the future better.”

Retribution:  Someone’s done a bad thing and therefore deserves to suffer.

Greene is not a big fan of retribution.  When we understand human action better, the desire for retribution goes away.  A hurricane does a lot of damage, but you don’t want to make it suffer; it’s just a machine.  Should we come to recognize ourselves as mechanical systems, and really grasp the mechanical nature of our actions as stemming from physical events in our brains, retribution will lose its grip.  In this way some of our moral commitments will be undone.

Knobe argues, though, that even if we abstractly, intellectually realized that people are not free and responsible, we would still be chemically induced by our brains to feel strongly that the one who harms us must be punished.

Greene points out in reply, though, that Knobe has a study which actually shows people can overcome their immediate psychological responses.  Subjects show explicit approval but implicit disapproval of gay kissing.  In Knobe’s study, he evaluated people’s views on things that may have been transgressive in an earlier time but are now seen as okay.  So they asked about interracial sex and about gay men french kissing on the street.  100% of subjects said there was nothing wrong with interracial sex.  And they were less inclined to say straight couples were wrong for kissing on the street than to say gay men were wrong for french kissing on the street.

Knobe points out that people are more likely to see side effects as intentional when they think those side effects are morally bad than when they think they are morally good.  So they gave subjects stories in which gay kissing and interracial sex are side effects, to see whether people would call the actions that led to them intentional.

In the example story, a record executive is warned that his videos will have the side effects of increasing interracial sex and public gay french kissing, but he says, “I don’t care about that, I just want to make money, so I’m going to release them anyway.”  He releases them and they do increase interracial sex and gay french kissing on the street.  Did he intentionally encourage gay men to french kiss on the street?  People with a high disposition to feel disgust say yes, and those with a low disposition to feel disgust say no.  So those with the high disposition to feel disgust are subconsciously seeing the french kissing as bad (and therefore judging the action which promotes it to be intentional), even though consciously they do not think it is wrong.

I think Knobe’s conclusion, at least as summarized here, sounds rather drastically underdetermined by the evidence.  It is a huge assumption that, because in some cases people are more prone to attribute intention when they think an action is wrong, and because people do seem more likely to associate disgust with wrongness, disgust must be what is at work here; any number of other factors could figure in subjects’ reasons for thinking the record executive acted intentionally.

Greene makes an interesting point, though, when he notes that, assuming Knobe’s experiment shows what he thinks it does, the difference between people’s subconscious disgust and their explicitly professed moral judgments need not be attributed to subjects lying to the experimenters about their true feelings out of political correctness.  The subjects in the experiment are Cornell undergrads.  Greene thinks that the vast majority of them not only would publicly say they are not against gay sex, but would even secretly vote in a gay-friendly way and otherwise act in accord with their abstract judgment rather than their emotional reflexes.  It is an example of people really changing their actions and not just saying the right thing to an experimenter.

So Greene asks: if people can come to see gay kissing as fine even though it was not thought so before, why can’t we comparably overcome our desire to punish out of a recognition of cognitive science?  Why can’t we react not with the desire to punish but compassionately, as Greene has?

Knobe offers recent studies in which people were told to imagine a rape.  The person who committed it had an injured pre-frontal cortex which completely caused the behavior, and subjects were told that if they had the same injury they would do exactly the same thing.  They were then asked whether he was responsible.  Still, most people said yes.  But Greene points out that not as many said so as held the rapist responsible when no brain injury was included in the story.

In the abstract, when people think of a world without free will, they see people as not morally responsible.  But when we tell a story about a particular person who commits the wrong, we hold him responsible.  The point is that abstractly we can grasp the unfairness of holding such people responsible, but when the example is personalized, we wind up reacting emotionally and wanting to punish.  We can grasp the point abstractly but need to overcome our emotions.  Scientists can observe this conflict in the brain using cognitive neuroscience.  In the future, maybe these conflicts in the brain could be settled in the utilitarian’s favor.

Greene brings up the trolley example, where we have to decide whether to let 5 people die or push 1 person to his death in order to save the other 5.  Watching subjects’ brains, scientists can distinguish two portions of the brain at work: there is more activity in the more cognitive portion of the brain when subjects opt for the consequentialist judgment, as against activity in the more emotional portion.  There is a conflict between an intuitive response and a more reflective, considered response.

Going with our emotions is like using our camera on its regular, “point and shoot” settings.  Usually they work fine.  But sometimes, when we need to make adjustments for an unusual situation, we need to put the camera into manual mode and pay close attention to the peculiar factors at hand.  In unusual moral situations, our normal first reactions may not be as reliable as they usually are.  We need to step back and evaluate.  The tension is between the efficiency of our ordinary moral settings and the flexibility of our cognitive capacities for reassessment.

When people are asked whether they would push someone in front of a trolley to stop it from killing 5 people, or whether they would pull a switch opening a trap door to drop that person in front of the trolley, they respond differently to the two scenarios.  60% say you should pull the trap door lever, but only 30% say you should push the person.  So our emotions can’t be the final guide, because they are not responding to rational factors.

But the issue is not emotion per se.  If our response were emotional but led us to the right judgment, there would be nothing wrong with its being emotional.  If a response were cognitive but led us to the wrong judgment, it would not be better for being unemotional.

Emotions are essentially heuristics, precompiled programs, like the automatic settings on your camera.  Usually the automatic settings are best, but it would be amazing if there were never a time you had to readjust your camera.  Because it is an automatic setting, an emotion will sometimes lead you astray.

In many kinds of circumstances, the emotions may be the better guide.  Aesthetic judgments, for example, should perhaps be guided more by emotional response than by reference to a “top down” theory.

Ultimately Greene thinks there is no external fact about what is right or wrong; the best we can do is be consistent with our values as they are.  What leads Greene to the cognitive, non-local perspective, and its preference for the world of universal love over the parochially loving one, is a matter of consistency with his own values.  He can make arguments against proponents of more parochial theories, but only in terms of the narrowness of their perspective.

Your Thoughts?

About Daniel Fincke

Dr. Daniel Fincke  has his PhD in philosophy from Fordham University and spent 11 years teaching in college classrooms. He wrote his dissertation on Ethics and the philosophy of Friedrich Nietzsche. On Camels With Hammers, the careful philosophy blog he writes for a popular audience, Dan argues for atheism and develops a humanistic ethical theory he calls “Empowerment Ethics”. Dan also teaches affordable, non-matriculated, video-conferencing philosophy classes on ethics, Nietzsche, historical philosophy, and philosophy for atheists that anyone around the world can sign up for. (You can learn more about Dan’s online classes here.) Dan is an APPA  (American Philosophical Practitioners Association) certified philosophical counselor who offers philosophical advice services to help people work through the philosophical aspects of their practical problems or to work out their views on philosophical issues. (You can read examples of Dan’s advice here.) Through his blogging, his online teaching, and his philosophical advice services each, Dan specializes in helping people who have recently left a religious tradition work out their constructive answers to questions of ethics, metaphysics, the meaning of life, etc. as part of their process of radical worldview change.

