The Doctrine of Double Effect

In my recent post on euthanasia, a commenter raised a thought experiment which I’d like to address at greater length:

In one dilemma, you are standing by a railroad track when you notice that a trolley, with no one aboard, is heading for a group of five people. They will all be killed if the trolley continues on its current track.

The only thing you can do to prevent these five deaths is to throw a switch that will divert the trolley onto a side track, where it will kill only one person. When asked what you should do in these circumstances, most people say that you should divert the trolley onto the side track, thus saving a net four lives.

In another dilemma, the trolley, as before, is about to kill five people. This time, however, you are not standing near the track, but on a footbridge above the track. You cannot divert the trolley. You consider jumping off the bridge, in front of the trolley, thus sacrificing yourself to save the five people in danger, but you realize that you are far too light to stop the trolley.

Standing next to you, however, is a very large stranger. The only way you can prevent the trolley from killing five people is by pushing this large stranger off the footbridge, in front of the trolley. If you push the stranger off, he will be killed, but you will save the other five. When asked what you should do in these circumstances, most people say that it would be wrong to push the stranger.

The conclusion that most people draw is that it is acceptable to throw the switch, but not to push the fat man onto the tracks. This argument is usually raised as an insuperable dilemma for utilitarianism, since in both cases the overall outcome (one person dies, five survive) is the same. Utilitarian reasoning would seem to consider both scenarios equivalent, yet most people feel very strongly that they are not the same.

The original commenter who raised this dilemma suggested, as others have, that this is evidence for the evolutionary origin of morality. The usual argument along these lines is that directly pushing another person into harm’s way is something that could have happened in our species’ past, and therefore our brains are primed to respond emotionally to it. Pulling a lever, however, is an action that nothing in our evolutionary heritage prepared us for. Therefore, we do not feel the same instinctive reaction of emotional repulsion, which leads most people to conclude that the first scenario is somehow more acceptable than the second.

Although I am a utilitarian (more specifically, a universal utilitarian), I believe that it’s allowable to pull the lever, but not to push the man onto the tracks. I’ll explain why in a moment, but first I’d like to make an observation. We didn’t have levers in our ancestral environment. But we also did not have trains. If people don’t have as strong a moral reaction to things we didn’t grow familiar with over millions of years, why do people show any emotional response to any of the train scenarios at all? If this reasoning were correct, shouldn’t our evolved minds fail to “grasp” the danger an onrushing train poses, and shouldn’t people react impassively to all versions of the train scenario? Clearly that is not the case.

I think there’s a superior explanation that illuminates the key difference between the two scenarios, and it’s this difference that most people intuitively grasp. The difference comes from a famous moral principle, the doctrine of double effect:

…sometimes it is permissible to cause such a harm as a side effect (or “double effect”) of bringing about a good result even though it would not be permissible to cause such a harm as a means to bringing about the same good end.

As the doctrine of double effect tells us, it is intent that makes all the difference. A person who shoots and kills someone else at random, in cold blood, is a murderer; a person who shoots and kills someone who was attempting to kill them is not a murderer, but has merely acted in justifiable self-defense, though the end result is the same. Likewise, if our country were at war with another, deliberately bombing a residential area in the enemy country to kill civilians and create panic and terror among the survivors would be a war crime. However, bombing a factory used to manufacture munitions for the army is a legitimate tactic of warfare, even if civilians work there and the bombing kills just as many of them. And the same is true of the trolley scenario. The key distinction between the two cases is the difference between harm that is unwanted but unavoidable and harm that is foreseen and intended. (This applies in the other direction as well: a man who shoots at the President, intending to assassinate him, is no less culpable if his bullet misses its target.)

The only question is whether a utilitarian moral system can recognize the importance of intent, and I have always firmly supported the position that universal utilitarianism can do this. UU’s fundamental counsel is that we should judge our actions not just by the harm they actually produce, but by the potential harm they might produce – in other words, by the intent of the actor.

In the trolley scenario, a person who acts with the intent to kill one person in order to save others has introduced a vast amount of potential suffering – as in the case where a healthy person is vivisected so that their organs can be given to the seriously ill, it raises the chilling prospect of a ruthless society where anyone’s life may be ripped away at any time without their consent in order to be used as a means to an end. No one could live happily in a society where such acts were the norm. The other trolley scenario, where the one person’s death is unintended but unavoidable, also introduces some potential suffering, but much less. It gives all people the proper assurance that their lives are valuable, that they will not be callously killed without notice to serve the interests of a stranger. Only in a society that shows this basic degree of respect for human life could the stability exist that makes happiness possible.
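As a rough illustration of how such a comparison might run, here is a toy sketch in Python. Every number in it is an invented assumption chosen only to show the shape of the reasoning, not a formal definition of universal utilitarianism: each option is scored by its direct deaths plus an assumed measure of the potential suffering the practice would introduce if it became the norm.

def expected_harm(direct_deaths, practice_risk, practice_harm):
    """Direct deaths plus the expected harm of the practice the act licenses.

    practice_risk and practice_harm are assumed, illustrative quantities:
    the chance that permitting this kind of act erodes everyone's assurance
    that they won't be killed as a means to an end, and how bad (measured
    in death-equivalents) that erosion would be.
    """
    return direct_deaths + practice_risk * practice_harm

# All figures below are invented for illustration only.
do_nothing = expected_harm(direct_deaths=5, practice_risk=0.0, practice_harm=0)
divert     = expected_harm(direct_deaths=1, practice_risk=0.1, practice_harm=5)
push       = expected_harm(direct_deaths=1, practice_risk=0.1, practice_harm=50)

print(do_nothing, divert, push)  # 5, 1.5, 6.0
# On these assumed weights, diverting beats doing nothing, while pushing
# the stranger (death intended as a means) comes out worst of the three.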

About Adam Lee

Adam Lee is an atheist writer and speaker living in New York City. His new novel, Broken Ring, is available in paperback and e-book. Read his full bio, or follow him on Twitter.

  • http://stupac2.blogspot.com Stuart Coleman

    We might not have had trains, but we did have things like stampeding game, which would work in this example. I don’t think that the switch is a good example of evolutionary morality anyway, since you could devise some scenario where the objects involved are all familiar to Pleistocene humans but that would probe the same question, and the outcome would probably be the same: the one where you actively kill someone is seen as bad, but passively killing that same person is seen as fine.

    Anyway, I hate these hypothetical questions since they’re so far removed from anything that actually happens. Granted, they can have implications for how we might actually build a morality, but I think that those implications would already be built into any reasonable moral code.

  • ST

    Ebon,

    First of all, thank you very much for addressing my post.

    As Stuart Coleman pointed out, there need not be a train or levers. I can easily imagine a scenario where you are in a tree at the intersection of three valleys, and you see a pride of lions heading towards a valley where five innocents lie defenseless. In the first scenario, you have the option of throwing a piece of meat down a ravine that leads into the second valley, distracting the lions, at the cost of sacrificing another man, equally defenseless, who lies in that second valley. To replace the fat man scenario, one can imagine that another tender but meaty young man is in the tree with you. You don’t have any meat chunks to throw down the ravine. You consider throwing yourself, but you are mostly skin and bone, and the lions won’t be satisfied. You could throw the tender man, which would save the other five.

    I disagree with Stuart as to the nature of the question. It is not hypothetical at all. Politicians and health care staff have to deal with similar questions on an almost daily basis. If I administer this vaccine to 10 million people, so many people will die of adverse reactions and allergies. If I don’t, there is a chance nobody will die, and a chance that a lot more people will die. What do I do? Do I administer the vaccine? It is a very pertinent scenario, unfortunately.

    Unfortunately, I cannot say I am quite satisfied with the reply just yet. I think differentiating between using people directly as means (sacrifice the man in the tree to stop the pride of lions from getting to the five innocents) and side effect (throwing a piece of meat towards the other valley – where another man lies defenseless against the lions) is a good step in the right direction. I am still troubled by the fact that the end result is the same, and that it could be construed that, given that we knew the man would die, it is not so much a side effect as it is using the man as a means, albeit indirectly.

    Is a man directly killed any more innocent than one indirectly killed? Are we any less culpable?

    Why do you reject the idea that this could be an evolutionary trait: direct, albeit well-intentioned murder bad, murder as a side effect ok? We both agree that we evolved our moral sense. Need it be perfect, or can it have, just like vision, a few imperfections that can be revealed in special conditions?

  • Mac

    I tend to feel the distinction is more about the emotions of taking direct versus indirect action in killing a person than about emotional detachment due to the means of death. I don’t disagree that the net benefit is seen to be higher in both cases if the lone individual is sacrificed.

    Perhaps it’s also because we’d expect to receive resistance in attempting to throw the person at the incoming calamity? Whereas the detached action of flipping a switch or throwing a piece of meat is something we know we can do without any direct confrontation?
    If we need to have an altercation with a bigger man, is it possible they may just throw us down instead? Is that something our minds are inherently aware of?

    Another dimension I can see to this problem is the idea of a ruthless society that Ebon raises. The situation is that you are then a person who could viably be sacrificed to save others, whereas in the thought experiment you are conveniently the detached observer making the decision with no direct impact on yourself other than your sense of morality.
    What if you were the lone individual, or the big guy – would you call out for the switch to be thrown or the meat to be laid out? Many people, I’m sure, wouldn’t be keen on that, as our own life is inherently valuable to us (I do accept some people would go for this over sacrificing an unknown individual though).

  • Polly

    Would anyone accept simply not acting as a moral option? After all, even though the near-term effect is to save 5 people over 1, we really don’t have any information about the future potentialities of all involved. What if the lone guy down the road holds a vitally important solution to some problem, or a cure to a deadly disease that could help millions? What if the 5 down the 1st road are members of a death-cult bent on world dominion?

    Dramatics aside, my question is: CAN our inability to see the ultimate consequences of any set of scenarios be justification for inaction? Ever?

    ST Said:

    I am still troubled by the fact that the end result is the same, and that it could be construed that, given that we knew the man would die, it is not so much a side effect as it is using the man as a means, albeit indirectly.

    I disagree that we are using the man even indirectly. His presence in the 2nd ravine is wholly unnecessary to our objectives. The end result may be that his life is sacrificed, but then again, he may successfully evade the lions. And if he does, WE ARE NO WORSE OFF in terms of what we wanted to accomplish with the 5. Whereas if we push the tender man out of the tree and he evades the lions, our attempts to save the 5 are thwarted.

  • ST

    Polly,

    The answer is no, inability to fully see the consequences is not an excuse. There is no point at which we are ever able to see the ultimate consequences. Should we spend our entire lives paralyzed by fear?

    There’s no evading the lions. They are hungry and fast. (In reality, they can even climb trees)

    Mac,

    I think you are closest to the truth. It’s all about emotion. In the direct sacrifice scenario we’d have to carry with us for the rest of our lives the look of shock and reproach, and the dying screams of the person we directly throw down, whereas if we flip a switch, we don’t have to look into anyone’s eyes.

    Do you think the pilots of the Enola Gay who dropped Little Boy on Hiroshima could have killed 140,000 people (men, women and children) by hand, one at a time, with a butchering knife, without going crazy with guilt and nightmares? Flipping a switch and seeing a mushroom cloud is different. Intellectually, they knew what they were doing, but since death and disembowelment were not directly visible to them, they did not feel much guilt.

  • Polly

    There’s no evading the lions. They are hungry and fast. (In reality, they can even climb trees)

    You missed my point, which is that the single man’s death is NOT a means to any end. It’s not the man’s death that serves the purpose, whereas if we push the tender man out of the tree, it is his death that serves our purpose. The two scenarios are not the same even if the end result ends up the same. (If that’s true about lions, that’s interesting.)

  • Ric

    Good comments, though I think you miss the point of the evolutionary exclamation. We don’t need to have encountered a train in our past to realize it’s dangerous. Evolution made us capable of learning what is dangerous. That criticism doesn’t fly.

  • http://www.johnnysstew.com/cool/coolwet J

    I hate these sort of “morality problems”. They always strike me as the sort of things posed to students by smug philosophy professors who have decided beforehand to mock any conclusion that the students as a whole come to, supposedly in the name of “critical thinking” but actually in a vain show of intellectual superiority.

    It’s problems like this that demonstrate the uselessness of supposedly all-encompassing value “systems” of the sort we in the West and Near East have been arguing uselessly over for 3500 years. It’s enough to make you desperately want to just dunk your face in a sink of cold water and forget the whole thing.

  • RB

    I’ve been reading this site for a couple of months now, and it has become one of my favorites, but this is my first post.

    I think ST got closest to the point of what drives our perception of this situation. In the end it boils down to not just emotion but, more specifically, empathy. Because we can so closely relate to the other person in the tree or on the bridge it makes it harder for us to rationalize putting that person in a situation where they will be killed. On the other hand if we can detach ourselves from the other person, we do not experience this sense of empathy that is so ingrained in our evolutionary past.

    A test of this idea might be (at risk of making this hypothetical situation a little ridiculous) to imagine the train scenario with the man standing next to you as well as the switch option. In this case, however, there are two or three people on the other track. I would be curious to see how many people could rationalize throwing the man next to them over throwing the switch or perhaps doing nothing at all as Polly has suggested. Just a thought.

    For a good review of empathy in humans and primates and how they relate, see “The empathic brain: how, when and why?” Trends in Cognitive Sciences 10 (2006): 435–441.

  • Ric

    Whoops. I meant “evolutionary explanation” above, not “evolutionary exclamation”.

  • OhioAtheist

    I’m afraid I’ll have to disagree. It seems to me that when you think of moral actions in terms of “intent,” you abandon consequentialism, and therefore utilitarianism. (Unless you don’t think that utilitarianism need be consequentialist, in which case I’d say you’re using a very unconventional and confusing terminology.) Hypothetical consequences carry no real moral weight, because they hurt no one; if they did, they would cease to be hypothetical and become actual consequences. Obviously a society such as the one you describe at the end would not be a happy one, but one person pushing one fat man into one trolley’s path will not send us hurtling down the slippery slope to this hypothetical dystopia. The only likely consequence of any great significance is the bringing about of one death as opposed to five: Far from ideal, but the best possible outcome under any utilitarian analysis.

    Of course (as rule-utilitarians point out), humans are fallible, and rarely have the time or inclination to think out all the consequences of their actions, so general moral rules are necessary. “Don’t use people as a means to an end” is usually a good rule. But we’re not talking about the big picture here; in this particular situation, the best consequences will come about if the fat man is pushed.

    The reason people are queasy about pushing the fat man is not because they rationally discovered the doctrine of double effect; on the contrary, the doctrine was invented as a means to sanctify preexisting moral intuitions against the assault of consequentialist theories. However, I see no reason why those intuitions should be counted as a secure foundation for moral precepts. They are not universal (I have little trouble ignoring them when they seem unreasonable) and are au fond arbitrary holdovers from our haphazard evolutionary past–indeed, as the doctrine of double effect demonstrates, it takes some rationalizations to patch over their inconsistencies.

  • ex machina

    It’s problems like this that demonstrate the uselessness of supposedly all-encompassing value “systems” of the sort we in the West and Near East have been arguing uselessly over for 3500 years. It’s enough to make you desperately want to just dunk your face in a sink of cold water and forget the whole thing.

    I disagree. Philosophy is not something that only “Philosophers” do in their far-off ivory towers. You use systematic rational thought every day to get to and from work or the grocery store. Moral problems like this just have more steps. Maybe you have the patience to muck through it, and maybe you don’t, but that does not make it less important.

  • http://off-the-map.org/atheist/ Siamang

    Dawkins looks at these “trolley problems” in The God Delusion.

    He mentions a study where researchers asked people all over the world similar problems, in some cases changing the problem to something about being in a fishing boat and seeing sharks, to bridge cultural understanding of trolleys.

    He talks about how the proportion of people who will and won’t say that sacrificing the fat man is acceptable is about the same across all cultures.

    This, he says, points to an evolutionary basis for this moral decision, not a cultural or philosophical one.

    IIRC, he argues that this decision happens unconsciously: most people instantly know that it’s wrong to push the fat man off the bridge, but if you ask them WHY, they have a very hard time articulating it logically.

    He says this is because split-second, live or die decisions had to be made with respect to the survival of the whole tribe all the time in our evolutionary past. These could not be trusted to our higher functions.

    The mathematics and game theory of evolutionarily stable strategies and Nash equilibria are hard-wired into the survival centers of our brains.

    Or so go the arguments.

    This is a fascinating part of the book The God Delusion that I’ve not heard one peep about from reviewers.

  • OhioAtheist

    This is a fascinating part of the book The God Delusion that I’ve not heard one peep about from reviewers.

    Well of course, they’re too busy harping on his “theological unsophistication” to focus on the other 70% of the book.

  • http://asthewormturns.com dpoyesac

    In a recent Nature article, two neuroscientists tested the effect of brain damage on moral judgments using this scenario as one of the test cases. Here’s the link.

    It turns out that humans use at least two separate mechanisms in cases like these. One is a rational utilitarian mechanism ranking outcomes according to effects, and the other is a purely emotional response. The utilitarian module ranks saving five as better than killing one. But when you yourself are directly involved — by pushing the large man rather than just flipping the switch — the emotional centers get involved and find the act unacceptable.

    Michael Koenigs and Liane Young compared the moral judgments of people with brain damage (to an emotional center called the ventromedial prefrontal cortex) to the judgments of people with healthy brains. Healthy-brained people took it to be wrong to push the large man off the bridge while it was acceptable to pull the switch. Those with VMPC damage ranked pushing the large man to be morally equal to pulling the switch.

    To me this indicates that our moral judgments are the result of our evolutionary heritage — so Dawkins is right on this. Of course, it is a common fallacy to assume that we can naïvely get our values from our facts, so this shouldn’t be taken as proof that it really is or isn’t morally acceptable to push large men off bridges. That requires further argument.

    If I remember correctly, the original thought experiment came from Judith Jarvis Thomson, a philosopher famous for her contributions to the analysis of the morality of abortion.

    ex machina:
    Thanks for the defense of the relevance of philosophy.

  • Jim Baerg

    ST said: “I disagree with Stuart as to the nature of the question. It is not hypothetical at all. Politicians and health care staff have to deal with similar questions on an almost daily basis. If I administer this vaccine to 10 million people, so many people will die of adverse reactions and allergies. If I don’t, there is a chance nobody will die, and a chance that a lot more people will die. What do I do? Do I administer the vaccine? It is a very pertinent scenario, unfortunately.”

    In the case of the vaccine, it’s a statistical thing, and we don’t know beforehand *which* people will die in each case. Does that make a difference to the morality of it?
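    For instance, with some invented numbers (all the rates below are assumptions purely for illustration, not figures from the discussion), the trade-off might look like this in Python:

    population = 10_000_000

    # Vaccinate everyone: a small, statistically predictable number die of
    # adverse reactions (assumed rate: 1 death per million doses).
    deaths_vaccinate = population * 1e-6                # 10 expected deaths

    # Don't vaccinate: deaths depend on whether an epidemic happens at all
    # (assumed 5% chance) and how lethal it is (assumed 1 death per 1,000).
    deaths_no_vaccine = population * 0.05 * 1e-3        # 500 expected deaths

    print(deaths_vaccinate, deaths_no_vaccine)
    # Vaccinating kills far fewer people in expectation, but those deaths are
    # certain consequences of our action; not vaccinating might, with luck,
    # kill no one at all. That is the statistical wrinkle in the dilemma.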

  • Polly

    I think applying evolutionary thought to morality is at best really fuzzy logic. This kind of scenario couldn’t possibly have come up enough times to affect the survival rate of our ancestors. A whole species’ survival… or the propagation of one family line’s genes… dependent on some moral twitch in the brain? I seriously doubt it. I think morals come from our higher, post-evolved brain functions.
    Now that we’re aware, we care.

    Having said that, I’d love to hear information-packed contrary views.

  • http://off-the-map.org/atheist/ Siamang

    Polly, did you read my post above about the similarity of answers to these moral dilemmas across cultures?

    Read this article about Marc Hauser at Harvard:

    http://www.americanscientist.org/template/InterviewTypeDetail/assetid/52880;jsessionid=baa9

    He describes the outlines of such a system. I don’t know if the continuing research will bear out his hypothesis, but it is a compelling one to say the least.

    This kind of scenario couldn’t possibly have come up enough times to affect the survival rate of our ancestors.

    Of COURSE it could have. We’re talking millions of years of hunting parties. We’re talking millions of years of Caveman Ogg holding a club defending the mouth of the cave. Three children are inside the cave, and a bear is coming… You COULD push a tribal elder out, or one of the children, and that might satisfy the bear long enough for the others to escape. What do you do?

    We know that chimps can make this kind of calculation. We see similar behavior in gorillas as well.

    If it were something we reasoned through, we SHOULD see vastly differing results from culture to culture, shouldn’t we? Hauser says that we don’t, and Dawkins says that points to a biological origin rather than a philosophical one.

  • http://www.patheos.com/blog/daylightatheism/ Ebonmuse

    It seems to me that when you think of moral actions in terms of “intent,” you abandon consequentialism, and therefore utilitarianism.

    Perhaps that’s true of other forms of utilitarianism, but not of mine. Intent matters greatly because intent, in the long run, is what is ultimately responsible for almost all human happiness and suffering.

    Hypothetical consequences carry no real moral weight, because they hurt no one…

    That’s incorrect, OhioAtheist, and that is precisely the point. Hypothetical consequences do carry moral weight, precisely because we have to consider the possibility that other people can and most likely will find themselves in similar situations in the future. The whole underlying point of universal utilitarianism is that a given course of action (e.g., not killing someone as a means to an end) produces better results in the long run if it is consistently followed, even though we might think an immediate gain could be realized by breaking it on certain specific occasions.

    What we’re dealing with here is Prisoner’s Dilemma logic. On any one occasion, I can realize an immediate gain by backstabbing my buddy; but in the long run, that strategy is a loser, because it’ll end up in spirals of mutual defection and betrayal. To truly do well, I have to participate in a strategy of cooperation. If we value the overall result and not just temporary short-term gain (as any rational person would), then we have to learn to cooperate and ignore the temptation of selfishness.
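    To see that logic run, here is a minimal sketch of the iterated Prisoner’s Dilemma in Python. The payoff values are the conventional ones, and the particular strategies are illustrative assumptions, not part of the argument above:

    # Standard Prisoner's Dilemma payoffs: temptation 5, reward 3,
    # punishment 1, sucker 0. 'C' = cooperate, 'D' = defect.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def always_defect(opponent_moves):
        return 'D'

    def tit_for_tat(opponent_moves):
        # Cooperate first, then copy whatever the opponent did last round.
        return opponent_moves[-1] if opponent_moves else 'C'

    def play(strategy_a, strategy_b, rounds=100):
        seen_by_a, seen_by_b = [], []   # each side's record of the other's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    # Defection wins any single round (5 vs. 0), but over many rounds mutual
    # cooperation (3 per round) beats the spiral of mutual defection (1 per round).
    print(play(tit_for_tat, tit_for_tat))      # (300, 300)
    print(play(always_defect, tit_for_tat))    # (104, 99)
    print(play(always_defect, always_defect))  # (100, 100)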

    Also, for Siamang:

    If it were something we reasoned through, we SHOULD see vastly differing results from culture to culture, shouldn’t we?

    No – because reason is universal. If a course of action is the rational one, then we would expect all rational people to hit on it and be able to justify it. Whether double-effect reasoning came about through evolution or not is entirely a different question from whether it’s a good idea, and the answer you give to one of those questions has no necessary bearing on the answer of the other.

  • bassmanpete

    Don’t do anything – it’s their own stupid fault for standing on the line anyway! Let the truck do its worst and propose the 5 for the Darwin Award :)

  • OhioAtheist

    The whole underlying point of universal utilitarianism is precisely that a given course of action (i.e., not killing someone as a means to an end), produces better results in the long run if it is consistently followed, even though we might think an immediate gain could be realized by breaking it on certain specific occasions.

    Here is where we disagree. You’re making an unwarranted jump from “This rule will work very well in most situations” to “We should always follow this rule.” However, if the purpose of the rule is to arrange for the best consequences (most happiness/least suffering), then it makes perfect sense to abandon it when it doesn’t live up to that purpose. In fact, the ultimate rule that we should consistently seek to follow–and the rule that should underlie all other rules–is to do that which will have the best consequences.

    Now as I said, secondary rules are necessary, because without them moral reasoning would be very messy and inefficient. But this does not mean that we can’t adjust the rules when specific situations warrant it. For instance, “Don’t kill people as means to ends” is a good rule on the intuitive, everyday level of moral reasoning, but in the case of the trolley it leads inexorably to worse consequences. On a higher level, therefore, it is entirely possible to qualify the original rule, allowing for some exceptions. And the delineations can continue all the way up the ladder of moral specificity until we reach a moral code that allows for the best consequences in all circumstances.

    Due to our limitations, we can’t climb all the way up that ladder, but that’s no reason to cling with all our might to the bottom rungs. When we find that a moral rule–even such an important one as “Don’t kill”–will not lead to the best consequences in a given situation, we have all the warrant we need to say “That rule works most of the time, but not this time, and therefore we will allow a different rule for this set of circumstances.” It’s very much like how classical physics, which works so well in the macroscopic everyday world, becomes obsolete as we peer into the atomic and subatomic world. As physics requires new rules as we “zoom in,” so does morality need more sophisticated precepts to fit circumstances that can and do differ.

  • Mrnaglfar

    When it comes to that scenario, it’s a lot more complicated than that. A few quick points:

    1) Most people would agree pushing the large man in front of the train is wrong. A possible explanation for this is that it involves placing someone in the situation who wasn’t there to begin with. In the first case, by pulling the lever, someone who was already on the tracks dies; in the second case, you place someone in front of the train. This is one of the reasons for the emotional response.

    2) Modify the question now, to get a different response: 5 strangers on one track and your mother on the other track. How many would now throw the switch and kill their mother in order to save 5 strangers (providing, of course, you like your mother; if not, replace her with the person you care for most)?

    3) On the second point, we can see how people’s morals can be manipulated. In that case it was by personalizing the question, but really, what it comes down to is how the dilemma is worded. Studies tend to show people are more or less apt to take risks depending on the wording. Paraphrased roughly from the book “The Paradox of Choice”: the study put people into two groups. In both scenarios, they are doctors in a village in which the people are dying; for the sake of argument, the population is 100. For the first group, the problem roughly reads “if you do X, you will save 60 people; if you do Y, you have a 33% chance to save everyone and a 67% chance to save no one”. In this situation, most people will pick the first choice. For the second group, the question reads “if you do X, 40 people will die; if you do Y, you have a 33% chance that no one will die and a 67% chance that everyone will die”. In this case, most people now choose the second option. Same question, same outcomes, but a change of wording makes people make a different choice. And this choice doesn’t even involve dragging anyone else into the situation who wasn’t already involved.

    In short, changing the wording from “save” to “die” shifts the line of thinking between a gain and a loss.
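    A quick calculation with those numbers shows that the two framings describe exactly the same options; the code below is just that arithmetic spelled out:

    population = 100

    # Framing 1: "do X and you will save 60" is the very same outcome as
    # framing 2: "do X and 40 will die".
    saved_for_sure = 60
    die_for_sure = population - saved_for_sure     # 40

    # Option Y in both framings: 33% chance everyone lives, 67% chance no one does.
    expected_survivors_gamble = 0.33 * population  # 33 expected survivors

    print(saved_for_sure, die_for_sure, expected_survivors_gamble)
    # The options are identical under either wording; only the "save" vs. "die"
    # framing changes which one people tend to pick.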

  • Mrnaglfar

    And to follow up, consider some more questions:

    In regard to my second point, what if the train is coming towards your mother, and by flipping the switch you would kill the other 5, rather than the other way around, where not acting would save your mother’s life?

    What if 5 of your closest friends or family members were on that track, and the single person had the cure for AIDS?

    What if pushing that stranger in front of the train could save 10 lives instead of 5, or 20 lives, or 100? At what point is it still wrong?

  • Mac

    After some more thought on this, I think I can add a bit more to my argument on direct/indirect action.

    First off, though, I don’t believe the death of the individual could be called “unintended”, since it is known about in each case. I would take “unintended” to apply more to an increased risk of killing an innocent bystander, or to accidental death. The individual is clearly part of the moral question here, so if their death is unintended in one case, it should be in both.

    My thought would be (and this is heavily taking into account some of the comments above) that morally we accept the idea that sacrificing the individual is the more “right” course of action. However, when required to take direct action against someone, some of our evolved morality kicks in: basically the “do unto others” instinct, which I propose manifests itself as horror or shock at such a thought.

    This would serve the same function as the double effect that Ebon proposes, in that it aims to prevent moral outcomes that inevitably produce more overall suffering (which it has in the past, but may not in the future). This is the same reaction that convinces us stealing/murder, etc is wrong at an intuitive level because the same could be done back to us.

    The scenario where we throw a switch to condemn one person to death does not trigger this part of the brain since it doesn’t recognise the concept of switches or interpret secondary effects of actions even if there is intent.

  • http://off-the-map.org/atheist/ Siamang

    Whether double-effect reasoning came about through evolution or not is entirely a different question from whether it’s a good idea, and the answer you give to one of those questions has no necessary bearing on the answer of the other.

    Correct, Ebon. Sloppy construction on my part there.

  • John Gathercole

    One thing you missed, Ebonmuse, is that the difference between the lever and the fat man is quantitative, not qualitative. More people think pushing the fat man is wrong, but some people think it’s right. Likewise, some people think pulling the lever is wrong, because you are killing a person who would not have been killed if you had not intervened, and you don’t have the right to play God with that person’s life.

    Likewise, imagine a third scenario, where you are required to strangle or eviscerate a fat man to stop the trolley. More people would probably say this is wrong than simply pushing the fat man, but under the doctrine of double effect they should be the same. I agree with the explanation you originally proposed; it’s about the aversion to direct killing.

  • SM

    I don’t think your example of military ethics is as clear-cut as you suggest. What if the terror will do more to end the war than the attack on ‘military’ targets? Arthur Harris could sleep at night because he firmly believed that a bomber did more to end World War II by attacking the easy target of civilian homes than the hard target of roads and factories. Is dropping a bomb on a power plant, expecting to kill about 100 people directly or indirectly, less wrong than throwing a grenade into a crowd, expecting to kill about 10 people, if both are meant to win an equally just war as efficiently as possible? Intent clearly matters somewhat, but so does effect. You cannot renounce responsibility for a known evil consequence of an action by calling it a side effect of some other good consequence which makes up for it, IMHO.

  • rob

    What if you have the choice of pushing the fat man or throwing the switch? Even more vaguely, what if you have the option of saving five people, and saving either one of two individuals? In either case, does it matter who the individuals are?

  • penn

    I came back in time to this post from your Happiness Machine post, and I hope others follow. I think you are wrong about pushing the man onto the tracks. Your intent is not to kill him. Your intent is to stop the train. It’s the same as in the self-defense scenario. In order to defend yourself you must kill your attacker. In the trolley scenario, in order to save 5 lives you have to kill the fat man.

    I think the two scenarios are the same morally, and the only difference is our visceral reaction. The idea of actually pushing a person to their death is chill-inducing. The idea of actually using someone’s corpse to stop a train is also unseemly. The idea of pulling a lever and getting someone hit by a train is much tidier emotionally.

  • Tony

    I’ve also come in late from the Happiness Machine post. The doctrine of double effect seems plausible as an explanation, but it occurs to me that the wording of these dilemmas, as they are presented here, leaves open at least 2 alternative explanations that should be considered.
    1) If you push the fat man he will immediately react with horror, so that, in addition to causing his death, you are also causing him a few moments of additional suffering with which we can all easily empathize. That is something that most of us will intuitively grasp as soon as we start to imagine the scenario, such that it doesn’t need to be explicitly spelled out in the description of the dilemma. However, the wording of the first dilemma allows us to assume that the single person on the other track may be completely oblivious to what is about to happen. This creates the possibility that in some people’s minds the outcomes of the two situations are not identical after all.
    2) In the second situation, although the possibility of throwing yourself in front of the train is explicitly ruled out as ineffective, in our mental model of the situation it still exists as a possible course of action. I suspect that in many people’s minds this would, on some primal level, be seen as the most heroic, courageous, and therefore correct thing to do, even though it would not help in any way. But we are not permitted to give that as our answer. This may create a situation in which pushing the fat man represents an act of cowardice, because we are forcing him to do something that we feel we really should be doing ourselves.

  • Dark Jaguar

    I think there’s a 3rd option, but it all depends on how far away these people are. That’s usually my answer to these sorts of moral dilemmas: break the question itself.

    Simply put, has anyone considered just shouting out to the people on the tracks to move out of the way? Or, maybe you can’t stop the train by throwing yourself in front of it, but are you massive enough to shove a victim off the track yourself? Even lacking knowledge on if these would work, wouldn’t they be the preferred moral solution? Also, given the idea that perhaps they could move or be moved out of the way, I’d likely redirect the tracks, but not shove the fat man, as in the first case I’m only doing so as a backup measure and to make the job of saving the one in the path of the train easier and more likely.

    I suppose it’s worth noting that a fat man would have to be truly reprehensibly obese to stand any chance of stopping a runaway train? I know that’s not the point of the question but it’s the first thing I think about when it’s presented to me.

