Why I’m not quite a consequentialist (a reply to Scott)

Via this post, Scott Alexander has alerted me to the fact that he’s not only the author of a non-libertarian FAQ, but also the author of a consequentialist FAQ. I can’t believe I wasn’t aware of it before, but now I’m going to write a response.

Let me start by saying that I agree with the “consequentialist” position on many moral questions. I agree with the basic idea of effective altruism that if you care about helping other people, you should try to figure out what actions on your part will actually help the most, rather than just trying to do vaguely “good” things. For example, don’t just donate to whatever causes are in the news; use organizations like GiveWell to figure out where your charity dollars will do the most good. (Not that I think GiveWell is the final word on which charities to give to, but they’re a good place to start.)

I also agree with the “consequentialist” answer to Eliezer Yudkowsky’s very clever torture vs. dust specks moral dilemma, at least if the problem is framed in the right way. The basic idea is that, given a choice between saving one person from being tortured for decades and preventing N people from getting dust specks in their eyes, for a sufficiently inconceivably huge N I’d want to prevent the dust specks. (Eliezer uses 3^^^3 in Knuth’s up-arrow notation for the N in his framing of the thought experiment.)

I know at this point many people will already be thinking I’m completely crazy, so here’s an argument: if you had a dollar which you could either spend on (1) Dust Speck Guard, guaranteed to save you from one dust speck in the eye, or (2) Torture Guard, guaranteed to eliminate a 1 in 3^^^3 chance of being tortured, which would you choose? I think if you understand how inconceivably small a chance “1 in 3^^^3” is, if you understand that it’s effectively zero, then you’ll understand that Torture Guard is basically guaranteed to do nothing and you may as well go with Dust Speck Guard.

If it’s not obvious that you should choose Dust Speck Guard over Torture Guard, ask yourself how much money you’d pay to eliminate a 1 in a trillion chance of being struck with a disease that would cause severe, chronic, untreatable pain but not kill you. If you’d pay any significant money at all for that, ask yourself if that’s really consistent with your actual behavior in terms of being willing to take small risks like driving a car or going swimming. And then realize that 3^^^3 is much, much larger than one trillion.
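If it helps, here’s the expected-value arithmetic behind that intuition as a minimal sketch in Python. The disutility numbers are pure illustration on my part (one dust speck = 1 unit, decades of torture = 10^100 units, which is absurdly generous to Torture Guard); the point is just that a 1 in 3^^^3 probability swamps any remotely plausible gap in badness:

```python
import math

# A minimal sketch of the expected-value comparison. The disutility
# numbers are illustrative assumptions: one dust speck = 1 unit, decades
# of torture = 10**100 units (absurdly generous to Torture Guard).
#
# 3^^^3 is far too large even for Python's bignums, so we use a loose
# stand-in bound: 3^^^3 > 10**(10**15), hence P(torture) < 10**-(10**15).
log10_speck_ev = math.log10(1.0)  # certain benefit of Dust Speck Guard
log10_torture_ev = 100 - 10**15   # log10(10**100) + log10(10**-(10**15))

# Dust Speck Guard wins by an astronomical margin.
print(log10_speck_ev > log10_torture_ev)  # True
```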

Multiply the conclusion out over a civilization of 3^^^3 people, and you have an argument for choosing torture over dust specks. Maybe you’d rather give each person in the civilization a choice of Dust Speck Guard or Torture Guard, but if for some reason you had to make a single choice for everybody, I think you’d choose to prevent the dust specks and allow the torture.

With that preamble, I’m going to jump into the middle of Scott’s FAQ, at section 4, where Scott starts talking about consequentialism (warning: long blockquote ahead. Feel free to skim!):

4.1: Sorry, I fell asleep several pages back. Remind me where we are now?

Morality is derived from our moral intuitions, but until these intuitions reach reflective equilibrium we cannot completely trust any specific intuition. It would be neat if we could condense a bunch of moral intuitions into more general principles which could then be used to decide tricky edge cases like abortion where our intuitions disagree. Two strong moral intuitions that might help with this sort of thing are the intuition that morality should live in the world, and the intuition that other people should have a non-zero value.

4.2: Oh, good. But I’m probably going to fall asleep again unless you derive the moral law RIGHT AWAY.

Okay. The moral law is that you should take actions that make the world better. Or, put more formally, when asked to select between several possible actions, the more moral choice is the one that leads to the better state of the world by whatever standards you judge states of the world by.

4.21: That’s it? I went through all this for something frickin’ obvious?

It’s actually not obvious at all. Philosophers call this position “consequentialism”, and when it’s phrased in a slightly different way the majority of the human race is dead set against it, sometimes violently.

4.3: Why?

Consider the following moral dilemma, Philippa Foot’s famous “trolley problem”:

“A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?”

This tends to split the philosophical world into two camps. The consequentialists would flip the switch on the following grounds: flipping the switch leads to a state of the world in which one person is dead; not flipping the switch leads to a state of the world in which five people are dead. Assuming we like people living rather than dying, a state of the world in which only one person is dead is better than a state of the world in which five people are dead. Therefore, choose the best possible state of the world by flipping the switch.

The opposing camp, usually called deontologists, work on a principle of always keeping certain moral rules, like “don’t kill people”. A deontologist would refuse to flip the switch because doing so would make them directly responsible for the death of one person, whereas not flipping the switch would make five people die in a way that couldn’t really be traced to their actions.

Actually it’s not at all true that the trolley problem divides the world into consequentialists and deontologists. While, generally speaking, the consequentialists are going to say you should throw the switch, many non-consequentialists agree with that much, or at least they agree that you may throw the switch.

Judith Jarvis Thomson, in one of the first articles written on the trolley problem, reported that everyone she described the case to agreed you may throw the switch. When the PhilPapers survey came out, it reported that 68% of professional philosophers favored throwing the switch; I was surprised the number was not higher. And I’ve read that in psychological experiments on the trolley problem, most people favor throwing the switch there too; I can’t find a good source at the moment, but I think I’ve read figures of something like 75%.

So most people favor throwing the switch, but as Scott notes, most people aren’t consequentialists. In fact, when Thomson and Philippa Foot (Foot being the philosopher who originally came up with the problem) first discussed the trolley problem, they did so trying to account for the intuition that it’s OK to throw the switch in non-consequentialist terms. (Note that I say “non-consequentialist” rather than “deontologist” for reasons I explain here.)

To find the real difference between consequentialists and most non-consequentialists, we need to look at a different moral dilemma known as the “fat man,” which Scott also discusses (warning: another long blockquote!):

4.4: What’s wrong with the deontologist position?

It violates at least one of the two principles discussed above, the Morality Lives In The World Principle or the Others Have Non Zero Value principle.

There are only two possible justifications for the deontologist’s action. First, ey might feel that rules like “don’t murder” are vast overarching moral laws that are much more important than simple empirical facts like whether people live or die. But this violates the Morality Lives In The World principle; the world ends up better if you flip the switch, so it’s unclear exactly what is supposed to end up better by not flipping the switch except some sort of ghostly Ledger Of How Much Morality There Is.

The second possible justification is that the deontologist is violating the Principle of According Value to Others by taking the action that will minimize eir own guilt – after all, ey could just walk away from the situation without feeling like ey had any part in the deaths of the five, but there’s a clear connection between eir flipping the switch and the death of the one. Or ey might be engaging in moral signaling; showing that ey are so conspicuously moral that ey will not harm a person even to save five lives (no doubt ey would be even happier if ey only needed to cause one stubbed toe to save five lives; in refusing to do this ey could look even more sanctimonious.)

4.5: Well, your answer to the trolley problem sounds reasonable.

Really? Let’s make it harder. This is a variation of the Trolley Problem called the Fat Man Problem:

“As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?”

Once again the consequentialist solution is to kill the one to save the five; the deontologist solution is to refuse to do so.

4.6: Um, I’m still not sure pushing a fat guy to his death is the right thing to do.

Try to analyze where the reluctance is coming from, and decide whether all your moral intuitions, in full reflective equilibrium, would approve of that source of reluctance.

Are you unsure because you don’t know if it’s the best choice? If so, what feature of not-pushing is so important that saving four lives doesn’t make pushing obviously better?

Are you reluctant because you’d feel really bad afterwards? If so, is you not feeling bad more important than saving four lives?

Are you unsure because some deontologist would say that by eir definition you are no longer “moral”? But anyone can use any definition for moral they want – I could start calling people moral if and only if they wore green clothes on Saturday, if I were so inclined. So if any deontologist refuses to call you moral just because you pulled the lever, an appropriate response would be to tell that deontologist to @#$& off.

Are you unsure because some vast cosmic clockwork would tick and note that the moral law had been violated in such and such a place by such and such an unworthy human? But we have no evidence that such cosmic clockwork exists (see: Principle of Morality Must Live In The World) and if it did, and it was telling us to let people die in order to prevent it from ticking, an appropriate response would be to tell that vast cosmic clockwork to @#$& off.

Frances Kamm, popular deontologist writer, said that pushing the fat man on the track, even though it would prevent people from dying, would violate the moral status of everyone involved, and concluded that people were “better dead and inviolable than alive and violable”.

As far as I can tell, she means “Better that everyone involved dies as long as you follow some arbitrary condition I just made up, than that most people live but the arbitrary condition is not satisfied.” Do you really want to make your moral decisions like this?

Cases like the fat man are where most people part ways with consequentialism. I agree that the reasons for not pushing the fat man in 4.6 are not very convincing, so I’ll ignore 4.6. Instead, let me address 4.4, where Scott tries to summarize the general argument for taking the consequentialist position on cases like the standard trolley problem and the fat man.

I don’t for a moment dispute that, all else being equal, it’s better for just one person to die than for five people to die. It would be heroic if the one man threw himself in front of the trolley, saving the five. I think I’m safe from the accusation of violating the “Principle of According Value to Others” on that score.

What I’m squeamish about is the idea that we can always force such trade-offs on other people; that we can always decide to sacrifice one other person to save the five (even assuming roughly equal quality of life, life expectancy, etc. for all the people involved). Maybe here Scott would accuse me of violating the “Morality Lives In The World Principle”; I’m less sure I’m safe from that accusation.

On the other hand, I’m not sure how that argument would go, and I’d also note that Scott seems very gung-ho about moral intuitions (more gung-ho, perhaps, than I am). It seems like most of us have a pretty strong moral intuition against pushing the fat man. So it seems like we should expect a pretty strong argument before setting aside our anti-fat-man-pushing intuition, especially from Scott’s enthusiastic-about-moral-intuitions perspective.

On a similar note, if it’s simply a question of which harm to prevent, I’d happily choose to prevent 3^^^3 dust specks in the eye rather than decades of torture for one person, but I probably wouldn’t torture one person for decades as a means of preventing the 3^^^3 dust specks. And I wouldn’t want to turn on a superintelligent AI if I knew it was going to kill everyone on earth and then start over creating utopia, even if I knew that ultimately the utopia would be better than what would otherwise come about.

And on a semi-related note: back when I was arguing with Scott about his “noncentral fallacy” (see here and here), one thing I meant to say but never got around to saying is this. Scott asked if I was a consequentialist or a deontologist. Apart from disliking the latter term, I don’t think the answer matters. What matters is that few ordinary people are consequentialists, so when you analyze the arguments of ordinary anti-abortion folks, your analysis has to start from the fact that they’re probably making non-consequentialist assumptions.

In particular, most people think killing people is generally wrong. They allow for some exceptions: self-defense, usually; killing enemy soldiers in a just war, usually; probably some amount of unintentional killing of civilians in a just war. Some people think capital punishment is OK, but that’s controversial. People who’ve studied moral philosophy might make a distinction between the effects of our actions that are intended versus those that are merely foreseen.

Regardless of the exact exceptions they make, however, few people take the consequentialist view that “any time you can get a better result by doing so” is a valid exception to the “don’t kill people” rule.

And if you accept the premise that a fetus is a person, and that abortion should be understood as a deliberate killing (rather than something more akin to refusing to stay connected to the famous violinist), and you have something like the usual view of the morality of killing people, then you’re going to conclude that abortion is the killing of a person and, since it probably doesn’t seem remotely like any of the standard exceptions to the “don’t kill people” rule, that it is wrong.

It’s the fact that it misses all of this that makes me think Scott’s “noncentral fallacy” stuff is a poor analysis of the reasons people have for opposing abortion. I mention this partly because the very first post of Scott’s linked above is very clear on the fact that most opponents of abortion are not consequentialists, which leaves me somewhat puzzled.

  • http://gamesgirlsgods.blogspot.com/ Feminerd

    And if you accept the premise that a fetus is a person, and that abortion should be understood as a deliberate killing (rather than something more akin to refusing to stay connected to the famous violinist), and you have something like the usual view of the morality of killing people, then you’re going to conclude that abortion is the killing of a person and, since it probably doesn’t seem remotely like any of the standard exceptions to the “don’t kill people” rule, that it is wrong.

    What most people forget is that we value bodily autonomy more than we value life. They are applying different standards to women and fetuses than they do to anyone else. Under no circumstances is anyone legally or morally obligated to donate blood, tissue, or organs to another person, even if that other person will die without said donation. What a fetus does is “borrow” the woman’s uterus and literally build itself out of her blood and nutrients. If a woman decides she does not wish to donate her blood, nutrients, and organ to the fetus, she can ethically remove it from her person. The fact that the fetus dies is not a killing qua killing, but rather a removal of donations the fetus was never morally entitled to.

    Does abortion count as killing another human being? Only as much as refusing to donate a liver lobe or bone marrow counts as killing another human being. Remember that we don’t even require corpses to donate tissue, preferring to respect the once-living person’s wishes over the current needs of living (but soon dead) persons. Any philosophical position that puts women’s rights to their own bodies below corpses’ rights is, um, ethically flawed to say the least.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      Just to be clear: this is intended as expanding on what I’m saying, not disagreeing with it? Because I think I basically agree with what you write.

      • http://gamesgirlsgods.blogspot.com/ Feminerd

        Nodnod. It is indeed intended to expand upon it and take a fuller look at (some of) the ethical implications of the abortion debate. I’m pretty sure we agree on this subject as well.

    • http://kagerato.net/ kagerato

      That’s all quite correct, and it defeats the whole FETUS = BABY argument. However, I am by no means willing to declare that a fetus is a person, and still think that is the most ridiculous premise in the whole “pro-life” argument.

      • http://gamesgirlsgods.blogspot.com/ Feminerd

        I don’t think a fetus is a person either. But I find this particular argument cuts through that whole line of argumentation and is very effective. When personhood begins is a difficult philosophical question that may never be answered to everyone’s satisfaction, and while it’s fun to go round in circles on it as a discussion topic, I’d rather go with effective arguments when dealing with human rights.

  • eric

    The basic idea is that, given a choice between saving one person from being tortured for decades and preventing N people from getting dust specks in their eyes…I know at this point many people will already be thinking I’m completely crazy, so here’s an argument: if you had a dollar which you could either spend on (1) Dust Speck Guard, guaranteed to save you from one dust speck in the eye, or (2) Torture Guard, guaranteed to eliminate a 1 in 3^^^3 chance of being tortured,

    The problems are not equivalent for a lot of reasons. In the first case, a guaranteed prevention is part of the package, while it’s not in the second (for any reasonable amount of dollars). In the first case, it’s a clear either/or choice; by phrasing the second version in dollars spent, people will very naturally and unconsciously consider opportunity cost and what else they could spend that money on (alleviating poverty, etc. – even if you tell them they can’t spend the money on anything else, some part of their brains will tell them “that’s a lot of money, so this is a bad deal, because think of the other things you could do with that”). Due to the non-equivalence, I think it’s perfectly rational for someone to pick no-torture in the first and mote-guard in the second.

    If it’s not obvious that you should choose Dust Speck Guard over Torture Guard, ask yourself how much money you’d pay to eliminate a 1 in a trillion chance of being struck with a disease that would cause severe, chronic, untreatable pain but not kill you. If you’d pay any significant money at all for that, ask yourself if that’s really consistent with your actual behavior in terms of being willing to take small risks like driving a car or going swimming. And then realize that 3^^^3 is much, much larger than one trillion.

    Here you’re making a very common assumption, which is that humans ought to have no preferences in terms of risk, or that any such preference is irrational and we should ignore it. That, e.g., we would happily play Russian roulette with a gun with ~18,000 chambers in order to reduce our commute time by 20 minutes a day, because that’s what we do when we get in our cars. Or that one shouldn’t oppose smoking if one eats peanut butter (its risk of lung cancer vs. liver cancer). Pragmatically, it’s simply untrue that we don’t have such preferences; we do. We do in fact care about how we might die as much or more than we care about the raw probability that we will die. Being able to choose how (or choose the likely range of hows) has significant value to a lot of people. Secondly and more troubling, you’re assuming that a rational ethical system must or ought to ignore such preferences (for how we might die). This seems like a really bad assumption. You are essentially ignoring the psychological payoffs altogether, skewing the payoff term in the ‘is this a good bet’ calculation. What makes an equivalent bet is not equivalent probability (p) or equivalent benefit (b), it’s equivalent p*b. And in none of the examples either you or I cite can you assume equivalent p*b.
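    A tiny worked instance of that p*b point, with numbers invented purely for illustration (nothing here is from eric’s comment):

    ```python
    # Two bets with a 500x gap in probability are still equivalent bets,
    # because the products p*b match. Numbers are made up for illustration.
    p1, b1 = 0.5, 10        # coin flip for $10
    p2, b2 = 0.001, 5000    # 1-in-1000 shot at $5000

    print(p1 * b1, p2 * b2)  # 5.0 5.0 -> equal expected value
    ```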

    One can perfectly rationally go hang gliding while choosing not to live in Denver because of the increased background radiation. Such a choice might not make much sense to me, eric, personally, but if one’s payoff from hang gliding is large and one’s payoff from avoiding excess background radiation is also large, then even though hang gliding is a far more risky activity, the choice can make rational sense to that individual. And that is why your mote/torture logic is, well, torturous.

    • http://kagerato.net/ kagerato

      Good points. Most moral systems are incomplete in the way you point out — they fail to fully account for people’s individual preferences and instead try to look at society in the aggregate. Yet preferences are basically everything that makes us what we are (and want to be).

  • Ray

    Things like the fat man scenario as objections to consequentialism have always struck me as a bit odd. There is a clear consequentialist argument against pushing the fat man off of the bridge: whether it’s irrational or not, people as they are currently constituted will feel less safe in a world where innocent people are pushed off of bridges with impunity, regardless of the reason. Considering such a freak event would be a huge news story, even this proverbial dust-speck in the eye would be multiplied across millions of news-viewers. Is this enough to offset the deaths of the five people on the tracks? I have no idea, but it’s certainly plausible.

    • http://kagerato.net/ kagerato

      Yes. Pragmatic feelings about general safety definitely have something to do with why we establish some flat rules in behavior and law. Purist consequentialists often fail to account for the unintended and indirect consequences in moral action, which is pretty ironic. Artificial philosophical scenarios don’t really show us much of this. We must think carefully about the real world to determine how guidelines like “maximize happiness” or “maximize the number of people alive” will actually affect people.

  • MNb

    There is a more extreme, clearer, and less artificial version of the trolley example. Say you’re a doctor. You have a patient, call him A, who is unconscious but will get better with care and patience. Now five victims of an accident come in. They badly need transplants or they will die. Patient A can provide all the organs and stuff. What will you do?

    • Eckswhyzed

      In thought experiment land, I would take patient A’s organs. But a world in which people are afraid to visit hospitals because they might get cut up against their will is worse than one in which you are more likely to die from the lack of a suitable organ donor. Plus there are real-world considerations like compatibility of organs, etc.

      The problem with these sorts of examples is that it’s very hard to separate the consequentialist position from the others without all sorts of contrived circumstances.

    • http://kagerato.net/ kagerato

      Triage. Excellent comparison. The trolley scenario was always so contrived. Medical examples are very real and very common.

      You won’t find many people willing to say they’re going to kill people to harvest their organs. Who wants to live in a society where lives are directly and involuntarily sacrificed to maximize the number of people alive, without regard to the quality of life?

      We can take a step further back, too. It’s not just medical emergencies. We could produce more effective medicines, and maybe even classes of medicines we don’t yet realize are possible, by doing human experimentation. Few people are going to sign up to be lab rats in dubious and dangerous trials, so we’re back again to forcing people to participate.

      • Eduardo

        They would accept if the pay is good.

  • L.Long

    The dust mote is silly. There is no way you can show that my buying an anti-torture thingy will be of any use.

    The 1st trolley problem is easy: THINK!!! Throw the switch halfway and derail the trolley; some get hurt but most likely none are killed.

    2nd trolley: if you think the fat guy will work, then throw yourself off the bridge. Trying to throw me off just means you will go 1st anyway, as I defend myself and throw you off.

    • David Simon

      Way to sidestep the point there, pal. These objections are solvable; just adjust the details. When you are left with only the two bad choices and are forced to pick, then you’re actually doing some moral training.

      • eric

        I believe Long’s point is that, in life, there are rarely only two bad choices and a forced pick between them.

        The fat man and trolley examples are, IMO, to ethics what “assume a spherical cow” is to physics: a simplifying assumption that can let students think about possible solutions or how to go about solving problems, but that probably has nothing valuable to say about real-life, practical decision-making. It’s too distant and too simplified to be useful when you’ve got actual cows and an actual truck with actual geometric space and balance issues to deal with. Analogously, fat man calculations have very little of value to say about real-life ethical decisions, which tend to have important complexities and a much greater range of possible responses.

        • David Simon

          I disagree. Simplified versions of complex problems are very useful when it comes to analysis; we can get at the core assumptions and principles of our ethical systems much more easily when there are fewer factors in play.

          However, I do agree that the fat man and trolley examples are pretty terrible; they posit a situation that’s just physically implausible. What human is fat enough to stop a train? And what real-life person could conclude, with such high confidence, that someone was fat enough?

          I like MNb’s surgical example below much better.

          • L.Long

            Yes, and as Eric said, there really are very few yes/no binary situations in real life. But if things do come down to such a binary decision, then one must also make the most important choice immediately, namely: DON’T LET ANYONE KNOW IT WAS YOU!!! Because in the two trolley examples you will be brought up on charges, or if you threw my fat brother off the bridge, I will be looking for you and you won’t like the results. See, there are NO binary choices in real life.

          • eric

            I like it too. But as multiple posters have pointed out, what it leads to is an understanding that long-term, indirect costs need to be considered, and aren’t by these simplistic toy situations. Which, again, I think supports L.Long’s point and mine: Long isn’t dodging the question; responding to such toy examples by pointing out that they are unrealistic is perfectly valid because the lack of realism means they provide little or no good insight into how to make ethical decisions in the real world.

  • http://kagerato.net/ kagerato

    The weirdest thing about the trolley pro-consequentialist argument, in my view, is that it completely ignores long-term efficacy and side effects. Even supposing we assume pushing the fat man is equally effective as switching the tracks (it’s totally not, and that’s just ridiculous on its face), how many times is this trick supposed to work in reality before it breaks down? Do we have an outbreak of fat men just waiting at the right time and place to act as our involuntary collective saviors? Will we start using fat men as meat shields in war? What is it we have against fat men, anyway?

    In just about any kind of philosophy, there’s a good way to figure out whether your views are incomplete or trivially over-simplistic. Check how well they generalize to other kinds of analogous behaviors, and whether they hold up over time for those actions. Ideological consequentialism doesn’t hold up in many other scenarios. Indeed, it leads to absurdities.

    Let’s test how well naive moral principles work out in reality. Say we’ve got two countries at war, S and N. S is a nuclear superpower. N is a “normal” country, nothing particularly special about it: smaller population, smaller military, fewer natural resources, inferior technology. N is pretty much guaranteed to lose this war in the long run, one way or another, as long as S really intends to fight with everything it has. Even so, assuming that the war drags out for a long time in conventional battles, the number of deaths, and the suffering from poverty and disease, on both sides is guaranteed to grow and grow. A protracted occupation of N by S would also result in a resistance movement and widespread, continuous suffering and death. Given all of this, would S be justified in using a pre-emptive nuclear strike to end the war immediately? A pure ideological consequentialist _must_ allow for the possibility that the answer is yes. It becomes just a matter of assessing the facts.

    Anyone familiar with history knows that this isn’t some thought experiment. It’s the United States and Japan in World War 2. If you read Truman’s justification for the atomic bombings sometime, try counting how many different assumed and unproven premises he throws in there. He absolutely makes several different allusions to “saving lives”. Though he doesn’t sound too concerned with the lives of Japanese civilians. Wonder why.

    There’s another huge weakness to the trolley thought experiment. It doesn’t assign any moral value to human emotions whatsoever. We’re asked to think about only the number of people who will live and die. (Or in other cases, to maximize net happiness, which isn’t even a calculable value in any reasonable sense.) Why are the lives of the track-strapped citizens necessarily and always more important than the mental anguish and suffering that may be experienced by the person asked to do the killing? If an emotional response doesn’t seem appropriate enough for you, replace the fat-man stranger with a loved one of our moral actor. Seem any different now? If we allow the mere fact of continued existence to outweigh any amount of mental suffering, we end up with very, very uncomfortable conclusions, such as allowing widespread torture to prevent the death of even one person.

    One last problem with the trolley scenario: it asks us to make a decision with incomplete information. Indeed, it’s extremely implausible that we would ever have complete information to work with in any real-world situation even the slightest bit similar. What if the fat man we’re to push is one of the world’s greatest scientists, on the brink of an earth-shaking discovery? By pushing him off, we may actually be condemning society even from a purely utilitarian perspective. Further, he doesn’t need to be the greatest genius who ever lived. He only needs to be of enough value that his continued existence would save five or more lives. Then his survival is clearly better, from the point of view of maximizing living people.

    None of this is to say that purist deontological approaches are any better. They’re just as bad, no doubt about it. No moral system is going to be able to cope intelligently with the real world if it doesn’t allow for contextual changes to influence the outcome of its thinking.

  • KenBrowning

    Another layer to these thought experiments is what happened when people were hooked up to an fMRI and went through both scenarios. When contemplating the trolley-switch experiment, logic centers lit up and emotional centers were much less engaged; the opposite was true when considering the fat man. My guess is that the emotive centers would light up even more strongly if the fat man were replaced by close kin, a fat relative, or a leading contributor to one’s in-group.

    What would happen if the five tied to the track were Fox News analysts and Obama was the single tie-ee?

    I had not heard of the triage experiment brought up by MNb but it’s a great addition.

  • Yvain

    Our most substantive disagreement seems to be this paragraph:

    “What I’m squeamish about is the idea that we can always force such trade-offs on other people; that we can always decide to sacrifice one other person to save the five (even assuming roughly equal quality of life, life expectancy, etc. for all the people involved). Maybe here Scott would accuse me of violating the “Morality Lives In The World Principle”; I’m less sure I’m safe from that accusation.”

    The universe “forces the trade-off”; you just choose one side or the other. You can trade off the one life for the five, or you can trade off the five lives for the one. The only asymmetry here is that trading off the five lives for the one helps you pretend you didn’t make a choice.

    So the real trade-off is “save five lives, kill one, and feel like you made a choice” versus “kill five lives, save one, but get to tell yourself it wasn’t your choice at all.”

    “It seems like most of us have a pretty strong moral intuition against pushing the fat man. So it seems like we should expect a pretty strong argument before setting aside our anti-fat-man-pushing intuition, especially from Scott’s enthusiastic-about-moral-intuitions perspective.”

    Analogy to math: people have base-level intuitions like “probabilities should add up to one”, which are used for axioms. People also have higher-level intuitions like “It shouldn’t make a difference which door you choose in the Monty Hall problem”. Privileging the higher-level intuitions here is a lot like trying to invent a new form of probability theory that gives the “intuitive” answer to the Monty Hall problem.
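    For anyone who wants to see the base-level axioms beating the higher-level intuition concretely, here is a quick Monty Hall simulation; it’s a standard from-scratch sketch of the three-door game, not anything taken from this thread:

    ```python
    import random

    def play(switch, trials=100_000):
        """Simulate the standard Monty Hall game; return the win rate."""
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)   # door hiding the car
            pick = random.randrange(3)  # contestant's first pick
            # Host opens a door that is neither the pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(play(switch=False))  # ~0.333: staying wins a third of the time
    print(play(switch=True))   # ~0.667: switching wins two thirds
    ```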

    I think you just gave an explanation for why people think killing fetuses is wrong which sounded pretty much like the noncentral fallacy, and then concluded with “and therefore it’s that, and not the noncentral fallacy”.

