Via this post, Scott Alexander has alerted me to the fact that he’s not only the author of a non-libertarian FAQ, but also the author of a consequentialist FAQ. I can’t believe I wasn’t aware of it before, but now I’m going to write a response.
Let me start by saying that I agree with the “consequentialist” position on many moral questions. I agree with the basic idea of effective altruism that if you care about helping other people, you should try to figure out what actions on your part will actually help the most, rather than just trying to do vaguely “good” things. For example, don’t just donate to whatever causes are in the news; use organizations like GiveWell to figure out where your charity dollars will do the most good. (Not that I think GiveWell is the final word on which charities to give to, but they’re a good place to start.)
I also agree with the “consequentialist” answer to Eliezer Yudkowsky’s very clever torture vs. dust specks moral dilemma, at least if the problem is framed in the right way. The basic idea is that, given a choice between saving one person from being tortured for decades, and preventing N people from getting dust specks in their eyes, then for a sufficiently inconceivably huge N I’d want to prevent the dust specks. (Eliezer uses 3^^^3, in Knuth’s up-arrow notation, for the N in his framing of the thought experiment.)
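To get a feel for how big 3^^^3 is, here’s a small sketch of Knuth’s up-arrow operator in Python (my own illustration, not from either post). Even 3^^3 is already about 7.6 trillion; 3^^^3 is a power tower of 3s that many levels high, far beyond anything a computer could evaluate.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n up-arrows, then b.
    n=1 is ordinary exponentiation (a^b)."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    # Recursive definition: a ^^..^ b = a ^..^ (a ^^..^ (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^27) ... wait, 3^(3^3) = 3^27 = 7625597484987
# up_arrow(3, 3, 3), i.e. 3^^^3, is a tower of 3s over 7.6 trillion
# levels high -- don't try to evaluate it.
```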
I know at this point many people will already be thinking I’m completely crazy, so here’s an argument: if you had a dollar which you could spend on either (1) Dust Speck Guard, guaranteed to save you from one dust speck in the eye, or (2) Torture Guard, guaranteed to eliminate a 1 in 3^^^3 chance of being tortured, which would you choose? I think if you understand how inconceivably small a chance “1 in 3^^^3” is, if you understand that it’s effectively zero, then you’ll understand that Torture Guard is basically guaranteed to do nothing and you may as well go with Dust Speck Guard.
If it’s not obvious that you should choose Dust Speck Guard over Torture Guard, ask yourself how much money you’d pay to eliminate a 1 in a trillion chance of being struck with a disease that would cause severe, chronic, untreatable pain but not kill you. If you’d pay any significant money at all for that, ask yourself if that’s really consistent with your actual behavior in terms of being willing to take small risks like driving a car or going swimming. And then realize that 3^^^3 is much, much larger than one trillion.
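The same point can be put as a back-of-the-envelope expected-value calculation. Here’s a minimal sketch in Python; the harm figures are purely illustrative assumptions of mine (the argument doesn’t depend on the exact numbers, only on 3^^^3 dwarfing any plausible ratio between the harms):

```python
# Expected harm avoided by each guard:
#   Dust Speck Guard: 1 * speck_harm          (certainty of one speck)
#   Torture Guard:    (1 / 3^^^3) * torture_harm
# The figures below are made-up assumptions for illustration only.

speck_harm = 1.0       # badness of one dust speck (arbitrary unit)
torture_harm = 1e100   # suppose decades of torture is a googol times worse

# 1/3^^^3 is far smaller than any float can represent; even 1e-300
# wildly overstates it.
tiny_prob = 1e-300

expected_torture_avoided = tiny_prob * torture_harm  # about 1e-200
print(expected_torture_avoided < speck_harm)  # True: take Dust Speck Guard
```

Even granting torture a googol times the badness of a dust speck, the expected harm the Torture Guard removes is effectively zero.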
Multiply the conclusion out over a civilization of 3^^^3 people, and you have an argument for choosing torture over dust specks. Maybe you’d rather give each person in the civilization a choice of Dust Speck Guard or Torture Guard, but if for some reason you had to make a single choice for everybody, I think you’d choose to prevent the dust specks and allow the torture.
With that preamble, I’m going to jump into Scott’s FAQ in the middle, at section 4, where Scott starts talking about consequentialism (warning: long blockquote ahead. Feel free to skim!):
4.1: Sorry, I fell asleep several pages back. Remind me where we are now?
Morality is derived from our moral intuitions, but until these intuitions reach reflective equilibrium we cannot completely trust any specific intuition. It would be neat if we could condense a bunch of moral intuitions into more general principles which could then be used to decide tricky edge cases like abortion where our intuitions disagree. Two strong moral intuitions that might help with this sort of thing are the intuition that morality should live in the world, and the intuition that other people should have a non-zero value.
4.2: Oh, good. But I’m probably going to fall asleep again unless you derive the moral law RIGHT AWAY.
Okay. The moral law is that you should take actions that make the world better. Or, put more formally, when asked to select between several possible actions, the more moral choice is the one that leads to the better state of the world by whatever standards you judge states of the world by.
4.21: That’s it? I went through all this for something frickin’ obvious?
It’s actually not obvious at all. Philosophers call this position “consequentialism”, and when it’s phrased in a slightly different way the majority of the human race is dead set against it, sometimes violently.
Consider the following moral dilemma, Philippa Foot’s famous “trolley problem”:
“A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?”
This tends to split the philosophical world into two camps. The consequentialists would flip the switch on the following grounds: flipping the switch leads to a state of the world in which one person is dead; not flipping the switch leads to a state of the world in which five people are dead. Assuming we like people living rather than dying, a state of the world in which only one person is dead is better than a state of the world in which five people are dead. Therefore, choose the best possible state of the world by flipping the switch.
The opposing camp, usually called deontologists, work on a principle of always keeping certain moral rules, like “don’t kill people”. A deontologist would refuse to flip the switch because doing so would make them directly responsible for the death of one person, whereas not flipping the switch would make five people die in a way that couldn’t really be traced to their actions.
Actually it’s not at all true that the trolley problem divides the world into consequentialists and deontologists. While generally speaking the consequentialists are going to say you should throw the switch, many non-consequentialists agree with that much, or at least they agree that you may throw the switch.
Judith Jarvis Thomson, in one of the first articles written on the trolley problem, reported that everyone she described the case to agreed you may throw the switch. When the PhilPapers survey came out, it reported that 68% of professional philosophers favored throwing the switch; I was surprised the number was not higher. And I’ve read that in psychological experiments on the trolley problem, most people favor throwing the switch there too; I can’t find a good source at the moment, but I think I’ve read figures of something like 75%.
So most people favor throwing the switch, but as Scott notes, most people aren’t consequentialists. In fact, when Thomson and Philippa Foot (Foot being the philosopher who originally came up with the problem) first discussed the trolley problem, they did so trying to account for the intuition that it’s OK to throw the switch in non-consequentialist terms. (Note that I say “non-consequentialist” rather than “deontologist” for reasons I explain here.)

To find the real difference between consequentialists and most non-consequentialists, we need to look at a different moral dilemma known as the “fat man,” which Scott also discusses (warning: another long blockquote!):
4.4: What’s wrong with the deontologist position?
It violates at least one of the two principles discussed above, the Morality Lives In The World Principle or the Others Have Non Zero Value principle.
There are only two possible justifications for the deontologist’s action. First, ey might feel that rules like “don’t murder” are vast overarching moral laws that are much more important than simple empirical facts like whether people live or die. But this violates the Morality Lives In The World principle; the world ends up better if you flip the switch, so it’s unclear exactly what is supposed to end up better by not flipping the switch except some sort of ghostly Ledger Of How Much Morality There Is.
The second possible justification is that the deontologist is violating the Principle of According Value to Others by taking the action that will minimize eir own guilt – after all, ey could just walk away from the situation without feeling like ey had any part in the deaths of the five, but there’s a clear connection between eir flipping the switch and the death of the one. Or ey might be engaging in moral signaling; showing that ey are so conspicuously moral that ey will not harm a person even to save five lives (no doubt ey would be even happier if ey only needed to cause one stubbed toe to save five lives; in refusing to do this ey could look even more sanctimonious.)
4.5: Well, your answer to the trolley problem sounds reasonable.
Really? Let’s make it harder. This is a variation of the Trolley Problem called the Fat Man Problem:
“As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?”
Once again the consequentialist solution is to kill the one to save the five; the deontologist solution is to refuse to do so.
4.6: Um, I’m still not sure pushing a fat guy to his death is the right thing to do.
Try to analyze where the reluctance is coming from, and decide whether all your moral intuitions, in full reflective equilibrium, would approve of that source of reluctance.
Are you unsure because you don’t know if it’s the best choice? If so, what feature of not-pushing is so important that saving four lives doesn’t make pushing obviously better?
Are you reluctant because you’d feel really bad afterwards? If so, is you not feeling bad more important than saving four lives?
Are you unsure because some deontologist would say that by eir definition you are no longer “moral”? But anyone can use any definition for moral they want – I could start calling people moral if and only if they wore green clothes on Saturday, if I were so inclined. So if any deontologist refuses to call you moral just because you pulled the lever, an appropriate response would be to tell that deontologist to @#$& off.
Are you unsure because some vast cosmic clockwork would tick and note that the moral law had been violated in such and such a place by such and such an unworthy human? But we have no evidence that such cosmic clockwork exists (see: Principle of Morality Must Live In The World) and if it did, and it was telling us to let people die in order to prevent it from ticking, an appropriate response would be to tell that vast cosmic clockwork to @#$& off.
Frances Kamm, popular deontologist writer, said that pushing the fat man on the track, even though it would prevent people from dying, would violate the moral status of everyone involved, and concluded that people were “better dead and inviolable than alive and violable”.
As far as I can tell, she means “Better that everyone involved dies as long as you follow some arbitrary condition I just made up, than that most people live but the arbitrary condition is not satisfied.” Do you really want to make your moral decisions like this?
Cases like the fat man are where most people part ways with consequentialism. I agree that the reasons for not pushing the fat man in 4.6 are not very convincing, so I’ll ignore 4.6. Instead, let me address 4.4, where Scott tries to summarize the general argument for taking the consequentialist position on cases like the standard trolley problem and the fat man.
I don’t for a moment dispute that, all else being equal, it’s better for just one person to die than for five people to die. It would be heroic if the one man threw himself in front of the trolley, saving the five. I think I’m safe from the accusation of violating the “Principle of According Value to Others” on that score.
What I’m squeamish about is the idea that we can always force such trade-offs on other people; that we can always decide to sacrifice one other person to save the five (even assuming roughly equal quality of life, life expectancy, etc. for all the people involved). Maybe here Scott would accuse me of violating the “Morality Lives In The World Principle”; I’m less sure I’m safe from that accusation.
On the other hand, I’m not sure how that argument would go, and I’d also note that Scott seems very gung-ho about moral intuitions (more gung-ho, perhaps, than I am). It seems like most of us have a pretty strong moral intuition against pushing the fat man. So it seems like we should expect a pretty strong argument before setting aside our anti-fat-man-pushing intuition, especially from Scott’s enthusiastic-about-moral-intuitions perspective.
On a similar note: if it’s simply a question of which harm to prevent, I’d happily choose to prevent 3^^^3 dust specks in the eye over decades of torture for one person, but I probably wouldn’t torture one person for decades as a means of preventing the 3^^^3 dust specks. And I wouldn’t want to turn on a superintelligent AI if I knew it was going to kill everyone on earth and then start over creating utopia, even if I knew that ultimately the utopia would be better than what would otherwise come about.
And on a semi-related note, back when I was arguing with Scott about his “noncentral fallacy” (see here and here), one thing I meant to say but never got around to saying: Scott asked if I was a consequentialist or a deontologist. In addition to disliking the latter term, I don’t think it matters. What matters is that few ordinary people are consequentialists, so when you analyze the arguments of ordinary anti-abortion folks, your analysis has to start from the fact that they’re probably making non-consequentialist assumptions.
In particular, most people think killing people is generally wrong. They allow for some exceptions: usually self-defense, usually killing enemy soldiers in a just war, and probably some amount of unintentional killing of civilians in a just war. Some people think capital punishment is OK, but that’s controversial. People who’ve studied moral philosophy might make a distinction between the effects of our actions that are intended versus those that are merely foreseen.
Regardless of the exact exceptions they make, however, few people take the consequentialist view that “any time you can get a better result by doing so” is a valid exception to the “don’t kill people” rule.
And if you accept the premise that a fetus is a person, that abortion should be understood as a deliberate killing (rather than something more akin to refusing to stay connected to the famous violinist), and that something like the usual view of the morality of killing people is right, then you’re going to conclude that abortion is the killing of a person which doesn’t remotely resemble any of the standard exceptions to the “don’t kill people” rule, and therefore that abortion is wrong.
It’s missing all of that which makes me think Scott’s “noncentral fallacy” stuff is a poor analysis of the reasons people have for opposing abortion. I mention this partly because the very first post of Scott’s linked above is very clear on the fact that most opponents of abortion are not consequentialists, which leaves me somewhat puzzled.