This is a brief “throwing an idea out there” post: I’d like to coin the term “semi-consequentialism” for the view that the consequences of our actions for people’s well-being are of central moral importance, without being committed to the theses often associated with consequentialism that (1) maximizing well-being is obligatory, or even that (2) it’s always right to trade one person’s well-being for an even greater improvement in another person’s well-being.
Objections targeting (2) seem to be the most common objections to consequentialism. But even when you’re not hurting anyone, a lot of people find the idea of being obligated to maximize utility way too morally demanding. So you could replace that with the thesis that, on the scale of supererogatory actions (good but not obligatory actions), it’s generally speaking better to do more good: for example, Bill Gates spending over half a billion dollars to fight malaria is better than spending an equal amount of money on something that’s charitable but less helpful in total.
I can think of a number of possible antecedents to what I’m calling semi-consequentialism:
- Kant said it’s wrong to act based on what the consequences will be, but even lots of people who aren’t die-hard consequentialists find that absurd. So there are Kantians who argue either that Kant shouldn’t be taken literally, or that Kant shouldn’t have said that given the rest of his theory — either way, the upshot is that you can be a Kantian while still caring about the consequences of your actions.
- I think Robert Nozick, in Anarchy, State, and Utopia, mentions the possibility of a theory that’s like consequentialism but with side constraints against hurting people for the sake of maximizing utility.
- I think J. J. C. Smart suggested a similar possibility in response to criticisms of his version of utilitarianism.
- This paper may be relevant here, but I haven’t read it, so I don’t know the details.
- I think I read an article in Philosophy Now once about ethical pluralism or something like it, which might let you combine consequentialist considerations with other ones. I don’t remember the details, but Googling “ethical pluralism” leads me to this, which I’m not sure is quite what I was thinking of.
- Oh, and I once had a professor who said he thought the problem with utilitarianism is that it takes something important and turns it into the only thing that matters. He may have said this is a general problem with ethical theories.
Okay, so off the top of my head I can find lots of ways in which this is not a totally original thought. But it still seems like there should be a huge literature on this, and like it should be given top billing in undergraduate philosophy classes that cover moral philosophy. If you happen to have studied philosophical ideas in this area, can you fill me in on what ground has already been trodden here?