There are good reasons to be good

I was invited to hang out with a friend last night who had some of her friends visiting from out of town.  Everybody was lovely and a good time was had by all, but at one point the conversation turned to why I felt the need to criticize religion.  I explained that religion is predicated on the idea that we should relax our intellectual standards for truth and, in many cases, make pretenses to knowledge we do not have – and that doing so is a virtue, rather than a failing.

The response came back, “But I think people need a reason to be moral.”

Of course, I agree.  But why, I asked, do we need bad reasons to be moral?  Good reasons are perfectly available.

The response was that some people need religion as a reason.  Presumably, they need religion because it’s a nice, simple story that even a cretin could understand.  Personally, I think ‘Because certain behaviors will make you happier than others’ is an equally simple reason, and a much more honest (not to mention less problematic) one than ‘there’s a god in whom you have no reason to believe who will reward you, not for being moral, but for believing in him, and who will punish you, not for being immoral, but for not believing in him’.

People need reasons to be moral, but in a world where good ideas mean the flourishing of the human race and bad ideas hamstring our ability to thrive, we should strive for reasons to be moral that don’t come with a long trail of metaphysical baggage and don’t encourage intellectual bankruptcy.

On the subject of morality, Richard Carrier recently debated a friend of mine, Michael McKay, at the St. Louis Ethical Society on the subject of Goal Theory vs. Desire Utilitarianism.  Both are brilliant, so when the videos are online I will likely post them here.  In the meantime, Dr. Carrier has given me permission to post his opening here, which I do happily as an example of a moral system that requires no appeal to god and no reliance on accepting fantasy.

McKay’s opening is already online and can be read here.

By request, I am posting here the text of my opening statement in the recent (May 3, 2011) debate between Mike McKay and myself, on whether my Goal Theory or Alonzo Fyfe’s “Desire Utilitarianism” is the correct model for moral reasoning. It is almost but not quite verbatim (since I improvised slightly here and there). Because the debate was on that metatopic, it simply presumes true a number of propositions common to both theories. If you are looking for a defense of those propositions, see my book Sense and Goodness without God and my forthcoming chapter in The End of Christianity (already available for pre-order on Amazon), which is the most rigorous, and peer-reviewed, defense of these common propositions (my chapter in End actually generically defends both models, mine and Fyfe’s, since I am certain the truth is somewhere between them). Video and an argument map will soon be available (which will go beyond what we got to in the debate, with Mike’s and my cooperation).

 

Opening Statement:

 

I’m told Mike and I don’t actually disagree very much. I defend Goal Theory, Mike defends Desire Utilitarianism. But Goal Theory is nothing more than Desire Utilitarianism taken to its logical conclusion. On Goal Theory all morality aims ultimately at one central goal. And if you identify that goal, then moral facts become simply a matter of empirically discovering what best achieves that goal. And I have argued that that goal is happiness. I do not mean this in the classic utilitarian sense, that the goal of morality is the happiness of others (such as the greatest happiness for the most people, though it may have that outcome), but in the egoistic sense, that the goal of morality is the happiness of the moral agent herself. This is what we’ll be debating today.

 

To make my case I have to define a lot of terms and set up the premises of my argument.

 

First we must define morality: I’ll be using the broadest definition, that morality is that which we ought most to do (i.e. that which we ought to do above all else). I affirm that this is a universally agreed definition, even by people who don’t realize they are using it. It is, in actual practice, what everyone really means by morality.

 

Next we must define desire: there are two ways to define this; there is a scientific definition (which addresses the actual phenomenology and mechanics of mammalian desires) and then there is an analytical definition (which is more general and describes all possible desires of all possible species of any kind, including even the desires of hypothetical emotionless androids).

 

The scientific definition reduces to this: a desire is a state of discomfort, which is relieved by achieving the object of desire. We are therefore driven to pursue the things we want by a state of dissatisfaction, and we call that state a desire. It feels a certain way, it comes in different degrees, it even comes and goes (i.e. there are desires we have but don’t feel because we are occupied by something else). I could even go into further detail about the biochemistry involved, the phenomenology involved, and so on, but that’s all just the machinery our brains happen to use in order to realize a more fundamental and universal kind of computation, and it’s that computation I’m really concerned with.

 

It can be defined analytically as follows: to desire a thing is simply to prefer that thing to something else, i.e. to prefer having it to not having it. This can be manifested by any phenomenology, any mechanism. Even desktop computers have desires in this sense, in the same way ants do, or lizards, or mice. They just aren’t conscious of it. And their computational abilities are vastly less than ours. But the basic idea of desiring as preferring, and of preferring as choosing, is the same.
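To make that concrete, here is a minimal sketch in Python (my own illustration; the thermostat example and all names are hypothetical, not from the debate) of desire as preference and preference as choice:

```python
# Desire, analytically: to desire x over y is just to rank x above y,
# and to prefer is just to choose. Any mechanism that does this "desires"
# in the analytical sense; consciousness is not required.

def desires(rank, x, y):
    """An agent 'desires' x over y iff its ranking puts x above y."""
    return rank(x) > rank(y)

def choose(rank, options):
    """Preferring is choosing: take the highest-ranked option."""
    return max(options, key=rank)

# Even a thermostat "desires" heat in this sense: when the room is cold,
# it ranks (and so chooses) 'heat_on' above 'heat_off'.
room_temp = 15
thermostat = lambda action: (action == "heat_on") == (room_temp < 20)

print(desires(thermostat, "heat_on", "heat_off"))   # -> True
print(choose(thermostat, ["heat_on", "heat_off"]))  # -> heat_on
```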

 

I’m also affirming a particular theory of motivation: that you always do what you most want.

 

Even when you say “I don’t want to do this, but I have to, so I’m doing it anyway,” that’s not exactly what’s really going on. Because if you really didn’t want to do it, you wouldn’t do it at all. The only reason you are doing it is that your desire to do it was greater than your desire not to. In the sense of the term “desire” I am using, it would be logically impossible for any other result to occur (other than a mad scientist moving your body like a puppet, but we’re talking about your own choices here, so that scenario isn’t relevant). So when you say “I don’t want to do this” you really just mean that you have a desire not to do it, not that that desire is your strongest desire. Your strongest desire, evidently, is to do it anyway.
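As a toy model of that claim (my construction, with made-up numbers for desire strengths), action selection is just taking the maximum over competing desires:

```python
# "I don't want to do this, but I have to" decomposed: two competing
# desires, and the action taken is, by definition, the strongest one.

desire_strengths = {
    "skip the task": 4,  # the desire you report when you say "I don't want to"
    "do it anyway":  7,  # the stronger desire that actually moves you
}

action_taken = max(desire_strengths, key=desire_strengths.get)
print(action_taken)  # -> do it anyway

# "I don't want to" asserts only that a desire of strength 4 exists,
# not that it is the maximum. On this definition of desire, any outcome
# other than the max is logically impossible.
```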

 

For example, exercising to stay fit. You might not feel like exercising, but you desire to get fit, and you know you can’t get fit unless you stick to your exercise schedule, and when you make this desire present in your mind, it becomes the stronger desire and thus overrides the other. The strange phenomenology of this is a peculiar artifact of our badly designed, mammalian brains and has nothing to do with any universal truth of the matter. The actual truth is simply that we want to get fit, because that’s what we conclude when we think rationally about what we really want.

 

(For an excellent philosophical analysis of this human phenomenology of desire, I highly recommend Neil Sinhababu’s paper “The Humean Theory of Motivation Reformulated and Defended,” Philosophical Review 118.4 [2009]: 465–500.)

 

This strange, flawed phenomenology of mammalian desire-computation results in the fact that we often have two sets of desires: our actual desires, and our present desires. And they don’t always align. What we happen to want at any given moment (to sleep in, for example) is not what we really want most (such as, to get to work on time and keep our job, so we can get paid and meet all our other desires with the resulting income).

 

A straightforward, correctly designed computer would never have this problem. Its present desires would always be its actual desires. But we aren’t designed that well. So we need a technology, kind of like a software patch, that fixes our broken computation routines, and makes us run correctly. One of those technologies is morality. (Others, by the way, are logic, science, and mathematics, which we also invented, and use to correct various other errors of computation in our badly designed brains.)
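To illustrate the analogy (a sketch of my own; every name and number here is hypothetical), an agent acting on raw present desires misbehaves, and the “patch” realigns them with its actual desires:

```python
# Present desires drift from actual desires; a correctly designed computer
# would never let them diverge. Morality, on this analogy, is the patch
# that restores the alignment before we act.

actual_desires  = {"sleep in": 3, "keep my job": 9}  # what we really want most
present_desires = {"sleep in": 8, "keep my job": 2}  # what we feel right now

def act(desires):
    return max(desires, key=desires.get)

print(act(present_desires))  # -> sleep in   (the broken computation)

def moral_patch(present, actual):
    """Re-derive present desires from what we would want if we reasoned
    logically about all of our desires together."""
    return dict(actual)

print(act(moral_patch(present_desires, actual_desires)))  # -> keep my job
```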

 

So when I speak about our desires, and our greatest desire, I do not mean our present desires in any given moment, but our actual desires, what we would actually desire, and desire most, if we were reasoning logically, and aware of all we needed to know. In other words, our desires as they would be if we were rational and informed.

 

If it comes up, I can prove that we all want most, above all things, to be rational and informed when we make decisions (ironically, even when we really want to make irrational or uninformed decisions, such as when we are playing a game, we still want to be sure that we are rational and informed when we choose when to do that and when not to; we all want to be sure we can tell the difference between a game and reality, because our decisions, and their consequences, will be very different in each). So even then we all want to be sure we are rational and informed when we are making our root decisions. So whether we realize it or not, we all want that. But I assume that won’t be in dispute.

 

A rational and informed person will know there are things they don’t know, and would like to know, about most decisions they make. Accordingly they will desire to learn things first, and thus will do so. And the truths of the world will often be different from what we think we know. Thus when we are making decisions, we want those decisions to be aligned with reality as it is, not as we think it is, because how we think it is might be wrong.

 

For example, when you are deciding whether to marry your boyfriend, you certainly want to know whether he is cheating on you and planning to rob your bank account and run off with his lover. If that is actually happening, the best decision would be not to marry him, even if you don’t know it.

 

So even your actual desires may be wrong. And therefore we can say things about what your actual desires would be if you were fully informed. For example, your actual greatest desire might be to marry this man, but when fully informed, your actual greatest desire would be not to marry him.

 

And because of this there are also lots of occasions where we trust that others know better than we do. For example, when you are visiting a naval ship, you will want to follow all the safety directions posted everywhere, even though you won’t necessarily know why. But you do know the people who developed and posted those instructions know why, and almost certainly have very good reasons, and so you trust them. And even when you don’t trust them, you can inquire, and find out the reasons, and confirm their validity yourself.

 

Morality is like that: a system of instructions for how to live, as determined by people in the know whom you trust, because you have analyzed their methods and know them to be rational and sound—for us atheists, that means: derived from ample evidence, in a logically valid way (and not blindly trusting some idiot who claims god speaks to him somehow, when we can’t even vet the reliability of that idiot, much less this god fellow he’s supposed to be speaking to). And we don’t have to trust blindly, because we can verify the soundness and validity of an evidence-based morality ourselves, the same way we do any scientific fact, without having to redo all the science ourselves.

 

In short, we know that if we knew all the facts, the behavior recommended by a sound and valid moral system would be what we would most desire to emulate ourselves. So we do have the actual desire to follow that morality, and when we are acting rationally that actual desire will become our present desire.

 

I won’t explain, now, how all morality (even the morality advocated by Immanuel Kant, even the morality advocated by Christian evangelists) actually reduces to nothing more than this, plus a system of hypothetical imperatives, and thus reduces ultimately to propositions about what we desire. A hypothetical imperative is any statement like “If you want a certain outcome, then you ought to behave a certain way.” So the truth of hypothetical imperatives derives from what you want, your desires. And morality is just a bunch of hypothetical imperatives. So morality derives from your desires. I assume Mike already agrees with me on that, and if not, we can address it in follow-up.
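Schematically (my own rendering, with invented examples), a hypothetical imperative pairs a desired outcome with the action that empirically achieves it, and your “oughts” fall out of which outcomes you actually desire:

```python
# "If you want outcome O, then you ought to do A." The imperative's force
# comes entirely from the desire for O plus the empirical fact that A
# achieves O; no categorical command is needed anywhere.

imperatives = [
    # (outcome desired, action that empirically achieves it)
    ("get fit",       "stick to your exercise schedule"),
    ("keep your job", "get to work on time"),
]

def oughts(my_desires):
    """Derive what you ought to do purely from what you desire."""
    return [action for outcome, action in imperatives if outcome in my_desires]

print(oughts({"get fit"}))
# -> ['stick to your exercise schedule']
```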

 

I also won’t discuss how specific moral facts derive from this analysis, such as that we ought to be reasonably honest and reasonably compassionate. Unless, again, it comes up, but I’ve already presented that analysis in my book Sense and Goodness without God.

 

But basically, if morality is what you ought to do above all else, then moral facts necessarily follow from what you desire above all else. Because what you desire above all else will by its very definition override all other desires. Therefore, choices motivated by that desire will override all other choices. And that is by universal definition what moral facts are: choices that override all other choices.

 

And these moral facts do not follow simply from what our greatest actual desire is, but from what our greatest actual desire would be if we were fully informed and rational. This makes moral facts into empirically discoverable scientific facts—since what we actually want, and what actually best achieves that, are both scientifically discoverable, empirical facts of the world. But that isn’t what we’re debating today. What we’re debating today is whether this actual greatest desire, from which all moral facts are thus entailed, is our desire to be happy.

 

I believe that conclusion was established already by Aristotle. Aristotle defined happiness in a particular way: he called it eudaimonia, a feeling that all is right with yourself and the world, a state of contentment or higher bliss, which was more desirable than mere pleasure or joy or anything else you might define happiness as. But he was hung up on that peculiar phenomenology and mechanics of the mammalian brain. I’m going beyond that to the more fundamental, and more universal, truth of the matter, which is that the happiness-state Aristotle was trying to describe is what I shall more generically call a state of satisfaction.

 

Aristotle’s argument went something like this…

 

All desires have a reason. We don’t just desire things for no reason. Most things we desire, we desire because we desire something else that is achieved by it. Pick any desire, and ask why you want that, and you’ll realize there is a reason to want that thing, a reason to have that desire—you desire it for some particular end, and not just for itself. Otherwise you wouldn’t want it. Of course this can’t go on forever. We don’t have infinite desires. So there must be something we desire for no reason, possibly many things. But Aristotle argued that there was one ultimate reason that we desire anything at all, and it is that singular state of satisfaction, that eudaimonia. That, he said, is the only thing you don’t desire for some other end, the only thing that you desire solely for itself. When you ask, “Why do I want to be ultimately satisfied?”, the question is inherently absurd. It’s like asking “What’s north of the north pole?”
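The regress can be pictured (my illustration, using hypothetical desires) as a chain of “for the sake of” links that has to terminate somewhere:

```python
# Each desire points to the further desire for whose sake it is held;
# following the links must terminate, and Aristotle's claim is that it
# terminates in one place: the satisfaction-state itself.

for_the_sake_of = {
    "exercise":     "get fit",
    "get fit":      "stay healthy",
    "stay healthy": "satisfaction",
    "satisfaction": None,  # desired for itself alone; asking "why?" is absurd
}

def ultimate_reason(desire):
    while for_the_sake_of[desire] is not None:
        desire = for_the_sake_of[desire]
    return desire

print(ultimate_reason("exercise"))  # -> satisfaction
```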

 

But there are of course different degrees of satisfaction (it can be measured qualitatively: some states of satisfaction are more desirable than others; and quantitatively: how often, and for how long), and the greatest state of satisfaction, that than which no state is more satisfying, is perfect happiness; but all lesser states of satisfaction are degrees of happiness, and we always aim at getting higher up that ladder, or in greater quantities.
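One way to formalize that (my own formalization, not Carrier’s or Aristotle’s) is to score each satisfaction-state by its quality and its quantity and treat the aim as maximizing the total:

```latex
% q_i = quality of the i-th satisfaction-state (how desirable it is),
% t_i = its quantity (how long, or how often, it is enjoyed).
S \;=\; \sum_i q_i \, t_i , \qquad 0 \le q_i \le q_{\max}
% Perfect happiness is the state with maximal q; lesser states are lower
% rungs on the same ladder, and we aim to increase S however we can.
```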

 

Our greatest goal in everything we do is simply this: the highest state of satisfaction we can obtain, for as long or as often as possible. This is not our present desire, but our actual desire. That is, if we sit down and think about why we want anything, why we have any desire we do, why we prefer anything to anything else, we will always come around to the same conclusion: because it satisfies us to do so. Pick any desire we have that motivates us, in fact pick any greatest desire we have—and again I mean an actual desire, not a merely present desire—and ask “Why should I want that, rather than something else instead?” The reason will always be some reference to the state of satisfaction we will obtain by realizing that desire (and sometimes even by merely having the desire). Thus, our ultimate goal is that ‘satisfaction-state’. All desires are pursued for that end. Therefore that is our greatest desire. Therefore all moral facts derive from that desire.

 

Again: because moral imperatives are imperatives that supersede all other imperatives, and because moral facts are all hypothetical imperatives, and hypothetical imperatives are true in virtue of our desires, imperatives that supersede all other imperatives will derive from desires that supersede all other desires. And the desire for greater satisfaction—which we colloquially describe as different degrees of happiness—supersedes all other desires, because it is the only reason to have any desire at all. Therefore, that state of satisfaction is, and must be, the singular goal of morality.

About JT Eberhard

When not defending the planet from inevitable apocalypse at the rotting hands of the undead, JT is a writer and public speaker about atheism, gay rights, and more. He spent two and a half years with the Secular Student Alliance as their first high school organizer. During that time he built the SSA’s high school program and oversaw the development of groups nationwide. JT is also the co-founder of the popular Skepticon conference and served as the event’s lead organizer during its first three years.

