A Christian vs. an Atheist: On God and Government, Part 3

This is part 3 of my “Think! Of God and Government” debate series with Christian author Andrew Murtagh. Read my latest post and Andrew’s reply.

Andrew,

Thanks again for inviting me to take part in our radio debate last month. It was fun, even if I sometimes felt like I was arguing with the host more than I was arguing with you! I’ve always found that thinking on my feet is tricky, but I welcome the chance to get more practice.

On the subject of how people acquire religious belief, I think this is mainly a question of what precedes what. I don’t know what your personal story is, and if you believe that Christianity speaks uniquely to the human condition in a way that other religions don’t, I’d like to hear why. But when I see someone first acquiring religious belief in childhood, or through a powerfully emotional conversion experience, and later using sophisticated apologetics to defend it, it sets my skeptical senses tingling. Human beings are very good at finding clever rationalizations for things we want to believe for other reasons, and the most obvious proof of this is that millions of people make the same arguments in support of completely different and incompatible faiths!

But perhaps we’ll revisit this later on. You asked about my views on morality, and I’m happy to oblige. I’ve written about this at much greater length, but I’ll try to present a short summary here.

As I mentioned earlier, I believe that the basis for morality is human happiness. It can’t be otherwise, because if you start with any other virtue – justice, say, or filial duty, or individual liberty, or religious piety – you can ask why we should value that quality, why we should care about it at all. And whatever answer you give, you can ask the same question again. If you chase this regression as far as you can, you’ll ultimately end up at happiness: the only quality that’s intrinsically valuable, the only thing we desire for its own sake and not because it gives rise to some other good.

Here’s what I see as the really important step: the realization that happiness is an empirical phenomenon. Although individuals have unique likes and dislikes, there are basic, fixed facts about human nature which we all have in common. This means that there are objective truths about what does and doesn’t promote human well-being. These moral truths exist not by decree of a supernatural being, but simply by virtue of the kind of beings we are and the ways in which we relate to each other.

This means that not all opinions about morality are created equal; not all ideas about what promotes happiness are correct. Some are better than others; some reflect reality more closely than others. And the only way to reliably identify those that are better and those that are worse is through rational debate based on empirical evidence.

As I see it, the purpose of society is to ensure that everyone’s basic needs are met, and then, as far as is practical, to get out of the way and make it possible for each person to pursue their own vision of the good life. In his book The Moral Landscape, Sam Harris invites us to imagine a landscape where every point stands for a particular way of organizing society, and the elevation of that point represents the happiness that that society produces. On this landscape there are foothills, ridges and mountain peaks, representing blissful utopias, and there are sinkholes, valleys and deep depressions, representing dystopias of various kinds – dictatorships, theocracies, oligarchies – where the well-being of the many is trampled for the sake of the few. Our task is to figure out how to move uphill: to find the innovations, whether they be technological, political or philosophical, that improve life for all of us.

Obviously, this is a very complex task, and I don’t claim to have all the answers. But I think there are some fundamental moral truths that it would be folly to deny. For example, because conscious existence is obviously a prerequisite for happiness, I agree that human life should be preserved whenever possible. To destroy another person, except in the direst cases of self-defense or to relieve otherwise unbearable suffering, is a terrible crime because it robs them of all the happiness they might otherwise have had.

I’d like to hear more about where we part company, but in general, I find that my views differ from those of religious believers in one of two ways. With some, I have a philosophical disagreement: the theists who believe that human happiness is not important, only God’s happiness is, and therefore he’s entitled to treat us however he sees fit. I think Calvinists could fairly be described as believing this, for example. (I once read a tract which said that people in heaven and people in hell glorify God equally, just in different ways – the saved by making it possible for him to show his love, the damned by making it possible for him to show his wrath. That always gives me a shudder.)

Then there are the theists who believe that human happiness is paramount, but that there’s an afterlife which is infinitely more significant than this one, and so our primary moral responsibility is to follow whatever rules have to be followed to secure access to the better class of afterlife. I think this view is less pernicious than the other, but I still have a fundamental disagreement with it. As I’ve often said, afterlife-based morality can produce good results for human beings, but when it does, it’s only by coincidence.

I’m curious to hear if you’d describe yourself as belonging to either of these camps, or if you’d advocate something different entirely.


  • Calm Canary

    How do you respond to the standard criticisms of happiness-based moral theories, e.g. that they imply that it is desirable for people to take drugs or artificially stimulate their pleasure centers as often as is technically feasible?

    Also, if you acknowledge that some moral systems aren’t even trying to maximize happiness, how is yours objective? Even if you think that human happiness is the only intrinsically valuable thing, if other people instead sincerely believe that obeying God or protecting natural rights is intrinsically valuable, how can you know that they are wrong?

  • Kenny

    Interesting experiment: enter “the basis for morality is” into Google and the top suggestions are
    duty
    religion
    compassion
    and that’s all. Compassion comes closest to the idea, but happiness itself doesn’t show up until page 7 of Google search results.

    The argument for human happiness is so simply profound and profoundly simple, I had to wonder if it hadn’t already been thought of before. Were you inspired by this essay by Annie Besant?
    http://www.gutenberg.org/files/15545/15545-h/15545-h.htm

  • eyelessgame

    I’ll try to tackle the first question. To rephrase the question, if I may: “Seeking happiness is what is good. To seek happiness, people will take drugs. People taking drugs is bad.” There is a logical flaw in that argument somewhere, and finding that flaw is where you find the counterargument.

    If people taking drugs is something that you feel is bad, why do you feel it’s bad? You do (likely) believe it’s bad. I do too. Examine the second premise – that people will take drugs in order to reach a state of happiness. And, of course, we see that taking drugs does not, in fact, lead to happiness in the long term – what drugs or sex or other immediately hedonistic pursuits lead to is a great deal of short-term pleasure, which is not the same thing as happiness.

    To be clear, physical pleasure isn’t incompatible with happiness. But it’s not the same thing, either. Ayn Rand isn’t completely wrong: there is happiness in being creative and productive. The weird thing is, though, that she’s wrong about why: happiness in production and creativity is largely rooted in what production and creativity do for the happiness of other people, and for non-sociopaths, the happiness of other people is positively correlated with the happiness of the self.

  • GCT

    Also, if you acknowledge that some moral systems aren’t even trying to maximize happiness, how is yours objective?

    Some people think evolution isn’t objective. Does that mean that it isn’t?

    Even if you think that human happiness is the only intrinsically valuable thing, if other people instead sincerely believe that obeying God or protecting natural rights is intrinsically valuable, how can you know that they are wrong?

    This was already addressed in the OP:

    As I mentioned earlier, I believe that the basis for morality is human happiness. It can’t be otherwise, because if you start with any other virtue – justice, say, or filial duty, or individual liberty, or religious piety – you can ask why we should value that quality, why we should care about it at all. And whatever answer you give, you can ask the same question again. If you chase this regression as far as you can, you’ll ultimately end up at happiness: the only quality that’s intrinsically valuable, the only thing we desire for its own sake and not because it gives rise to some other good.

  • JohnH2

    You have nearly recovered Aristotle.

  • James

    There’s a term I’ve seen used in philosophical discussion, although I’m not sure how widely it’s spread – wireheading. Imagine scientists have discovered the right bits of your brain to stimulate to make it maximally happy – they can hack it by sticking some electronics in or around it. Somebody with those electronics has wireheaded, and they are permanently in a state of utterly maximal bliss, as happy as possible, no matter what, because the electronics are just permanently stimulating their pleasure centres.

    Wireheading is a tricky concept for happiness-based utilitarian ethics because it seems counterintuitive that the most ethical actions lead to making everyone wirehead.

    As far as I see it, there are a few possible responses:
    - Your intuition is wrong; wireheading is ethically optimal; the reason your intuition thinks otherwise is because this can’t occur in the ancestral environment
    - Wireheading is not ethically optimal because it prevents further developments which improve happiness even more (breeding more humans to make happy, technological improvements that improve brains to make their optimal happiness higher). If all the scientists are wireheading they aren’t sciencing.
    - Wireheading isn’t possible

    I think 3 is unlikely – the mind is the brain, and I’d find it exceedingly strange if it wasn’t possible to hack it to be really really happy all the time, especially when drugs like opiates come pretty close. 2 seems potentially sound, although there may be a problem with the Repugnant Conclusion.

  • BlueGuyRedState

    You are attempting to equate short-term pleasure with happiness, which is not only wrong, it is absurdly, demonstrably wrong by examination of the real world. Certainly, some people do seek short-term pleasure (often to their own detriment), but the vast majority of the world has a much more nuanced, long-term definition of happiness.

    Actually, evolution would permit no other outcome in the long run. Natural selection is hardly perfect or fast, but only people who place significant importance on the survival of their family and offspring are able to keep their genes in the pool. Voluntarily cooperating with the rest of society has also proven to be beneficial for survival, and it is natural that we would measure some of our happiness by other people’s opinions of us.

    Finally, even if you base your decisions on the dictates of an “objective moral code” of some kind, you are still doing it to increase your own happiness by either entering heaven or avoiding hell or deriving some kind of benefit.

    In short, everyone ALREADY is seeking happiness in one form or another. So why not attempt to maximize happiness for the most people?

  • David Cortesi

    Using “happiness” as your standard is not going to work well. First, it’s too vague a word; it has too many meanings for too many people. Second, as a goal it doesn’t encompass what is actually more important to most people: avoiding fear and pain. You don’t need to go around being “happy” all the time, but you do really need to be without fear and pain, a state of contentment or equanimity.

    A state of no-fear-no-pain is a solid goal, measurable in principle, and immediately comprehensible to most. Thus you could say, Any act that contributes to an increase of fear or pain in a conscious being is morally wrong, and any act that contributes to a decrease of fear or pain in a conscious being is morally good.

    Another approach is to set the goal, not as simple happiness, but as “satisfaction of more desires”. Acts that thwart desires of others are morally bad, acts that promote the satisfaction of desires in others are morally good. Since “desire” notably encompasses “desire to not feel fear or pain”, this is more general than the fear-pain standard. Alonzo Fyfe and Luke Muehlhauser have developed this “desire utilitarianism” in great depth, including answers to questions like “what about desires to do drugs and abuse kittens?” See http://commonsenseatheism.com/?p=776 for the FAQ.

  • Adam Lee

    Here, from a 2009 post, is my response to the “happiness machine” thought experiment.

  • Adam Lee

    I don’t see what grounds there are for saying that “fear” or “pain” are more definitionally precise than “happiness”. Yes, there are different kinds of happiness; there are also different kinds of fear and pain. (Did you mean only physical pain? What about emotional pain? Loneliness? Anguish? Ennui?) All three of these things are qualia, and for that reason they’re all bound to be a little fuzzy around the edges.

    Another approach is to set the goal, not as simple happiness, but as “satisfaction of more desires”

    I disagree that this is a worthwhile goal. What happens if the majority of people hold strong desires to oppress a minority?

  • L.Long

    What is best is being happy with the caveat that you harm no one.
    Taking drugs makes me happy!
    And this is bad? Why?
    Discounting illegality (which I DO NOT agree with), so he overdoses and dies?
    So? We all die and he was happy. Since he LLLooovveesss jesus, and did not suicide he is now in heaven and happier still.
    Of course the religious (who love telling others what to do and what is wrong) will say he sinned and is going to hell. Well, where is your source for that? And if there is none, then it would be a moral act.

  • Adam Lee

    Nope, I’ve never read Besant. It was a while ago now, but I recall that the seeds of UU came about from my reading of Hume, Mill and Bentham. I flatter myself that I’ve improved on their ideas just a tiny bit!

  • Izkata

    I think 3 is the most likely, because humans are good at adapting to things. It wouldn’t take long before “happy” becomes “the same ol’ thing” – you’d need something stronger every time.

  • Izkata

    Likewise, however, what happens if oppressing a minority is what makes the majority happy?

    Regardless of what the measure is, I think it needs a qualification beyond what was mentioned in the OP. Something like, “as long as it is not at the expense of another’s <happiness/desires/etc>”.

  • Adam Lee

    That’s why I say it’s wrong to obtain happiness for yourself in a way that causes suffering to others!

  • Izkata

    Actually, you didn’t. You mentioned euthanasia/killing another in self-defence, as well as dystopias where the happiness of the many is sacrificed for the happiness of the few. But you didn’t mention the more middle-ish ground, the happiness of the majority at the expense of the minority, where the “overall happiness” level would still be relatively high.

    Also, what the frell kind of markup parser does Disqus use? How is it trying to interpret that as HTML while at the same time escaping it so the browser doesn’t see HTML? (For the record, I tried to type “expense of anothers’ (happiness/desires/etc)”, with angle brackets in place of the inner parens)

  • Leum

    Happiness is also a qualitative, rather than a quantitative, phenomenon. This makes the idea of “maximizing” happiness extremely vague; it is not going to be possible to define it well. I don’t think any one ethical system is able to account for all moral questions; rather, ethical systems form in response to particular areas of morality that aren’t answered by our moral intuition, and then serve, once fully formed, to test our moral intuition.

  • Leeloo Dallas Multipass

    My personal response is that I MYSELF want to be happy, and a person who is, for instance, on a constant heroin high is sufficiently unlike me to not really be me. Similarly, if there was a way to have my mind altered to make me cisgendered, I’d have an easier life and would probably be happier, but my identity is sufficiently core to me that the idea of that is repellent, like “me” dying, even if there was a contiguous consciousness. I think this is a big part of the reason why so many people who prize happiness would still opt not to be connected to the “happiness machine”, even in a perfect world where the practical problems with it weren’t a factor.

  • Richard Hollis

    “Here’s what I see as the really important step: the realization that happiness is an empirical phenomenon. Although individuals have unique likes and dislikes, there are basic, fixed facts about human nature which we all have in common. This means that there are objective truths about what does and doesn’t promote human well-being.”

    Whilst I can agree that happiness is the fundamental basis for a system of morality, I think this is where I start to disagree. Happiness is good, and unhappiness is bad – fine! But what makes people happy can vary massively (especially in terms of building a society). For some it is freedom. For others it is structure. For some it is creativity. For others it is routine. For some it is adventure. For some it is safety. All these (and countless others) are perfectly legitimate, and each will give rise to moral codes with different emphases and priorities.

    Morality, therefore, is subjective, because what is good to some people and societies can be bad to others, and the matter can never be settled by appeal to evidence or objective fact.

  • Alex SL

    Honestly, I do not know how to justify basing things on happiness either, because I have never found a way to bridge the is-ought gap. I ultimately realized that the attempt to derive an ought from somewhere is as question-begging and futile as it is superfluous.

    Because really morality is about how to treat other humans. It is for humans and about humans. And that means humans (we) are the only ones who can decide about it. We have to call the shots, collectively, and not rely on some outside source (be it divine commands, evolutionary psychology, or some great principle arrived at through careful philosophical reasoning). And this is also how, I think, ethics, morality and societal rules in general are arrived at in real life: perennial negotiation of groups of humans with each other, formal and informal.

    Of course, at that moment human nature comes in because it determines what we value and are willing to struggle for, but sinking it all into the concept of happiness may be a bit too simplistic; part of what different people bring to the table might be an intrinsic instinct for fairness (which, sadly, might even include things like retribution or a desire to perpetuate stupidly harmful things because one had to suffer from them oneself).

    Not saying this is necessarily a pleasant idea, but I consider a contractual view of morals to be more realistic than a consequentialist one that relies on glossing over the is-ought problem. There is no ought to be had unless we make one up, but the point is one should be honest and admit that we have to make it up.

  • GCT

    Of course, at that moment human nature comes in because it determines what we value and are willing to struggle for, but sinking it all into the concept of happiness may be a bit too simplistic; part of what different people bring to the table might be an intrinsic instinct for fairness (which, sadly, might even include things like retribution or a desire to perpetuate stupidly harmful things because one had to suffer from them oneself).

    As I pointed out to Calm Canary, this is already covered in the OP.

    There is no ought to be had unless we make one up, but the point is one should be honest and admit that we have to make it up.

    Are you claiming that Adam is being dishonest? In the same response to the above paragraph, he lays out what he chooses to maximize and why. This is not a dishonest approach.

  • GCT

    Even if Adam didn’t explicitly say it in this piece (he has lots of other blog entries and essays that deal with morality) it’s implicit in the idea of maximizing happiness for humanity.

  • Calm Canary

    That doesn’t address my concerns. Maybe when Adam hears, “because that will lead to more happiness,” he stops asking questions, but a non-utilitarian would just say, “but why should we care about happiness at all?” The passage you quoted gives no objective reason to stop asking questions at happiness rather than somewhere else.

  • Austin

    “Some people think evolution isn’t objective. Does that mean that it isn’t?”

    Terrible analogy…the point was that unlike Evolution, the very definition/purpose of morality is not objectively defined and thus you cannot arrive at “THAT” via any objective process.
    If someone says the purpose of morality has nothing to do with happiness, any disagreement with that is subjective.

  • GCT

    And, if you read the rest of my comment I point out where Adam already addressed that in the OP.

    Terrible analogy…the point was that unlike Evolution, the very definition/purpose of morality is not objectively defined and thus you cannot arrive at “THAT” via any objective process.

    This seems rather circular to me. You claim morality is not objectively defined, so therefore it can’t be objectively defined?

    If someone says the purpose of morality has nothing to do with happiness, any disagreement with that is subjective.

    Ah, so as long as someone makes an assertion, the conversation is stopped dead and cannot progress? We can’t ask further questions and get at the root?

  • Austin

    “This seems rather circular to me. You claim morality is not objectively defined, so therefore it can’t be objective defined?”

    Nothing circular. Because the definition of the word morality (what it means) is not settled, what makes someone moral cannot be objectively defined. The same actually applies to words like success, etc.

    “Ah, so as long as someone makes an assertion the conversation is stopped dead and can not progress?”

    You can ask all the questions your heart desires. What you can’t do is make the claim that disagreeing with your position is similar to disagreeing with Evolution…like you did.

  • Alex SL

    My point is this: consequentialist reasoning such as Harris’ is generally presented as a philosophical / logical conclusion: “If you chase this regression as far as you can, you’ll ultimately end up at happiness: the only quality that’s intrinsically valuable, the only thing we desire for its own sake and not because it gives rise to some other good.”

    The problem is, I don’t see the regression ending. We can still ask the same question: Why should we value happiness? The answer still isn’t “we should because X” because no such answer can be had. The answer is “we should not necessarily, but it is a brute fact that we do value it.”

    Which is perfectly fine if you accept a contractualist view on morality. But under a consequentialist one there are two problems straight away. One, as Adam pointed out, is that not everybody does actually value happiness most highly. But the second is that if we are honest*, this answer does not, as claimed at least by Harris, show a feature of morality deduced through careful reasoning but simply human fiat. (An additional, perhaps minor, problem would be the question of what counts as happiness.)

    *) Along the lines of: Well, let’s be honest, there is a problem with that. Not, as you appear to have understood, along the lines of: I know that you believe the exact opposite of what you write but you are actively trying to deceive your readers.

  • Verbose Stoic

    As I mentioned earlier, I believe that the basis for morality is human happiness. It can’t be otherwise, because if you start with any other virtue – justice, say, or filial duty, or individual liberty, or religious piety – you can ask why we should value that quality, why we should care about it at all. And whatever answer you give, you can ask the same question again. If you chase this regression as far as you can, you’ll ultimately end up at happiness: the only quality that’s intrinsically valuable, the only thing we desire for its own sake and not because it gives rise to some other good.

    There’s a major issue here though, evidenced by your progression. By trying to make justice an instrumental value that we undertake to achieve the intrinsic value of happiness, you lead to this conclusion: if we take ANY value or any decision that just seems to be a moral value, then we end up saying that if it didn’t actually produce the most happiness, then morally we ought not consider it a moral value. Thus, if justice, say, didn’t actually produce the most happiness, then we ought not value justice, at least not morally. But being just just seems like a moral value, and for the most part we all think that if being just somehow led to less happiness overall than being unjust, then we should still be just. And you can’t avoid the problem by claiming that being just just WILL produce the most happiness, because the issue is that once you use happiness to justify the moral it really doesn’t seem like morality anymore; morality seems to be something that justifies itself and is not justified by appealing to it making me or even everyone happy.

    So the two problems with this approach are:

    1) It’s easy to appeal to this to claim that each individual really values their own happiness intrinsically, but it’s hard to use that to justify caring about anyone or everyone else’s happiness. While it’s reasonable to say that most if not all people consider their own happiness to have intrinsic value, the happiness of others for many if not most is only instrumental, and any form of Utilitarianism needs to justify caring about the happiness of everyone.

    2) Even if you pull that off, you end up making the things that actually seem to be necessarily moral values to not necessarily be moral values, while the thing that doesn’t seem to be necessarily a moral value — happiness — becomes the only thing that actually has necessary moral value. Just pointing out that happiness is intrinsic doesn’t work to ground morality if happiness is not, in and of itself, a necessarily moral value and if we turn a necessarily moral value like justice into something that isn’t necessarily moral at all.

  • GCT

    The point that Adam is making (perhaps successfully and perhaps not) is that he’s not stopped asking the question, but all roads lead to happiness in the end. IOW, ask the question all you want, and happiness ends up being the final answer. Ask again and you can’t go any further. That’s what he’s saying in the quote and that’s why it does address your concerns, provided that Adam is right. You’re welcome to challenge what he’s said, but so far you’ve not even understood the point he made.

  • GCT

    Because the definition of the word morality (what it means) is not settled, what makes someone moral cannot be objectively defined.

    Except for the fact that criteria were set out. It may be right or it may be wrong, but you should actually deal with the argument presented instead of putting your head in the sand and claiming that you can’t see/hear it.

    What you can’t do, is make the claim that disagreeing with your position is similar to disagreeing with Evolution..like you did.

    I did no such thing. I pointed out that the objection made was weak and that if we used that same objection in another context we would notice how weak it is. Do you really not understand such a simple concept?

  • GCT

    The problem is, I don’t see the regression ending. We can still ask the same question: Why should we value happiness? The answer still isn’t “we should because X” because no such answer can be had. The answer is “we should not necessarily, but it is a brute fact that we do value it.”

    Isn’t that Adam’s point? His claim is that you can ask the question all you like and eventually you end up with happiness as the answer and can go no further. You don’t seem to be arguing against that, but are finding fault with it regardless?

    One, as Adam pointed out, is that not everybody does actually value happiness most highly.

    And, I think we both agree that those are immoral systems. So, I’m not sure what you’re arguing here, except trying to argue that something can’t be objective unless it’s universally accepted? That doesn’t follow.

    But the second is that if we are honest*, this answer does not, as claimed at least by Harris, show a feature of morality deduced through careful reasoning but simply human fiat.

    Adam is quite up front in stating that we have to choose a goal. I don’t see why you think that invalidates his position.

    (An additional perhaps minor problem would be the question what you count as happiness.)

    Again, this is dealt with in the OP.

    Not, as you appear to have understood, on the lines of: I know that you believe the exact opposite of what you write but you are actively trying to deceive your readers.

    Thank you for the clarification. I was hoping I had misunderstood what you meant.

  • GCT

    …if we take ANY value or any decision that just seems to be a moral value, then we end up saying that if it didn’t actually produce the most happiness, then morally we ought not consider it a moral value.

    Except we don’t all deal in absolutes like you do.

    …morality seems to be something that justifies itself and is not justified by appealing to it making me or even everyone happy.

    This is a classic case of begging the question.

    Now, can we please have your latest “example” of the alien super-race that needs to be exterminated through genocide before they use their perpetual motion guns to kill off all the puppies in the world and make everyone sad, thus destroying happiness for the universe, which somehow breaks UU?

  • Austin

    “Except for the fact that criteria was set out. It may be right or it may be wrong, but you should actually deal ”

    It is neither right nor wrong…that is the difference between subjective and objective. Adam believes that the purpose of morality is to increase happiness. If you encounter someone who believes that happiness has nothing to do with morality, you can neither have any argument (because you’ve disagreed on first principles) nor call them wrong, because the definition is not objective.

    ” I pointed out that the objection made was weak and that if we used that same objection in another context”

    For an analogy to work, both scenarios must share the same properties. Disagreeing with Evolution – which is an objectively provable fact – IS NOT THE SAME as disagreeing with someone’s morality – which isn’t. Can you really not understand such a simple concept?

    And your inability to speak without insults (a clear substitute for rational thought) means this will be my last response to you on this thread.

  • GCT

    It is neither right nor wrong…that is the difference between subjective and objective. Adam believes that the purpose of morality is to increase happiness. If you encounter someone who believes that happiness has nothing to do with morality, you can neither have any argument (because you’ve disagreed on first principles) nor call them wrong, because the definition is not objective.

    That’s not what makes a moral system objective or not. You are confusing “objective” with “universal” and/or “absolute”.

    You could also question what they feel is important and if you drill down far enough, Adam seems to think the answer will eventually end up being human happiness. That’s something that was stated in the OP that you still have yet to address or even acknowledge even though it’s been pointed out…multiple times by me alone.

    For an analogy to work, both scenarios must share the same properties. Disagreeing with Evolution – which is an objectively provable fact IS NOT THE SAME as disagreeing with someone’s morality – which isn’t. Can you really not understand such a simple concept?

    The properties being shared are there, you just refuse to see them. This is independent of your question begging where you want to define your stance as true. Do you agree, however, that your characterization of my objection was incorrect at least?

    And due to your inability to speak without insults ( a clear substitute for rational thought) means this will be my last response to you on this thread.

    What insults? Oh, I see, because I questioned why you were mischaracterizing my argument it’s insulting…somehow. As if you haven’t just done the same while still adhering to your dishonest position, which BTW, is rather less than civil. OK, if you want to tone troll instead of actually dealing with the arguments presented, fine by me – your straw-man arguments are rather tedious. Don’t mess up the flounce though.

  • Verbose Stoic

    It’s not begging the question at all. If Adam’s proposal for morality is strongly counter-intuitive to what we think of as morality — by saying that justice isn’t necessarily moral but happiness is, which is the precise opposite of what we intuitively think — then we ought to be skeptical of his proposal, and specifically we should wonder if what he’s talking about is morality at all. And when we look at it deeper, we discover that that does indeed seem to be the problem: he’s derived an intrinsic value, and never managed to demonstrate that it’s a moral value at all. Thus, he needs to justify happiness as a specifically moral value before he can claim that he has something that’s a candidate for morality at all … or, in short, before he can claim to be espousing a moral position.

  • Adam Lee

    Terrible analogy…the point was that unlike Evolution, the very definition/purpose of morality is not objectively defined and thus you cannot arrive at “THAT” via any objective process.

    A better analogy would be this: Let’s say I deny the validity of inductive reasoning, which after all cannot be proven to work by any “objective process”. Does that overthrow the validity of all scientific knowledge, including evolution?

    Every system of thought, from morality to science to mathematics, has to be based on axioms that a stubborn person can reject. If the most you can say about my system of morality is that it can’t pull itself up by its bootstraps and prove its own first principles, I think I’m in good company.

  • Alex SL

    > Isn’t that Adam’s point?

    Then I have misunderstood him. I thought he said the regression ended there; I do not see how it does.

    > And, I think we both agree that those are immoral systems.

    Why? How do you know that it is objectively immoral to desire some abstract concept of justice instead of happiness, for example? Majority vote? But from the perspective of basing ethics on philosophical reasoning that is begging the question.

    See, maybe I misunderstand this, but from my understanding the various positions in ethics are like this:

    Deontology: reasoning demonstrates rules we must follow

    Consequentialism: reasoning demonstrates an ultimate goal (happiness, welfare, whatever) that we must aim for

    Me: there is no “must” (is-ought-problem), we just pull something out of our nether regions

    Perhaps Adam and I agree, but Harris at least believes that it is an objective truth that morals are about happiness, and I fail to see how that can be logically justified without smuggling a “must” into a chain of reasoning that properly only contains “happens to be”.

  • PhiloKGB

    Justice not only is not the “precise opposite” of hypothetically intuited happiness, it is also an abstraction not remotely capable of being morally judged. Justice is what justice systems do. There is no objective metric of what is actual justice and what is not. Furthermore, to unilaterally declare what I do or do not intuit would be merely laughable and wrong if your attempt was not so bizarrely framed. When I volunteer to do water testing or water distribution nearby, I consider it a moral act, not because I feel I’m acting justly — justice in this case would be holding the companies at fault responsible in every way for ensuring that rural West Virginians have potable water — but because, to frame it in Adam’s context, I am increasing their happiness.

  • Austin

    That analogy unfortunately falls short as well. The contention isn’t around the validity of morality; it’s around its very purpose. This situation would be similar to someone stating that the purpose of inductive reasoning isn’t the pursuit of truth. Fortunately, this problem doesn’t exist, because since its inception and introduction into our language, the purpose of induction has always been clearly stated as such, so anyone contending this point is being stubborn, unreasonable and flat out wrong. However, morality doesn’t benefit from the same clarity. Its very purpose has never been clear. So someone may agree with you that, given your belief that the purpose of morality is general happiness, the rest of your tenets and rules hold true, but hold the position that you are wrong about its purpose – that the sole purpose of morality is to do the will of [insert preferred deity or whatever], whatever that may be, regardless of human happiness. Given this position it’s unfair to call them stubborn (which requires someone to be ignoring obvious evidence), because there is simply nothing objective to support the position that the purpose of morality is NOT to do the will of [insert preferred deity or whatever].

  • Verbose Stoic

    Justice not only is not the “precise opposite” of hypothetically intuited happiness, it is also an abstraction not remotely capable of being morally judged. Justice is what justice systems do. There is no objective metric of what is actual justice and what is not.

    No, it is not the case that what we call justice is just what justice systems do. We call justice systems justice systems because they’re supposed to be aimed at providing justice, which everyone thinks is something that we can indeed have an objective measure of. After all, we all think that we can criticize at least some justice systems for not, in fact, being just. It’s just as objective as happiness is.

    Furthermore, to unilaterally declare what I do or do not intuit would be merely laughable and wrong if your attempt was not so bizarrely framed.

    It was a generalization; as always, some may not share those intuitions, but in general it is indeed the case … and I’d bet that I could come up with some thought experiments that you’d at least have to agree that most people will answer in a manner consistent with “Morality is not just happiness”.

    When I volunteer to do water testing or water distribution nearby, I consider it a moral act, not because I feel I’m acting justly — justice in this case would be holding the companies at fault responsible in every way for ensuring that rural West Virginians have potable water — but because, to frame it in Adam’s context, I am increasing their happiness.

    Justice is not the only moral value. Most people also think compassion is a moral value, along with a whole host of others. But, at any rate, surely you can agree that you could do that water testing and distribution in a moral or immoral way, and so it’s quite possible that you could increase their happiness in a way that’s moral or immoral. Thus, happiness is morally neutral; it can be achieved morally or immorally. Contrast that with justice, at least, where if justice is achieved immorally it’s no longer a moral virtue, by definition.

    Adam can try to claim that achieving happiness immorally isn’t morally right either, but the problem is that that doesn’t follow from what happiness means; the people would still be happy, but would have achieved that happiness in an immoral way. Justice achieved immorally is no longer justice at all.

  • GCT

    Again, you’re misusing the word “objective.” Not everyone has to accept something for it to be objective, just as not everyone accepts science.

  • GCT

    Then I have misunderstood him. I thought he said the regression ended there; I do not see how it does.

    It “ends” there in the sense that he claims you can’t delve down deeper. When someone claims to value justice or fairness, you can ask them why. Ask them enough times and eventually it becomes about happiness for humans. Keep asking after that and you won’t get any meaningful answers. He calls it an axiom and first principle.

    Why? How do you know that it is objectively immoral to desire some abstract concept of justice instead of happiness, for example?

    You don’t think systems made from divine mandate are immoral?

    As for the justice/happiness divide, that’s part of the point Adam is making. He’s claiming that those who claim to value justice are really, at their core, valuing human happiness. The reason they value justice is because they value human happiness. Additionally, not everyone has to accept Adam’s moral code for it to be objective. If that were the case, then nothing could be objective so long as there’s one person out there who is hyper-skeptical.

  • GCT

    It’s not begging the question at all.

    Yes, it is. You’re simply claiming that your idea of morality is the right one by fiat. His ideas are not strongly counter-intuitive to everyone, so you can’t make that argument either (also begging the question).

    I’d bet that I could come up with some thought experiments that you’d at least have to agree that most people will answer in a manner consistent with “Morality is not just happiness”.

    Oh, please, please don’t. I’m sick and tired of your “thought experiments” where one group of people somehow evolve some DNA trait that make them violent and homicidal, thus forcing the rest of us to commit genocide and thoroughly wipe them out in order to survive. Not only are they completely irrational, illogical, unscientific, and not at all coupled to reality, but they are quasi-racist.

    Justice is not the only moral value. Most people also think compassion is a moral value, along with a whole host of others.

    Adam has already dealt with that, and I’ve pointed numerous people to the quote in the OP. He says that people value justice and compassion because they value happiness. If you ask them why, eventually they claim it’s because they value happiness. Simply claiming that other people value other things without dealing with the fact that Adam addressed that is pretty shady.

  • Leo Buzalsky

    Thus, if justice, say, didn’t actually produce the most happiness, then we ought not value justice, at least not morally.

    I realize you’re quoting someone else (not sure who) but, first, it needs to be pointed out that “justice” is a very broad word, so it’s not a very good term to have a discussion over. Some, say, may consider capital punishment to fall under “justice”; others would disagree. So, for the sake of this discussion, we’ll have to assume when we talk about justice that we are talking about things that are actually justice, as opposed to subjective views of justice.

    Second, I would drop the term “most.” It’s not fully clear to me how well that could even be evaluated. (Not sure if that’s your phrasing or the original commenter’s.)

    With that, I would have to say, “Yes, we ought not value justice.”

    But being just just seems like a moral value, and for the most part we
    all think that if being just somehow led to less happiness overall than
    being unjust, then we should still be just.

    But you’re evaluating justice in a world where it does appear to produce happiness. You can’t set up this hypothetical world and then make judgement calls based on the real one; you need to stick to your hypothetical.

    Though, I fear that what we consider to be justice would be such treatments that produce happiness. I.e., if capital punishment does not produce happiness, then capital punishment is not just. So if justice is that which improves happiness, then it is nonsensical to even consider a hypothetical where justice does not, because whatever that is would by definition not be justice.

    So, actually the problem sort of can be avoided “by claiming that being just just WILL produce the most happiness” because that’s how we go about determining what is just.

    because the issue is that once you use happiness to justify the moral it really doesn’t seem like morality anymore; morality seems to be something that justifies itself and is not justified by appealing to it making me or even everyone happy.

    I’m not really following. How do you know if morality justifies itself? In other words, on what basis is it being justified? If the basis you’re using is on the happiness it produces, you would seem to have just contradicted yourself.

