Human Approximations of Morality

Adam Lee, author of Daylight Atheism, has just gotten into an argument with Peter Hitchens over Divine Command Theory.  Peter Hitchens contends:

“For a moral code to be effective, it must be attributed to, and vested in, a nonhuman source. It must be beyond the power of humanity to change it to suit itself.”

Adam responded in two posts, the first titled “There Is No Non-Human Moral Authority” and a second responding to comments Hitchens made in-thread. I think I disagree with both the guys in this argument, but I’m a little worried that my dispute with Adam might be definitional, so let me try to lay out my position as clearly as possible (and with as many math analogies as I can).  Here’s Adam’s position:

The fatal flaw in this position is that, contrary to your confident presumption, there is no non-human moral authority. Every religious book is written, edited, and printed by humans. All moral opinions, interpretations, and proclamations are human opinions. If there were a huge, glowing set of tablets with commandments engraved on them that descended from the sky accompanied by angels blowing trumpets, and the choice was between following those or making up moral laws on our own, we’d be having a very different debate; but there is no such thing. All moral laws are humanly produced. The question is which set of human-created laws we should follow and why.

I agree with Adam that there isn’t a non-human agent we can consult on moral issues (no oracle or book of cheat codes), but I disagree strongly that all moral law flows from human opinion.  Human writings are attempts to describe and summarize moral obligation, just as mathematical theorems describe relationships between transcendent objects and biology gives good-enough approximations of how things work at the scale of cells, allowing us to ignore atoms and quantum whatsits.  All of these are human-constructed maps of territories that already exist and are more complex and detailed than we are up to dealing with on a day-to-day basis.

(It might be a good idea to pop over to Less Wrong for a moment and check out Yudkowsky’s essay on confusing the map for the territory: “Fallacies of Compression”).

A Divine Agent isn’t required to conceive of moral law as non-contingent on human descriptions.  Accepting this fact shouldn’t be a problem for any atheist who happily accepts that the Pythagorean theorem was true before Pythagoras and would be true even if humans hadn’t ever come down from the trees.  Just as mathematicians can refine and revise their descriptions of the properties of mathematical objects, philosophers can revise their descriptions of moral law.
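
(To make the analogy concrete, here is the claim itself, stated the way a geometry text would put it: for a right triangle in the Euclidean plane with legs $a$ and $b$ and hypotenuse $c$,

$$a^2 + b^2 = c^2.$$

The notation is a human convention, but the relationship it records held before Pythagoras wrote anything down and would hold with no mathematicians around at all.)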

The upshot is that the question we’re trying to answer is not “Which set of human-created laws should we follow, and why?” but “Which set of human-created approximations to Moral Law is most accurate, and how do we check?”  There’s a bonus question, too, which is “How fine-grained an approximation of Moral Law do I need to be able to choose actions correctly?”

I stuck by Kantianism for a long while, and although my read of it had some serious, dangerous flaws, “Treat other people as ends-in-themselves” is a pretty good big-picture encapsulation of your moral obligations.  And a lot of the time, you don’t need to do any intellectual heavy lifting to see your duty clear.  When you’re not sure which people count as people or you’re not sure whether a certain act means treating someone as a mere means, then it’s time to drill down and find or invent a more zoomed-in ethics.  But all of this is still an attempt to better describe something that already exists independent of human portraits of it.  And you can play around with High Energy Theoretical Ethics if you want to fill in some of the “Here Be Dragons” regions, but that level of detail won’t be necessary for most folks.

But whether your moral map is highly detailed or a set of broad principles, whether you’re mostly on the money or dangerously misguided, and whether you think of it as universally binding or wholly subjective, it’s still a human-made map of a human-independent territory.

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • anodognosic

    I would agree with you depending on what you mean by human-independent territory. It seems clear to me that we can’t simply decide what is right, or change it, and that we do discover moral truths in a way analogous to mathematical truths. But morality does seem to be inextricably tied to the nature of humans–that is, it is still dependent on the existence and psychology of humans, and need not have any power to compel any thinking beings whose nature may be radically different from that of humans. What do you think of that distinction, Leah? Is it fair, or do you believe that morality exists independent of human existence and nature as well as human will?

  • http://blogs.forbes.com/alexknapp Alex Knapp

    Are we seriously talking about this in the 21st Century? Plato took care of this question a long time ago.

    • Hibernia86

      Is this the same Plato who would talk about “forms”, such as the perfect tree and the perfect apple, which everything in our universe is an approximation of? Because none of that makes any scientific sense. All matter in our universe is made of atoms and all terms we use for objects, such as trees or apples, are just terms we use to generalize a complex universe.

      Is he really any better on ethics?

  • http://prodigalnomore.wordpress.com The Ubiquitous

    “The fatal flaw in this position is that, contrary to your confident presumption, there is no non-human moral authority.”

    Sure there is.

    “No there isn’t.”

    Sure there is.

    “No there isn’t.”

    Sure there is. We can cite articles of faith all day long, but he’s going to have a harder time proving that God as we know him does not exist than I would have showing that God so known is a reasonable belief.

    Lee seems to offer boilerplate, puddle-deep relativism, at least judging from your excerpt. Suppose we live in a society whose baseline assumption is that there is no non-human moral agent, and furthermore that morality is simply made. What prevents a sufficiently charming, clever and prideful strongman from usurping control? I take it as self-evident that fascism is certainly wrong, but maybe Our Beloved Leader’s will to power will change even this.

    Junk food for thought aside, Hitchens the Lesser would have done his argument no service if he had skipped straight to God, but, as the excerpt stands, he seems more interested in proving that there exists human-independent territory than in showing that the cartography is God-guided.

    I submit that Leah agrees more fundamentally with Peter as represented than with Adam likewise. (That naming is providential is no surprise for the biblically literate.)

    • anodognosic

      I think it’s important to distinguish between ethics and metaethics here. The question of whether or not morality is divinely determined is one of metaethics, and history pretty clearly shows that a society’s metaethics doesn’t have a huge bearing on whether a sufficiently charming, clever and prideful strongman will usurp control–this has happened time and again in very devout societies. And as a matter of ethics, Lee is pretty much right that we don’t have a reliable authority to look to, even if we have a guarantor.

      Also, how serious is your final comment? (I’ve learned not to assume)

      • http://prodigalnomore.wordpress.com The Ubiquitous

        Naming is providential, and this is biblically supported, but in this instance I’m joking. (Thanks for understanding that we aren’t all stiff, stuffed shirts, by the way. “Where’er the Catholic sun doth shine,” &c.)

        As I’m pointing out above, the question is not, in fact, one of whether morality is divinely inspired. Hitchens the Lesser does not invoke God, and Adam constructs a straw man from the assumption that he does. Again, all that Hitchens affirms is something that Leah does not deny, and to refute what Hitchens did not say Adam invokes an eminently falsifiable vision of morality that Leah opposes. Namely, that all things moral come from humans.

        Taking what Adam says for what it is, if the broader point is that we do not have shining tablets, then I think he absolutely overestimates the utility of clear signs from God — they rarely work. How often is Israel a faithless bride, after all? Does Jesus not at least once seem to grudgingly work a miracle for someone who wants only the miracle and does not really care for “putting on Christ”? You do not have to accept the authority of the Bible to admit that, supposing God exists as we say he does, this is very likely how men at all times and places would react to him, and that if God really did come down to our level we’d execute him.

        In any case, I gladly accept your correction and amend my point: How will a society which truly buys into rule by “the morality we create” depose a despot? Will every nation be like Russia after 40 years of Soviet rule, as a whole not quite resisting or resenting but suffering nonetheless? Say what you will about rule with Christian assumptions, that it is a slow salve, none at all, or worse, but I think the effect of Christian principles is pretty evident from the slow, steady, no-turning-back transition from a working class of slaves to a working class of serfs to very nearly a working class of free citizens between the fall of Rome and the Renaissance. Even as men in power were often corrupt and cruel, there were more and more limits, and some men in power resented this enough to fuel schism from the Church during the Reformation.

        Even after the state decided heresy was a capital offense, the Church stepped in to moderate. Leave the moral power in the hands of only local, human authority and you have something different than the Inquisition, and something far more terrible: You have witch trials. In the hands of a larger authority you have pogroms and the Holodomor. I have no hope that even if we harnessed the whole of humanity as cloud computing harnesses a network we would suddenly find ourselves farther from, not closer to, the next great guillotine. We must at least behave like we have limits, that we are not the law unto ourselves, for if we really were the sole source of morality, it would be the last thing we’d ever learn.

  • Heartfout

    I am a bit confused as to why moral laws should be linked to physical laws in the way you describe. It seems like a bit of wordplay, using the word law to link the two. Could you explain why you think that it is true?

  • http://last-conformer.net/ Gilbert

    I think there is a massive difference between objective morals and the Pythagorean theorem.

    It’s not just that the description “a right triangle in the Euclidean plane where the square of the hypotenuse is different from the sum of the squares of the legs” turns out not to have any referents, it’s that it turns out to be a contradiction in terms like “square circle”*. For comparison, we can’t derive properties of physical objects from their definition, because they, unlike mathematical objects, have contingent properties. The physical world could be Euclidean or not, whereas the Euclidean plane doesn’t have that alternative.

    Now moral nihilism is wrong and evil but not logically contradictory. And morals supposedly apply to real humans who, too, can’t be constituted by their definition, unless you believe in an objective human nature, in which case hello natural law and goodbye transhumanism.

    In the end, mathematical theorems don’t need a reason to be true, and moral law does.

    *Actually, you probably can come up with some perverse metric that allows for square circles. But I think it’s still clear what I’m saying.

  • http://bigthink.com/blogs/daylight-atheism Adam Lee

    Leah,

    I think our dispute here probably is definitional. I believe that morality itself is objective, in the sense that it isn’t merely a matter of human opinion. In my view, morality consists of making the decisions that have the greatest beneficial effect on human happiness; and there is one set of principles that best achieves this goal, whether or not we can easily tell what it is. I like how anodognosic phrased it up-thread, that morality is tied to the nature and psychology of humans, which I agree with.

    My argument with Peter Hitchens was more of a second-order argument: How do we tell what is moral? His belief seems to be that there’s some non-human moral agent who has the exclusive right to say what is or isn’t moral, and all we have to do is ask. (More specifically, since no such agent is active in the world, we should ask the humans who assert the right to act as his representatives.)

    My contention is that morality can and must be determined through reason and empirical investigation, not revelation. There may be moral authorities, but only in the same sense that there are scientific authorities: not infallible arbiters whose word is law, but people who’ve investigated the subject, thought more deeply about it, and whose opinions are therefore more worth listening to. That’s what I meant to convey when I asked which set of human-created laws we should follow. I agree that all of them are approximations of varying quality, but since we don’t have direct access to eternal truth, we all have to follow someone’s conception of morality – even if it’s our own.

    • http://prodigalnomore.wordpress.com The Ubiquitous

      Oy vey. I suppose it’s theoretically possible your disagreement with Leah is definitional. As attributed to Voltaire: “If you would talk with me, define your terms.” Your post has either no definitions at all or is so slathered in emotive polemic that it’s very hard to find one. I only single you out because you’re here, not because you’re the only one, so do not take the following personally.

      For better or worse, you’ve inspired me to define definition with an image, declaring the definitive quality of the New Atheism: it has all the definition of a thundercloud on an overcast day. To wit, it defies definitions. Thundercloud is right, too, as New Atheism has as much substance, no more staying power and every last drop of sound and fury, though certainly it is temporarily dangerous for tall men in foil hats or those caught napping in the shade of a tree. Like a thundercloud over a land choked in a drought, it may despite itself loosen the earth for something productive, and good may come of it yet, but I would still not wish it on anyone.

      • http://prodigalnomore.wordpress.com The Ubiquitous

        And now, the caveat: In defining New Atheism distinct from the broader species of atheism, I note qualities not necessarily found in at-large “disbelief in the supernatural,” my working definition of atheism. Miss Libresco and presumably others show this could be a respectable intellectual position for polite people, or even the honest admission of definitive doubt. The last might be laudable even from the Christian point of view, if only to a degree.

    • http://kpharri.wordpress.com/ Keith

      Adam

      I think it’s rather odd that you start off by claiming that morality isn’t dependent on human opinion, and then in the very next sentence tell us what your opinion about human morality is.

      If morality doesn’t depend on human opinion, then instead of sharing your opinion with us, demonstrate to us, in the same way you might demonstrate the Pythagorean theorem, why morality is necessarily an exercise in maximizing well-being.

      I don’t think you could complete such a task, any more than Sam Harris could in The Moral Landscape.

      As it happens, I fully support the pursuit of human well-being, simply because I think that’s what most people want. Most people want to be happy and free of want and suffering, and why shouldn’t we be? So, let’s just get on with it and use whatever tools – including science – we have at our disposal to pursue this goal. Whether people wish to call this “morality” or not is irrelevant.

    • leahlibresco

      Got it. Thanks a lot for jumping in. I thought we might be getting snarled over human-independent agent vs. human-independent morals.

  • Ash

    Adam and Leah,

    I’m curious if you both consider morality in the way that Harris and Carrier do, as a means to achieve maximum well-being and minimum suffering. Speaking for myself, this is the only way I can see morality being objective. I agree with Harris that there might not be any one single moral “code” that can achieve a given end in this framework, but that one can nevertheless describe all such codes as either promoting or preventing well-being, which is an objective question.

    Thoughts?

    • http://thinkinggrounds.blogspot.com Christian H

      I’m neither Adam nor Leah, but if I may respond…

      The reason I would be hesitant to define morality as the maximization of well-being/happiness/what-have-you (other than the fact that I rather doubt that happiness is actually objective, and, if it is objective, that it is predictable with any sort of reliability) is that we would then have to be ready to accept whatever situation maximizes happiness/well-being/whatever. Would we be willing to accept a scenario in which murderers, sociopathic rapists, and homophobes enjoy unimaginable pleasure as a result of being cruel, while a tiny minority of people comprised of children, knee-jerk altruists, and kindly old women suffer horribly as a result of being kind? If this system turned out to contain net maximal happiness and net minimal suffering, would it be acceptable? What degree of the suffering of the innocent and triumph of the cruel is acceptable in support of maximal happiness/minimal unhappiness? Put differently, is justice even a possible conceptualization in this system?

      • Ash

        Would we be willing to accept a scenario in which murderers, sociopathic rapists, and homophobes enjoy unimaginable pleasure as a result of being cruel, while a tiny minority of people comprised of children, knee-jerk altruists, and kindly old women suffer horribly as a result of being kind?

        One error you are making here is in conflating well-being with mere gratification (assuming that raping and murdering is ever really gratifying, which is debatable). Surely you can conceptualize a more rational and sophisticated understanding of what well-being might entail. You are also making the error of assuming that this system implies that it is morally acceptable to maximize happiness at the expense of another’s suffering, which is an absurdity.

        Put differently, is justice even a possible conceptualization in this system?

        I would argue it is the only system in which justice makes any sense at all. If justice isn’t about ameliorating the suffering caused by people’s actions, then what is it about? If morally good acts aren’t about maximizing well-being, then what are they about? No matter what your answer, it will ultimately reduce to these two things: maximizing well-being and minimizing suffering.

        Moreover, that we cannot precisely define well-being says nothing about the fact that such a state as well-being exists and that some actions promote it while other actions decrease it. Your point only serves as a reminder that we need to continue to study well-being so that we can be better informed about what constitutes moral action. If we required absolute precision in order to formulate moral frameworks, then we would have died out long ago.

        • http://last-conformer.net/ Gilbert

          I don’t know about Christian H., but I for one can’t “conceptualize a more rational and sophisticated understanding of what well-being might entail” unless I take the difference from the morality of the means of attaining gratification. That is, of course, not an option available in your system, because it would render it circular.

          Also, it is indeed an absurdity to presume it is “morally acceptable to maximize happiness at the expense of another’s suffering”, but I don’t see how being an absurdity should prevent it from following from your system.

          • Ash

            I don’t know about Christian H., but I for one can’t “conceptualize a more rational and sophisticated understanding of what well-being might entail” unless I take the difference from the morality of the means of attaining gratification.

            Huh? You are saying that you cannot think of a possible understanding of well-being beyond hedonistic gratification? If so, that is unfortunate, but correctible. I recommend Googling “positive psychology” to begin…

            Also, it is indeed an absurdity to presume it is “morally acceptable to maximize happiness at the expense of another’s suffering”, but I don’t see how being an absurdity should prevent it from following from your system.

            It is an absurdity because it fundamentally contradicts the reality of how morality appears to work or the purpose it attempts to serve. In this case, to gain pleasure at the expense of another’s suffering is, by definition, immoral if we grant that morality is concerned with maximizing well-being and decreasing suffering.

        • http://thinkinggrounds.blogspot.com Christian H

          To an extent I must apologize: I was confusing your position with Sam Harris’ (as presented in a TED Talk; I haven’t read his books), in which he fairly clearly /was/ talking about happiness rather than some more nuanced well-being, and was also fairly clearly talking about the maximal happiness of the whole system, in which we engineer societies in order to produce the highest net happiness, and not so much about providing maximal happiness for oneself, and so making a choice between two different rivals for a particular happiness unit is still relevant. Perhaps that does not pertain to the system that you hold, and I would suggest that decisions on the level of the individual have different concerns (i.e. “not at the expense of others”) than decisions on the level of society do. But we do need to think about the level of society, and how to pick who gets the most [unit] when there is a conflict.

          However, I would also suggest that it is impossible to maximize one’s own happiness (I’ll get to the well-being part later) without harming another. “Finite beings in a finite world” is generally how it’s explained. We in the First World tend to imagine that there is enough for everyone, but this is not likely the case. Globally, I’m highly skeptical that anyone could even /survive/ without harming others somewhere. This may not dismantle the system you suggest, but it does require that either maximizing happiness at the expense of others can be morally acceptable, or that we will all commit morally unacceptable actions, and not just by mistake. I side with the latter, and I’m guessing you do, too. I bring this up, though, because to accept the latter is to give maximizing one’s happiness/well-being at the expense of another slightly more acceptability than you suggest. Unless you suggest that well-being is possible when we’re dead, however you define well-being, this still needs to be answered.

          As to well-being, I surely can imagine more nuanced ideas of well-being than hedonic pleasure. I tend to agree with Gilbert on this, but your response to him goes part of the way to convincing me. I’m skeptical that well-being could be measured–and I don’t mean accurately, I mean at all. And even if it can be, I’m skeptical that it won’t still be scarce. But I will need to think about this.

          • http://thinkinggrounds.blogspot.com Christian H

            Apologies for that terrible first paragraph. I hope you can figure out how those lists-within-lists balance; I wasn’t very clear with my punctuation and conjunctions.

          • Ash

            I was confusing your position with Sam Harris’

            My position is the same. I encourage you to listen again to his talk if you can’t read his book; he very clearly outlines his “happiness” in the same way I’m outlining “well-being.” If you believe he is only talking about happiness in the sense of pleasure or gratification, then you have profoundly misunderstood his thesis.

            …so making a choice between two different rivals for a particular happiness unit is still relevant.

            That is why we need moral systems. If we had no need to compete for that which leads to well-being, we would have no need for morality as such.

            Perhaps that does not pertain to the system that you hold, and I would suggest that decisions on the level of the individual have different concerns (ie. “not at the expense of others”) than decisions on the level of society do.

            Only in scale, not in kind. All moral decisions are ultimately about increasing well-being and decreasing suffering. That there are often conflicts between personal and social goals does not negate this theory; it only means we must accept that good moral systems need to be complex, adaptable, and flexible.

            However, I would also suggest that it is impossible to maximize one’s own happiness (I’ll get to the well-being part later) without harming another.

            That’s quite a claim, and I think an untrue one. The deep well-being of a Buddhist monk who lives in a commune meditating all day does not come at the cost of harm to another.

            But let’s say you are right. The inability to achieve perfection does not negate the fact that morality nevertheless aims to increase well-being and reduce suffering. After all, do you say there is no such thing as living a healthy lifestyle, or that such a pursuit is not worthwhile just because no one can achieve “perfect” health?

            …but it does require that either maximizing happiness at the expense of others can be morally acceptable, or that we will all commit morally unacceptable actions, and not just by mistake.

            I think you are confusing the framework of morality with the idea of a perfect world. Just because maximal well-being isn’t strictly possible doesn’t mean that isn’t what morality aims for.

            I’m skeptical that well-being could be measured–and I don’t mean accurately, I mean at all.

            Well-being is measured all the time. There is an entire field of study called positive psychology that researches it. It is a developing area of research, but there is already a fair bit of consensus about what constitutes well-being. The Wikipedia article is a good place to start: http://en.wikipedia.org/wiki/Positive_psychology


