Morals: the periodic chart of behavior

I wrote this to someone yesterday.

The periodic chart is a way of describing the way the world is.  As we’ve come to understand the nature of reality better, the periodic chart has changed to incorporate that knowledge – but it’s all based on how the world really works.

Morals are the periodic chart of behavior.  They are a system for determining the best behavior based on how the world is, and as our understanding about reality increases, our moral rules have changed.

There are, of course, several competing hypotheses about what these moral rules are.  They vary from culture to culture, so don’t act like yours is the only one in town.

All the atheist must realize is that if a world with moral rules would be better, then that’s all the motivation we need to make them up.  If we see, based only on how the world works, that telling the truth, not stealing, not killing each other, etc., makes people happier, then that’s all we need in order to suggest that “good” people ought to do those things.  I think we can very easily defend that the world really works in these ways.

The theist, however, bases their morality on the commands of a god.  That god is an assertion about reality.  It is an assertion that hasn’t been adequately defended in the whole of human history.  The theist has a much more difficult claim to defend than, “people who steal create a lousier world and are generally unhappy themselves.”

About JT Eberhard

When not defending the planet from inevitable apocalypse at the rotting hands of the undead, JT is a writer and public speaker about atheism, gay rights, and more. He spent two and a half years with the Secular Student Alliance as their first high school organizer. During that time he built the SSA’s high school program and oversaw the development of groups nationwide. JT is also the co-founder of the popular Skepticon conference and served as the event’s lead organizer during its first three years.

  • Chrissetti

    The theist has a much more difficult claim to defend than, “people who steal create a lousier world and are generally unhappy themselves.”

    The theist doesn’t need to. If ‘moral’ is simply obeying the orders of a god then only compliance with those rules need be taken into account. If following the rules makes the world a lousier place then they’re still doing ‘good.’

  • Pro_bonobo

    Can I steal this to post on my fb wall? So great.

    • http://freethoughtblogs.com/wwjtd JT Eberhard

      Can’t steal what’s free to take. :)

  • Mark

    “If we see, based only on how the world works, that telling the truth, not stealing, not killing each other, etc., makes people happier, then that’s all we need in order to suggest that “good” people ought to do those things.”

    So, are we to understand you to believe that what “makes people happier” is the ultimate ends? Does happiness of people mean mankind collectively or individually? How do priorities of survival and advancement play into this perspective?

    • Azkyroth, Former Growing Toaster Oven

      So, are we to understand you to believe that what “makes people happier” is the ultimate ends?

      Of course.

      Does happiness of people mean mankind collectively or individually?

      Yes.

      How do priorities of survival and advancement play into this perspective?

      Are those important to people’s happiness?

      • Mark

        Yes, they certainly could be, but at the expense of the happiness of others. Survival and advancement are not specifically mentioned by JT here, so I am wondering if he believes them to be essential to individual or collective happiness, and if so, how are potential conflicts resolved in his “periodic chart”?

        • http://becomingjulie.blogspot.co.uk/ BecomingJulie

          All means to the same end are equally valid.

          Corollary: Means that appear not to be equally valid will be found, on closer inspection, to serve different ends.

  • Pierce R. Butler

    .. “people who steal create a lousier world and are generally unhappy themselves.”

    Certain field research indicates the second half of this statement may not apply in all cases, particularly once the theft has been successfully completed.

  • Alukonis, metal ninja

    But how are morals periodic? What are the moral periodic trends? Across rows? Down groups? Make a table!

  • Suido

    Tables sound tricky, it’s kinda long and I don’t like reading anyway, so can we get this in big writing on two tablets?

  • LoyalOpp

    I agree with you 100% about the need and function of moral rules, but you have to be careful when presenting morality as “objective”, even if in a purely semantic sense (and I’m aware you don’t do that explicitly here so I’m making somewhat of a strawman argument).

    There is no such thing as good or bad, moral or immoral, except how we define them. These things can never be truly objective, except as objective relative to our invented standards. Moral laws do not, cannot, and will never exist as “reality” in the way that the law of gravity, evolution, your desk chair, or your grandmother exist in reality.

    This is so important because it correctly frames the challenge of how we can determine moral standards. There is no absolute moral truth we can strive to, but we can strive to achieve the most good as we define good, and that is good enough.

    Also, it’s important because the intellectually honest conclusion, if you believe that moral law exists outside of the human mind – as a “real” thing that would have existed whether or not humans ever evolved – would be that there is some sort of deity (see: Leah Libresco). There is no plausible explanation for something subjective and invented like good and bad, right and wrong, moral and immoral to exist as an absolute reality absent a supreme being making them so. Of course, since none of those exist absolutely, this raises no intellectual problem for the thinking atheist.

    • Azkyroth, Former Growing Toaster Oven

      There is no such thing as good or bad, moral or immoral, except how we define them. These things can never be truly objective, except as objective relative to our invented standards. Moral laws do not, cannot, and will never exist as “reality” in the way that the law of gravity, evolution, your desk chair, or your grandmother exist in reality.

      This is true of numbers. And yet very, very few people have crossed their arms and smirkingly informed us that we can’t do physics because of it.

      A certain number of axioms are needed to deal with “what is,” axioms like “there is a world external to my consciousness” and “my senses at least tell me something useful about it” and “consistent patterns of behavior exist in that world” and especially “my actions can influence that world.” A minimum of one additional axiom is needed in order to reason that certain states of that world are preferable to others. That’s all.

      • LoyalOpp

        It’s nothing like numbers. Numbers would exist even if consciousness never evolved, even if life never evolved. We discovered them, but they were already there. Sure, the way we labeled them is of our own creation, but not their existence.

        Can’t you see how that is different from moral law that exists as an absolute outside of consciousness? If people had never evolved, how bizarre is it to believe that the phrase “treat others as you’d like to be treated” would still be out there floating in the air?

        Just like some people want to live forever, so they imagine heaven, others want there to be scientifically objective moral guidelines, so they imagine absolute moral law. It’s equally ridiculous in light of the evidence and with the application of reason.

        And my opinion does not in any way carry the implication that we “can’t do” morality. We can absolutely do morality, and we can make it objective as far as how we define it. But we always have to keep in mind that we are the ones defining it, we are not discovering some absolute truth. I completely agree that we have the ability and even the responsibility to reason that some states are preferable to others, but we can never be objectively “correct.”

        Think of it this way, 1+1=2 cannot, will not, ever change. It is objective. As long as you allow for even one single act that was immoral at one time and moral at another, morality is not immutable, and therefore cannot be absolute.

        • abb3w

          Mathematical theorems exist, yes; but only as abstractions. Furthermore, mathematical theorems such as “1+1=2” can change, if the underlying axiom schema used to derive the theorem changes (though the particular case requires an extremely restricted system). Or the symbol “2” may correspond to { {{}} } rather than { {}, {{}} } depending on your construction axioms. The axioms themselves also exist abstractly yet objectively.

          At a certain level, some such choices become effectively arbitrary, so you can model mathematical axiom system A under mathematical axiom system B. As long as you have a system able to model fairly simple arithmetic, or set theory, or a couple other basic starting points, the choice of which to use is akin to choosing to discuss philosophy in French, English, or German.

          One can use the language of mathematics to talk about the universe we experience. (Using French, English, or German merely relies on this implicitly.) This requires one additional axiom; dropping some formalities, “experience has a pattern” gets to the usual place where the gap of induction can be bridged by science enough to answer “probably” for distinguishing a hawk from a handsaw — what is usually meant by “objective” in empirical questions. The axiom may alternately be taken in refutation; however, the consequent philosophical discussion tends to be uninteresting. More often a more specific axiom is tried; the more specific one, however, is potentially subject to testing under the more general case.

          Morality appears to refer to an ordering relationship on a set of choices; “A is equivalent to or worse than B” corresponds to A≤B. (An isomorphism exists between deontological approaches and consequentialist approaches.) Mathematically, such a set and ordering is a poset. The standard axioms allowing “1+1=2” also allow constructively showing the existence of a set of such ordering relationships, once one is given the existence of the set. Thus, the axiom is needed not to talk about morality, but to indicate which such ordering relationship and thus which objective morality you’re talking about.

          As with any other axiom (such as the Parallel Postulate or the anti-Parallel Postulate) such a “moral law” is a relationship between concepts that also “exists as an absolute outside of consciousness”. However, its mere objective existence doesn’t indicate which one is being referred to. The tricky bit is that this is a semantically arbitrary choice. If the ordering relationship is based (say) on preference of “eating more cheese” over “eating less cheese”, you may objectively determine what is meant by moral. The problem at that point isn’t that the morality isn’t objective; it’s that the morality defined by that axiom has (almost) no relation to the conventional associations of the word “moral”.
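          To make the arbitrariness concrete, here is a toy sketch (the choice set and rankings are invented purely for illustration, not taken from any actual moral theory): each “axiom” is just a ranking over the same set of choices, and swapping the axiom swaps which choice comes out maximal — both orderings are equally objective once specified.

```python
# Two "moral axioms", each just an ordering (a ranking) over the same set of choices.
choices = ["eat more cheese", "eat less cheese", "share the cheese"]

# Axiom A: more cheese eaten is better (arbitrary, but perfectly objective once stated).
cheese_rank = {"eat less cheese": 0, "share the cheese": 1, "eat more cheese": 2}

# Axiom B: sharing is better (a different, equally objective ordering).
sharing_rank = {"eat more cheese": 0, "eat less cheese": 1, "share the cheese": 2}

def best(options, rank):
    """Return the maximal element of `options` under the ordering given by `rank`."""
    return max(options, key=lambda c: rank[c])

print(best(choices, cheese_rank))   # axiom A picks "eat more cheese"
print(best(choices, sharing_rank))  # axiom B picks "share the cheese"
```

          Neither ordering is “more objective” than the other; the dispute is only over which ranking deserves the label “moral”.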

      • abb3w

        The catch is, the choice of which axiom to bridge the is-ought gap is as arbitrary as the choice of the Parallel Postulate versus Anti-Parallel Postulate. Parallel versus Anti-Parallel means talking about a different sort of “line” and “geometry”; which moral axiom determines what sort of “morality” gets discussed.

        • Azkyroth, Former Growing Toaster Oven

          In an a priori sense, I suppose, but the social effectiveness of behavior deriving from any particular axiom can certainly be compared.

          • abb3w

            Somewhat. The social effect of working with axiom A instead of axiom B can sorta be compared. Certainly, you can consider whether the effects are equal — although unless A and B are mutually reducible and can each be exactly derived from the other, there seems likely to be at least SOME difference that an irritated philosopher might harp on.

            Marginally more significantly, without an additional axiom as basis, you can’t make an ordered comparison, to conclude that A is “better” than B (or vice versa). The word “effectiveness” seems to involve a presumption as to a desired consequence, and an ordering based on some poset as to the similarity of desired and actual consequence.

            But yes, you can certainly talk about the effects/consequences of social axiom choices as well as individual choices.

    • Azkyroth, Former Growing Toaster Oven

      Also, it’s important because the intellectually honest conclusion, if you believe that moral law exists outside of the human mind – as a “real” thing that would have existed whether or not humans ever evolved – would be that there is some sort of deity (see: Leah Libresco).

      Beyond the additional axiom needed to reason in “ought” terms, the properties of human psychology are such that certain kinds of experiences and situations affect them in fairly predictable ways, just like the properties of fluids are such that certain shapes move much more efficiently through them than others and the properties of materials are such that certain classes of them can be used successfully to conduct a useful amount of electricity and others cannot. If humans were very different psychologically, a functional human morality would likewise be very different. No deity required.

      • LoyalOpp

        There is a big difference between “fairly” predictable, and “absolutely” predictable. With sufficient information, the flow of a liquid can be absolutely predicted. While you can make general statements about human psychology, many people are very, very different.

        And, you defeat your own argument. If moral law was an absolute that existed as truth occurring whether or not humans ever evolved, it would not change if human psychology was different. It could not change, it would be a law. The fact that you think that means you actually agree with me, that we have to do the best we can do to determine our morality, knowing that there is no absolute, objective law to discover.

        • abb3w

          If moral law was an absolute that existed as truth occurring whether or not humans ever evolved, it would not change if human psychology was different.

          Somewhat. However, moral law referencing human psychology tends to be a particular case of a more general form.

        • Azkyroth, Former Growing Toaster Oven

          If moral law was an absolute that existed as truth occurring whether or not humans ever evolved, it would not change if human psychology was different.

          Oh for FUCK’S SAKE.

          Hydrodynamics similarly has no meaning in the absence of fluids. That doesn’t make it “not objective.”

          • Anonymous

            When I say “objective,” I mean existing immutably and absolutely outside of the human (or for that matter any conscious creature’s) experience. If you mean “objective” in the sense that we can draw on what we know to create moral standards, I’m not arguing with that. I’m talking about absolute moral law.

            And that’s why your analogy is terrible. The laws of hydrodynamics are immutable. If there weren’t water, they might not have “meaning,” as you put it, but they would still exist and still work the exact same way as soon as water came into existence. Since you admitted moral law can change, it, by definition, cannot be objective, cannot be anything more than how we define it. It is not an absolute and it does not exist outside of the human experience.

    • MurOllavan

      I’m a desire utilitarian because I agree with the theory of value and action that comes with it. I think morality is objective and exists outside the human mind. Just because the majority of moral questions will never be answered in practice doesn’t mean they don’t have an objective answer.

  • invivoMark

    So basically, you follow John Stuart Mill’s rule utilitarianism.

    As a utilitarian myself, I think you’re not wrong. If more people subscribed to this line of thought, the world would be a more pleasant place to live in (sort of by definition).

    • Azkyroth, Former Growing Toaster Oven

      Well, probably not; there’s been 150 years of thought and observation since then. But clearly a form of utilitarianism.

      Incidentally, I find it ironic that Utilitarianism and Darwin’s theory of evolution were proposed about the same time, and even today are bombarded with criticism at about the same level.

  • http://biblicalscholarship.wordpress.com/ jayman777

    All the atheist must realize is that if a world with moral rules would be better, then that’s all the motivation we need to make them up.

    Make them up? If you’re concerned with how the world actually is then shouldn’t moral rules be discovered instead of made up?

    If we see, based only on how the world works, that telling the truth, not stealing, not killing each other, etc., makes people happier, then that’s all we need in order to suggest that “good” people ought to do those things.

    The phrase “makes people happier” is too vague. If five people in need of new organs kill one person to harvest his organs does that make people happier? How do you measure happiness?

    And, on your view, why should the individual be concerned with being good if he can get away with evil? If stealing makes me happy and I know I can get away with it then what reason can you give me to act differently? Even if your system allowed us to label some things as good and other things as bad it doesn’t seem to provide any reason for why any individual person should care to be good.

    • Azkyroth, Former Growing Toaster Oven

      The phrase “makes people happier” is too vague. If five people in need of new organs kill one person to harvest his organs does that make people happier?

      ….how happy would YOU be knowing an innocent person was murdered for the piece of him you’re now walking around with?

      • http://biblicalscholarship.wordpress.com/ jayman777

        Azkyroth: “how happy would YOU be knowing an innocent person was murdered for the piece of him you’re now walking around with?”

        Are the living happier than the dead? If so, from the atheist utilitarian perspective (which I don’t share), I would be happier alive than dead. But this is not a matter of my personal happiness in such a scenario. There are certainly people out there who would feel little remorse in doing such a thing. Can the atheist utilitarian give them any reason to do otherwise?

        • Mark

          “Are the living happier than the dead? If so, from the atheist utilitarian perspective (which I don’t share), I would be happier alive than dead.”

          This is rubbish and overly simplistic. Of course the living are generally more happy than the dead but if I was in need of an organ and was given the option of getting one but an innocent person had to be killed for it then I wouldn’t dream of accepting it for a second. I would rather die than someone be murdered to compensate for my poor fortune in having dodgy organs. It also completely ignores the fact that if this behaviour were to be regarded as good and acceptable then we’d all be walking around living in fear of being murdered by organ harvesters, which is good for neither individual nor collective well-being. These questions aren’t anywhere near as complicated as some make them out to be.

        • Azkyroth, Former Growing Toaster Oven

          This is the equivalent of asking “If we came from monkeys, why are there still monkeys?”

          First, all people’s happiness is equally valuable, so focusing on “the individual” misses the point completely. Second, people, the world, and society do not work in the stupidly simplistic way your counter-conceits propose. Third, it’s tiresome having to respond to these idiotically misinformed, childishly simple canards every time the topic comes up. DO SOME FUCKING READING.

      • anat

        The question I’d ask is ‘how happy would you be living in a society where someone was rather likely to kill you for your organs?’ Do you really think that living with the fear of being killed for one’s organs can be offset by the small chance that one would be the one to benefit from harvesting another person’s organs?

    • invivoMark

      For any moral system, you have to initially agree on something by which you can ultimately judge an action or a rule. You can’t prove that this thing is the ultimate good; it’s simply considered to be so.

      For instance, if you formulate libertarianism as a moral code, then the ultimate good is the rights of individuals, and whatever actions or laws increase the rights of all individuals are considered good. You just have to assume that individual rights are inherently worth having. You could try to justify that by saying that the more rights people have, the happier they are, but then, damn! You’re turning your moral system into utilitarianism.

      So if you’re going to follow libertarianism, you have to assume that individual rights are in and of themselves always good, and if you’re going to follow utilitarianism, you have to assume that happiness is the ultimate good.

      Given that, let’s examine your hypothetical. If there were six people in the whole world, and sacrificing one of them would make the other five unambiguously happier, then yeah, utilitarianism would say that the sixth should be sacrificed.

      But that’s a naive view of utilitarianism. There aren’t six people in the world, there are billions. And human sacrifice doesn’t make everyone unambiguously happier, because every death deeply affects dozens or hundreds or thousands of people. Killing people makes other people unhappy. Moreover, almost everyone would live a more fearful and uncomfortable life if they knew that they lived in a society which could arbitrarily sacrifice any of its members.

      We like having rights. It makes us happy. And so a society that protects those rights is valued by a libertarian mindset.

      I’m curious about your first paragraph, though. Where/how do you think that moral laws can be “discovered”? Which moral laws have been discovered, which can be empirically shown to be true?

      • http://biblicalscholarship.wordpress.com/ jayman777

        invivoMark:

        For any moral system, you have to initially agree on some thing by which you can ultimately judge an action or a rule. You can’t prove that this thing is the ultimate good, it’s simply considered to be so.

        My point is not so much over what is good. Rather it is over why I should strive for the good (whatever that may be). I have a desire to be happy and therefore I have a reason to act in ways I believe will maximize my happiness. This is true regardless of whether we label happiness good or bad. My desire for happiness is a fact about the world. I know it through introspection and not through reasoned argument. But what is my reason for following the moral system proposed by JT when my happiness comes in conflict with another person’s happiness? If the atheist utilitarian can’t answer this question then his moral theory has little practical import for there is no reason to strive for the good.

        Killing people makes other people unhappy. Moreover, almost everyone would live a more fearful and uncomfortable life if they knew that they lived in a society which could arbitrarily sacrifice any of its members.

        Yes, there can definitely be side-effects and they need to be considered in the moral calculus. It makes sense for a society in general to forbid murder or theft. Yet, at the same time, it seems to make sense for the individual who has evil desires to fulfill those evil desires if he can get away with it.

        I’m curious about your first paragraph, though. Where/how do you think that moral laws can be “discovered”? Which moral laws have been discovered, which can be empirically shown to be true?

        Moral laws (if they are objective) would be discovered through reason or divine revelation (or both). Empirical observations may play a role in moral reasoning but are not the foundation of morality. A moral law cannot be shown to be true by running a science experiment.

        • invivoMark

          Divine revelation, my butt. There’s no such thing, and if there were, there would be no way to distinguish it from delusions and insanity.

          And according to your logic, that leaves us with pure reason as the only remaining method of determining a universal, empirical moral law. And I’ve never heard of a way to come up with an objective moral law using reason alone, without first assuming that something is inherently morally good, so we’re left with relying on such assumptions.

          So which assumption would you like to start with? Happiness is my favorite, because I like to be happy, and I consider a better world to be one in which more people are happy.

          My point is not so much over what is good. Rather it is over why I should strive for the good (whatever that may be).

          Because morally good things are morally good.

          But what is my reason for following the moral system proposed by JT when my happiness comes in conflict with another person’s happiness?

          Because making others happy is a morally good thing.

          Yet, at the same time, it seems to make sense for the individual who has evil desires to fulfill those evil desires if he can get away with it.

          Right. But that would be morally bad, because he’s making others unhappy.

          See, the entire point of talking about a system of morality is precisely because some people don’t want to behave in ways that others consider moral. I can describe moral systems all day long, but I’ll never change the fact that evil people want to do evil. And if you want to hurt others, nothing I can say will change that desire. At the same time, if you hurt others, I would say that your actions are immoral.

          • http://biblicalscholarship.wordpress.com/ jayman777

            invivoMark:

            Divine revelation, my butt. There’s no such thing, and if there were, there would be no way to distinguish it from delusions and insanity.

            You sell your abilities short. The first step would be to see if the alleged divine revelation were consistent with reality. If so, then it probably isn’t the result of delusions or insanity. The second step would be to see if there is a reason to believe the alleged divine revelation is from an omniscient source. You may believe no alleged divine revelation passes this test but you are probably smart enough to construct such tests and examine the evidence.

            And according to your logic, that leaves us with pure reason as the only remaining method of determining a universal, empirical moral law.

            No, you need to remove “empirical” from your sentence.

            And I’ve never heard of a way to come up with an objective moral law using reason alone, without first assuming that something is inherently morally good, so we’re left with relying on such assumptions.

            Someone else mentioned desire utilitarianism (aka desirism). Alonzo Fyfe runs a blog called “Atheist Ethicist” if you want more information. He starts with human desires and hypothetical imperatives to fulfill those desires. I have problems with desirism but something like it might be the best an atheist can offer.

            Another option is some form of natural law. This is mostly associated with theism but the Camels With Hammers blog tries to form an atheistic version of natural law. I have not read much of what he’s written on the matter but I believe he starts with human nature and what allows human flourishing.

            The important point is that you need to start with human needs, wants, desires, and the like to provide humans with a reason to act. It is reasonable to act to fulfill my desires. It is reasonable to act in such a way that I flourish. It is reasonable to act in such a way as to go to heaven. It is reasonable to act in such a way as to avoid hell. There is no reason for me to do something merely because you label it “good”.

            Because morally good things are morally good.
            Because making others happy is a morally good thing.
            Right. But that would be morally bad, because he’s making others unhappy.

            Your answers (as you seem to realize) are only persuasive to someone who already wants to do the good.

            I can describe moral systems all day long, but I’ll never change the fact that evil people want to do evil. And if you want to hurt others, nothing I can say will change that desire. At the same time, if you hurt others, I would say that your actions are immoral.

            One nice thing about Fyfe’s desire utilitarianism is that he realizes you have to change people’s desires and not merely their beliefs. People act so as to fulfill the strongest of their desires. If you change their desires you will change their behavior (even, often, when they can get away with evil). But the changing of desires involves praise, reward, condemnation, and punishment as opposed to just talk.

            But, since I’m a theist, you actually could talk me into doing the good by pointing to God’s commands and the rewards or punishments he has in store for me based on my behavior. And there’s no situation in which I will be able to hide from God’s eyes. Theism provides reasons for action that non-philosophers can easily grasp.

          • Mark

            “You sell your abilities short. The first step would be to see if the alleged divine revelation were consistent with reality. If so, then it probably isn’t the result of delusions or insanity.”

            More hogwash. I could be convinced that I’ve had a divine revelation that some frogs are green. Is this claim consistent with reality? Sure. It doesn’t make it any less delusional and insane that I’ve had a divine revelation though.

          • abb3w

            That the reasoning is easy to grasp doesn’t necessarily make the conclusions sound.

          • Mark

            Jayman, you’ve completely missed the point of a system of morality. You should do something that is considered “good” by a system of morality because it is considered good by a system of morality. That’s what systems of morality do: they define what you should do.

            abb3w is completely right, you’re making the is-ought fallacy.

        • abb3w

          Essentially, you’re raising Hume’s is-ought problem. The answer boils down to “ought” and “good” both involving definition of an ordering relationship over a set of choices, with an axiom to define either also defining the other. Thereafter, the answer to some of your questions mostly becomes “because it’s been defined that way; any other answer requires going back and changing definitions”.

          So, if you’re not using JT’s system, it’s because you’re working from some other axiom.

          • http://biblicalscholarship.wordpress.com/ jayman777

            abb3w:

            Essentially, you’re raising Hume’s is-ought problem.

            Only to the extent that it is a problem for JT’s morality, not morality in general.

            Thereafter, the answer to some of your questions mostly becomes “because it’s been defined that way; any other answer requires going back and changing definitions”.

            Not really. The is-ought problem is resolvable by noting what a person desires and what actions will fulfill those desires. Changing definitions does not change the matter.

          • abb3w

            ….kinda. The is-ought problem is somewhat resolvable by taking an axiomatic bridge, to explicitly define what sense of the word “ought” is being used. If you take the principle that “what is desired is better than that which isn’t, such that you OUGHT to do what is desired”, it can somewhat suffice as a bridge. (Essentially, a hedonistic morality.)

            However, that’s only “resolving” the problem just as much as designating any other ordering basis by arbitrary axiom would “resolve” it, such as “what god commands is better than what he forbids, such that you OUGHT to do what god has commanded”. (Essentially, an authoritarian morality.)

            The particular desire-based bridge also has a couple of other problems. It’s either entirely subjective, requiring a reference to whose desires in particular you’re talking about; or you need a mechanism in the axiom for resolving the ordering conflict when person X prefers A over B, but person Y prefers B over A. The former (in absence of any other rule that describes each individual’s ordering) seems to leave morality entirely subjective in basis; the latter runs into more subtle but serious problems via Arrow’s impossibility theorem.

            However, taking an explicit axiomatic bridge is only a resolution insofar as extending an inference system by an axiom regarding an incomplete proposition resolves the incomplete proposition: yes, it gives one answer, but not the only answer, and the original system remains incomplete.

    • http://www.facebook.com/profile.php?id=1017276335 Strewth

      How happy would you be knowing other people were eyeing your vital organs covetously and didn’t think there was any problem in taking them without asking your input on the matter?

      • PessimiStick

        Happier than if I were dead, certainly. The real question is how do you quantify the happiness of the 5 living people against the loss of happiness of the dead person. I’m not sure that’s actually doable.

        • Mark

          The fact that you would be happier than if you were dead is neither here nor there. The dead are not concerned with morality or happiness because they’re dead. The real question is whether you would be happier if you were not walking around fearing abduction at any moment by organ harvesters. I think the vast majority would answer “yes”. Which is the very obvious answer to why killing one person in order to harvest their organs for the benefit of more than one other person is immoral and, perhaps just as importantly, unsustainable.

        • Azkyroth, Former Growing Toaster Oven

          You don’t even have to quantify it to compare the severely guilt-impaired happiness of five people with the significant decrease in happiness of millions living in a society where being murdered for one’s organs is allowed to happen. Consequences aren’t limited to the most stupidly narrow and immediate ones you can think of.

          I call this strawman “Goldfish Utilitarianism.” People who invoke it apparently think humans also have four-second memories. (Yes, I know goldfish don’t either).

  • anteprepro

    Apparently “if it makes people happy” was the entirety of JT’s proposed moral system, as at least two True Believers have deduced. Obviously, it couldn’t just be one of the kinds of factors one might consider. Nope, JT presented the entirety of his “periodic chart of behavior” in one sentence, among the five paragraphs mentioning the nature of morality. “Example”? What the hell is that supposed to be?

    Good work, super sleuths.

  • abb3w

    An atheist conception that “a world with moral rules would be better” and a theist who “bases their morality on the commands of a god” actually both face the same basic problem. Basing morality on the commands of a god implicitly requires the premise “you ought to do what god says”. This theist concept of “ought” and the atheist concept of “better” both require establishing an ordering relationship over a set of choices — some axiom to indicate which bridge you are taking across Hume’s is-ought divide.

    The question of whether the deity the theist invokes actually exists is an additional problem, needed to keep their morality from collapsing into the trivial poset (all pairs of choices being incomparable). However, it’s not the BIG one.

    The big question is “Which axiom are you using to define what ordering relationship you’re semantically referencing when talking about ought-versus-oughtn’t and good-versus-bad”?

    • http://biblicalscholarship.wordpress.com/ jayman777

      abb3w,

      I’m working with what is sometimes called a hypothetical imperative. Here are some examples:

      (a) If I desire to feel satiated I ought to eat something.
      (b) If I desire to avoid pain I ought not place my hand on a hot stove.
      (c) If I desire to arrive at work on time I ought to leave home 30 minutes before work begins.

      This kind of imperative should be acceptable to both theists and atheists.

      The Christian believes God will reward those who follow his commands and punish those who violate his commands. Thus:

      (1) If I desire to be rewarded by God I ought to follow his commands.
      (2) If I desire to avoid God’s punishment I ought not violate his commands.

      Now no one desires to be punished (otherwise it wouldn’t be punishment) so all Christians desire to be rewarded/avoid punishment and therefore ought to follow God’s commands.

      • Mark

        That’s not a system of morality, it’s a series of logical if-thens predicated on absolute selfishness.

        • abb3w

          Actually, it’s a set of if-thens, not a sequence.

          I suppose there’s some selfishness in phrasing the ordering in terms of “I desire” rather than “it is good”, but that seems principally just a quasi-arbitrary choice of word for how one refers to such orderings. As such, it seems perfectly cromulent.

      • abb3w

        I’m fine with that. Each one involves deriving additional “ought” propositions, given an initial one.

        From my stance, though, the big step is the one you seem to be sweeping under a rug of “it’s obvious/intuitive that…”. For a conclusion to be inferred, you need an initial ordering basis, such as “satiation is better than hunger”, “avoiding pain is better than seeking pain”, “keeping my job is better than losing it” (with additional empirical is-propositions involved), et cetera.

        The case of “no-one desires to be punished” packs the ordering into a deeper implicit assumption, in that another agent changing your environment in way A (smiting you with boils) may be considered worse than changing your environment in way B (causing your business to be financially successful), such that the former is “punishment” and the latter “reward”.

        • http://biblicalscholarship.wordpress.com/ jayman777

          abb3w, if I am reading you correctly, I agree that a person’s different desires have different weights. We will act so as to fulfill our strongest desires. Thus, if my desire for satiation is stronger than my desire to be thin I am liable to eat too much and fulfill only the first desire. And yes, I do think it is self-evident to the individual what his desires are.

          • abb3w

            Actually, I’m referring to morality in a social context, and the problem of balancing at the social level when person A and person B have disagreements.

          • abb3w

            Whoops, sorry, wrong comment. I’m more examining the relation of “desire” and “good” relative to “ought”, for what you were replying to.

  • http://biblicalscholarship.wordpress.com/ jayman777

    Mark:

    Jayman, you’ve completely missed the point of a system of morality. You should do something that is considered “good” by a system of morality because it is considered good by a system of morality. That’s what systems of morality do: they define what you should do.

    I agree that a system of morality should define the good but I think it is naive to believe that merely defining the good will cause people to do the good. As invivoMark wrote: “I can describe moral systems all day long, but I’ll never change the fact that evil people want to do evil.” Now I proposed using praise, reward, condemnation, and punishment to change evil desires into good desires. What is your suggestion for getting people who want to do evil to want to do good?

    That’s not a system of morality, it’s a series of logical if-thens predicated on absolute selfishness.

    Do you agree they are reasons for action though? If you are going to provide a reason for action you have to work with the desires the individual actually has (whatever they are). Why should an evil person act so as to conform to your system of morality (whatever it is)? Just telling them “Because it’s good” isn’t going to cut it.

    • invivoMark

      I apologize, those comments were me, not Mark. I post on two computers, and one hadn’t gotten the message that I’d changed my handle.

      I agree with using reward and punishment for getting evil people to do less evil and more good. That’s what laws are for. Disagreeing with that simple principle is equivalent to being an anarchist.

      A system of morality tells us what is good, not how to achieve it.

