Is there more to life than happiness?

This is a belated follow-up to my post “On completely misunderstanding the threat of superhuman AI.” There, I made the point that it seems like you could maximize pleasure by hooking people up to drugs or electrodes, and the fact that we wouldn’t want that shows that pleasure isn’t all we care about.

There I only briefly mentioned that something similar might be true of happiness, but that’s actually a really important point. If you read Eliezer Yudkowsky’s short story “Failed Utopia #4-2,” part of the implied message seems to be that there’s more to life than happiness. We’re led to believe the AI is right when it claims that as a result of its actions, humans will be “considerably happier starting around a week from now.”

At the time, though, I wasn’t quite sure that was really Eliezer’s point. Recently, however, I’ve been reading through his old blog posts systematically, and came across a post on doublethink, specifically the possibility that if all you care about is happiness, it might be rational to be irrational. Most of the post is about whether it’s really possible to do that, but Eliezer also mentions that, on his view, “there is more to life than happiness.”

And that seems right. Religion is positively correlated with happiness, not as strongly as some things, but still correlated. Yes, correlation is not causation, but suppose future research found evidence for causation. There are plenty of happy atheists around, and other correlations are stronger, so we’re never going to discover that religion is necessary for happiness. But what if you knew for a fact that it helped? Atheists, would you therefore want to be religious? I wouldn’t, even if Eliezer were wrong about doublethink and the necessary level of it were an option for me.

This issue is somewhat tricky, though, because our concept of happiness is morally (or quasi-morally) charged in a way that our concept of “pleasure” isn’t. From a post on the Experimental Philosophy blog titled, “Could Paris Hilton Ever Be Happy?”:

Luke Misenheimer, Joshua Knobe and I have recently been doing some research on attributions of happiness (much like Dan Haybron and Sven Nyholm). In particular, we suspected that there would be an evaluative component in people’s attributions of happiness that was totally absent from their attributions of unhappiness.

To investigate whether or not unhappiness had an evaluative component, participants were told about a woman named Maria who is described as a caring individual with a great family life and a variety of meaningful friendships and projects. Nonetheless, she feels terrible all the time and regards her life as fundamentally a failure.

Participants were then asked whether they agreed that Maria is unhappy. Not too surprisingly, they agreed that Maria was unhappy despite having a good life. What this seems to show is that a person can be unhappy whether or not they have a good life.

We took a really similar approach to testing happiness. Maria is described as a vapid individual who has no real goals beyond going to parties and gaining greater social status. Nonetheless, she enjoys her day-to-day activities and feels like there isn’t anything she would rather be doing with her life.

Participants were then asked whether they agreed that Maria is happy.

Surprisingly, participants disagreed! They reported that, despite Maria’s positive mental states, she wasn’t happy. This seems to suggest that the ordinary concept of happiness has an evaluative component that the concept of unhappiness does not.

Curious to hear people’s thoughts on this. I wonder if maybe our concept of happiness is unusually fuzzy even by the standards of human concepts, so there may be no uniquely right way to, say, program an AI to make people happy.

  • Kraza Von Kalifornen

    I’d say that *everyday* happiness is definitely not sufficient. I also value drama, general emotion, defiance, and succeeding at major goals.

  • Peter Moritz

    I know what pain is. What exactly is “happiness”? A fool’s paradise?

    I think happiness, fuzzily defined as some feel-good thingy, is for idiots who are unaware that their happiness is usually connected to the suffering of others – see the recent catastrophe in Bangladesh, where the “happiness” of bargain hunters is directly related to the deaths of hundreds.

    I feel satisfied if I have achieved a goal, and I feel content and safe in the relationships within my family. I feel satisfied after a good day at the job, having done my duty or gone beyond it.

    If happiness is a description of those feelings, it is way too imprecise to be useful.

  • Pulse

    I’m going to pull in some terms I learned from Alonzo Fyfe on his blog on desirism. You may or may not agree with his overall conclusions.

    Desire fulfillment is the state of affairs in which the object of desire (that which is desired) is realized in actuality. If a man desires a million dollars, this would be the state of affairs in which he actually possesses a million dollars.

    Desire satisfaction is the state of affairs in which the desirer believes the object of desire is realized. If a man desires a million dollars, this would be the state of affairs in which he believes he possesses a million dollars, regardless of whether this is actually true.

    Fyfe argues that people actually seek out desire fulfillment (the sort of fuzzy happiness described in the post) rather than settling for desire satisfaction (mere pleasure). If a man desires a million dollars, he would prefer to actually possess a million dollars and not know it rather than to believe he possesses a million dollars when he actually doesn’t. Of course, the best scenario would involve both: actual possession and belief of possession, desire fulfillment and desire satisfaction, happiness and pleasure.

    If a superhuman AI understood this distinction (and cared at all for human desires), then it would encourage and/or enforce activities that generally tend to maximize desire fulfillment (people actually getting what they really want) rather than simply plugging humans into pleasure machines.

    Oh, and the reason that the second version of Maria isn’t happy even though she is getting everything she (shallowly) desires is that (according to Fyfe) she isn’t getting what she should desire. But that’s a different conversation.
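
    To make the fulfillment/satisfaction distinction concrete, here is a rough Python sketch. It is only a toy illustration of the definitions above, not anything from Fyfe; the names (Desire, is_fulfilled, is_satisfied) are invented for this example. The point is just that fulfillment is a fact about the world, satisfaction is a fact about the desirer’s beliefs, and the two can come apart in either direction.

        # Toy illustration of the fulfillment/satisfaction distinction.
        # All names here are invented for this example.
        from dataclasses import dataclass

        @dataclass
        class Desire:
            description: str
            obtains_in_world: bool    # the object of desire is actually realized
            believed_to_obtain: bool  # the desirer thinks it is realized

        def is_fulfilled(d: Desire) -> bool:
            # Desire fulfillment: the desired state of affairs actually obtains.
            return d.obtains_in_world

        def is_satisfied(d: Desire) -> bool:
            # Desire satisfaction: the desirer believes it obtains, rightly or wrongly.
            return d.believed_to_obtain

        # The million-dollar example: the four ways the two notions can combine.
        for world, belief in [(True, True), (True, False), (False, True), (False, False)]:
            d = Desire("possess a million dollars", world, belief)
            note = "  <- pleasure-machine case" if belief and not world else ""
            print(f"fulfilled={is_fulfilled(d)}, satisfied={is_satisfied(d)}{note}")

    On this toy model, a pleasure machine maximizes is_satisfied while ignoring is_fulfilled, which is exactly why an AI that cared about fulfillment would behave differently.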

  • JohnH2

    It is like no one here has ever even heard of Plato’s Apology or Aristotle’s Nicomachean Ethics. In English we conflate quite a few ideas in the term “happiness”: it can mean anything from transitory physical pleasure, to the feeling of pleasure at more emotional or mental tasks, to a state of general well-being, to the real and lasting joy that comes from being and doing good (eudaimonia).

    A state of well-being cannot be said to be true happiness if the future holds a collapse of that well-being. This is the problem with the second Maria: she may find pleasure in the moment and have a state of well-being, but that state, in history and in most everyone’s experience, is transitory. Twenty years from that point she will not be able to gain the same pleasure from her past existence, and she will not be able to continue that existence unabated.

  • eric

    It’s not just fuzzy, but it’s probably (in part) a relative quantity, meaning that some of our happiness derives from how we measure ourselves against others. An AI cannot make everyone’s life above average; that’s just impossible. Maybe it can run a deep deception and make everyone think they are better off than their neighbors, but even that sounds very difficult as long as humans are interacting with each other.
    Another problem is that happiness may be partly related to process or action rather than just achievement; we may derive happiness from earning some goal, rather than just having it. An AI can grant everyone a Ph.D., but it is hard to conceive of it being able to give everyone the feeling of having earned one (except through deep deception).

  • eric

    I couldn’t figure out why I (also) thought 2nd Maria is unhappy, but you put your finger on it by mentioning age. I think part of the reason people want to say 2nd Maria isn’t happy is that we expect she will not be happy with 2013 Maria’s lifestyle in 2023. IOW, we are not really answering the question in the limited sense of “right this minute, is she happy?” but in the broader sense of “do you think her current actions will help to result in long-term happiness?”
    I can easily answer “yes” and “no” to those two questions, respectively. And I suspect my gut response of “no” to the original hypothetical question comes because I interpret it more as asking the latter, and not the former. To make it even more painfully clear, if someone asked me “if 2nd Maria gets hit by a bus tomorrow, would you say that she was happy in her last weeks of life?” I’d have no problem answering yes.

  • Alice

    Well, human beings are social; we have a deep need to help others and be productive members of society instead of just resource consumers. If our ancestors had been 100% self-absorbed, then the human race could not have survived.
    Now it is difficult to say how altruistic we truly are. Do we help others solely for their benefit, or because when we help others we feel good about ourselves, people might praise us, and the person we helped may return the favor someday? I think human beings have a combination of selfish and altruistic impulses.
