The Happiness Machine

As any regular reader of Daylight Atheism knows, the topic of morality is a major concern of mine. In essays on Ebon Musings, I’ve sketched out a secular moral theory I call universal utilitarianism. Here on this site, I’ve written in the past about the roots of this morality and the virtues that can be derived from it, as well as musings on what UU has to say about some controversial moral topics. In 2009, I plan to take these explorations in a new direction.

This year, I intend to write some posts further detailing universal utilitarianism and how it can respond to difficult ethical dilemmas – not the practical dilemmas that we encounter in daily life, but thought experiments specifically dreamed up to stretch moral philosophies to the breaking point. If UU can survive being tested in this way, then I think we’ll have greater reason for confidence that it can cope with everyday issues. I’ve already written about one such problem, the “trolley problem”, in “The Doctrine of Double Effect”. Today I’ll confront a different one.

Today’s post concerns the Happiness Machine, a hypothetical invention that produces pure pleasure for the user in unlimited quantities – say, an electrical implant that stimulates the brain’s pleasure centers, producing a feeling of bliss at the push of a button. It’s undeniable that universal utilitarianism counsels us to seek happiness as the highest good. If we follow UU, then if this machine is invented, should our highest goal be to hook ourselves up to it for the rest of our lives?

Lynet, of Elliptica, has an answer in Challenging the Paramountcy of Happiness:

I wouldn’t. It would be like dying. Even with heaven included, I don’t want to die.

I suspect many of my readers share this intuition, as I do myself. Intuitively, there’s something deeply repellent about this scenario, but what is it, and can UU justify this intuition despite its promotion of happiness as the highest good?

The first thing to note is that the Happiness Machine is not an entirely hypothetical scenario. It strongly resembles a real-world phenomenon: the use of narcotic drugs for pleasure. And, if such a machine were ever invented, we can be fairly confident that users would end up in much the same way as addicts of these drugs.

First of all, what would keep users of this machine alive? If the Happiness Machine works as advertised – if it truly replaces all suffering with total contentment – then it will make you oblivious to your need for the necessities of life. We satisfy our bodily needs, in the end, because it causes suffering if we do not. If they cannot feel this suffering, users of the Happiness Machine will soon die of starvation and dehydration and miss out on all the further happiness they might have had in a longer life. Clearly, this is not a good outcome.

But if that problem could be solved, another would rapidly follow. Pure sensory pleasure will soon become insipid and unsatisfying. The human mind habituates: if you constantly experience a high level of pleasure, it does not remain equally pleasurable indefinitely. Rather, it soon becomes the base level against which new experiences are judged. The same stimulus produces a steadily diminishing reward. If you use the Happiness Machine often, soon it won’t be a source of bliss, but something you’ll need to use constantly just to function, and ordinary activities without it will become unbearable. Like any other drug addict, you’ll experience a brief period of pleasure, but it will be followed by a much longer period of misery and dependency. In the long run, it will cause far more suffering than happiness, and might even permanently impair the brain’s capacity to take pleasure in anything else.
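The dynamics of habituation can be sketched with a toy model (purely illustrative, not a claim about actual neural mechanics; the function name and the adaptation rate are my own invention): treat perceived pleasure as the raw stimulus minus a baseline that drifts toward whatever level of stimulation has become normal.

```python
# Toy sketch of hedonic adaptation: a constant stimulus yields a
# steadily shrinking perceived reward as the baseline catches up.

def habituation(stimulus: float, steps: int, rate: float = 0.3) -> list:
    """Return the perceived reward at each step for a constant stimulus.

    `rate` controls how quickly the baseline adapts; 0 means no
    adaptation, 1 means instant adaptation.
    """
    baseline = 0.0
    perceived = []
    for _ in range(steps):
        # What you feel is the stimulus relative to your new normal.
        perceived.append(stimulus - baseline)
        # The baseline drifts toward the stimulus level each step.
        baseline += rate * (stimulus - baseline)
    return perceived

rewards = habituation(stimulus=10.0, steps=5)
# Each press of the button feels weaker than the last:
# 10.0, 7.0, 4.9, 3.43, 2.401
```

The same constant input decays toward zero perceived reward, which is the paragraph’s point: to keep feeling the original bliss, the user would need an ever-stronger stimulus.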

And what about the potential loss of independence? If someone controls the master switch for all the Happiness Machines, or if they hold the patent and are the only ones who can repair it, they will have a population of slaves. The addiction which such a machine would produce would render its users utterly dependent on whoever can supply that continued jolt of pleasure. To anyone who values freedom and autonomy, the thought of being controlled by another in this way ought to be intolerable, and again, a sure pathway to a life of misery and servitude.

The only way to avoid habituation and dependency is to live a life with not just one source of pleasure, but a variety of meaningful pursuits. The most enduring and fulfilling kind of happiness is the kind that has this rich texture of knowledge and experience, the kind that only comes from interacting with the world. (If nothing else, the more you know about what’s out there, the better a position you’re in to appreciate the things you really like.) Running a wire into the pleasure neurons of the brain is a poor substitute.

Finally, excessive use of the Happiness Machine undermines the development of empathy that UU holds as the highest moral virtue. After all, UU does not counsel us to only seek pleasure for ourselves, but to live in the world and be the source of happiness for others, to work to defeat suffering and improve the lives of our fellow humans. Someone who is anesthetized by this machine, cocooned in a blissful coma and deaf to the cares of other people, is not acting in accord with the principles of UU but against them. Like a greedy millionaire who hoards his wealth and refuses to give to charity, addicts of the Happiness Machine are not doing good but merely indulging their own selfishness.

About Adam Lee

Adam Lee is an atheist writer and speaker living in New York City. His new novel, Broken Ring, is available in paperback and e-book. Read his full bio, or follow him on Twitter.

  • Valhar2000

    You assume that the reason stimuli become less pleasurable and interesting is that the pleasure centers they affect become less sensitive; are you sure this is the case? It is also possible that the other areas of the brain or nervous system whose job it is to receive these stimuli and excite the pleasure centers are the ones that become less sensitive to any particular stimulus as time goes on. If this were the case, direct stimulation of the pleasure centers would provide maximum happiness at all times, until either the machine or the brain broke down.

  • penn

    Wait, you start with a magical Happiness Machine in a thought experiment and then you get bogged down in the practical details. My Happiness Machine is given out for free and we have more than enough for the entire population of earth. It also supplies you with food, water, and exercise. Each individual user has sole control over it. It runs on Mr. Fusion, so that it never needs additional energy, and it never needs any repairs or maintenance. It is non-habit forming, and re-calibrates your pleasure awareness downward while you sleep, so that the subjective reward never diminishes.

    Now, would you endorse Happiness Machine 2.0?

  • http://www.myspace.com/driftwoodduo Steve Bowen

    This thought experiment and its consequences depend a lot on just how good the happiness machine is. If we imagine a scenario where the machine really does deliver absolute (and un-diminishing) bliss while maintaining the body’s functions 24/7, a lot of the arguments above fail. After all, you would never have the “come down”, never care about the suffering of others or the loss of independence, so why wouldn’t you enter the machine?

  • vgtr

    I would claim that a true hypothetical Happiness Machine would compensate for the brain’s desensitization to stimulus somehow; claiming that the Happiness Machine would eventually stop giving happiness is just a convenient way to evade the problem.

  • http://whyihatejesus.blogspot.com OMGF

    First of all, what would keep users of this machine alive? If the Happiness Machine works as advertised – if it truly replaces all suffering with total contentment – then it will make you oblivious to your need for the necessities of life. We satisfy our bodily needs, in the end, because it causes suffering if we do not. If they cannot feel this suffering, users of the Happiness Machine will soon die of starvation and dehydration and miss out on all the further happiness they might have had in a longer life. Clearly, this is not a good outcome.

    I’m not so sure it’s clear that this is not a good outcome. It’s a pragmatic choice of complete happiness now, for however long it lasts, versus a mixed bag of happiness or unhappiness that’s completely unknown for an unknown amount of time. Even if you did starve to death, you’d be happy all the way to the end.

    Pure sensory pleasure will soon become insipid and unsatisfying. The human mind habituates: if you constantly experience a high level of pleasure, it does not remain equally pleasurable indefinitely. Rather, it soon becomes the base level against which new experiences are judged.

    I think this objection is defeated by the very premises of the thought experiment, that the machine can provide unlimited pleasure. If one continually needs more and more pleasure, the machine can provide it, as is stated in the assumptions that make this thought experiment difficult to deal with in the first place.

    I think your last point is really the best one to use here, that this would make us all self-absorbed, greedy, and self-indulgent. I think that UU does hold up against this thought experiment; I just see some areas that either need to be abandoned or should be better thought out.

  • Steven

    Spider Robinson explores the idea of direct pleasure-centre stimulation (“wireheading”) to good effect in his science-fiction novel “Deathkiller”. He comes to much the same conclusion that humans are not really evolved for continuous pleasure. It makes joy a lot sweeter if you know about despair as well.

  • velkyn

    Sounds like Larry Niven’s “Tasp”, an electrical current right to the brain (Larry Niven is an SF writer). He wrote that it would be rather self-selecting as to who would use it, and those using it wouldn’t be very likely to pass on their genes, being too wrapped up in the tasp to want anything: sex, or, at the end point, food.

  • Chris Parra

    What about computers? Couldn’t those be considered “Happiness Machines”? After all, internet porn stimulates the pleasure centers of the brain, and other areas, doesn’t it?

  • David Ellis


    I’m not so sure it’s clear that this is not a good outcome. It’s a pragmatic choice of complete happiness now, for however long it lasts, versus a mixed bag of happiness or unhappiness that’s completely unknown for an unknown amount of time. Even if you did starve to death, you’d be happy all the way to the end.

    True, but happiness, while one of the highest goods, is not the only one I value.

    For any who haven’t read it, I recommend the science fiction novel BLINDSIGHT by Peter Watts. It deals with a lot of other territory as well (most of it of great philosophical interest), but one of its subplots is about the protagonist’s mother having entered a program in which she is put into a happiness machine—her body lying as if in a coma while her brain is hooked to a computer creating a virtual world taking whatever form the person wishes.

  • http://whyihatejesus.blogspot.com OMGF

    True, but happiness, while one of the highest goods, is not the only one I value.

    Hence the reason why I believe that Ebon’s last point holds the most weight.

  • mikespeir

    How would this play alongside the theist’s contention that without evil (pain, sorrow, etc.) we could never know good (comfort, happiness, etc.)?

  • TJ

    her body lying as if in a coma while her brain is hooked to a computer creating a virtual world taking whatever form the person wishes.

    That’s the variation that came to mind for me. What if the happiness machine, instead of stimulating the pleasure center directly, stimulated the senses in such ways as made the recipient feel intense pleasure.. whether that was morally and ethically guilt-free and totally safe sex with a different partner five times a day (with no chafing!), or (barely) climbing to the top of Everest, or exploring the mountains of Mars, or conversing with dead relatives or great philosophers of the past, or getting a personal physics lecture from Stephen Hawking (walking around–heck, I think Stephen Hawking might sign up for that one)? What if, further, it could accelerate perceived time so that you could live and experience a hundred (different) lifetimes in a year? You could learn Chinese by being born a Chinese emperor and living that life (all in a day or two). Or spend a lifetime as another gender or species (real or imagined)? And if these were Happiness Machines 2.0, free, for everyone, and they took care of your body, too.. would that be a bad thing?

  • prase

    I absolutely agree that happiness machines are not the ideal goal of progress. What I don’t agree with is that this somehow follows from UU. It seems to me that, on the contrary, any form of utilitarianism, when applied consistently, must endorse at least some kind of happiness machine (HM in the following).

    As I understand the post, your objections against HMs are:

    1) The HM deprives us of even the basic survival instincts.

    2) Constant happiness is impossible, as the mind adapts to encountered sensations.

    3) Addiction to the use of HM makes us vulnerable to manipulation.

    4) HMs endanger progress (since they take away all our motivation for research) and therefore reduce the chance of better happiness maximisation due to more advanced technologies.

    Honestly, all of these in a way resemble some of the 19th-century Luddite arguments against the use of machines in industry. In particular,

    1) and 2) are instances of the argument from lack of imagination. As for 1), present drugs act in this way, but that in no way means that any drug or brain machinery has to have such effects. In fact, it is not so difficult for me to imagine being happy and still wanting to survive. I don’t eat only when I suffer from hunger. Moreover, people are known to deliberately end their lives because of dire depression, but I have never heard (OK, anecdotal absence of evidence) of suicides resulting from excessive happiness. Drug users often die, but mostly after passing through a period when they are hardly happy at all, which connects to argument 2)…

    …which is also unfounded. There is a good and simple evolutionary reason for our minds to become satisfied/bored by sensations that initially make us happy. But why do you think that people can never overcome this adaptation? We are more powerful than evolution because we can plan ahead. I see no evidence for saying that the brain cannot be modified to perceive constant, unfading happiness. After all, unfading pain is clearly possible (as there is no hard evolutionary reason against it), so it is not true that all sensations must expire in time.

    4) is very similar to 1); it by no means follows that happier people won’t do research.

    3) is a serious argument against the HM, but I can’t figure out how to arrive here from UU. If happiness is the ultimate goal (possibly the only important goal, as I understand how UU is defined), what does it matter whether you are manipulated or not?

    Or more generally. You postulate

    Always minimize both actual and potential suffering; always maximize both actual and potential happiness.

    Now, maybe I should leave aside the question of what “potential happiness” is, even if I think it would deserve more elaboration when it plays so central a role in your philosophy. But still, how is happiness counted? Is it better to have 2n happy people than n equally happy people, all other things being equal? If so, is it better to have 2n happy people (n large) and one unhappy person than n happy and none unhappy? Shall we care more about the happiness of presently living people than about people who are not yet born, even when it’s still not known whether they will be born? You don’t seem to address such questions very precisely -

    I agree that universal utilitarianism requires human beings to think for themselves…

    Not that I disagree with the need to think for yourself, but one should keep in mind that when a moral philosophy doesn’t give enough axioms, people will substitute them with their own. And it couldn’t then be said that their conclusions follow from the philosophy.

    To be more concrete, I don’t believe that an alien given UU’s axioms and otherwise lacking standard human values would arrive at the conclusion that people should investigate nature, value science and knowledge, think critically, etc. I would rather suspect it would come up with some version of a happiness machine. And it seems to me that your derivation of the desirability of knowledge was possible only because you knew beforehand what you wanted to derive, thus rationalising the argument. That is not, in my opinion, characteristic only of UU; all philosophies which build on a single value have this fault. Happiness is not the only independent value I have.

    Well, this comment appears rather long and I apologise for that. I have always wanted to post here some longer criticism of UU and this was, I think, a good opportunity. By the way, please don’t take my critique too seriously; I still think UU is a rather good philosophy compared to the others.

  • Mathew Wilder

    I have never shared your (Ebon) or Nozick’s intuition that the Happiness Machine is bad.

    I mean, if you die because you care about the pleasure more than food, so what? At least you’re happy til the end.

    In some ways this is similar to the Matrix, and as cool as that movie is, I never really got wanting to leave it. I mean, the “real” world is a pretty shitty place in that movie. Why should we think the Matrix is any less real in terms of experience?

    My intuition is that even those who say they wouldn’t choose to plug in, if exposed to the Machine, would quickly change their minds. I vaguely recall some classic psychology experiments with pigeons where the pigeons ended up choosing the pleasure stimulator over food. I think the same would happen to us.

  • Christopher

    You know, this “Happiness Machine” scenario resembles a short story I wrote in college for the campus magazine – in it, a man is diagnosed as being “overly cynical” (a mental disease in this fictional society) and is given an implant that stimulates dopamine production whenever “negative” thoughts (read: any thought that something is absurd or unpleasurable) are detected.

    For a while, our nameless protagonist does well with the new implant, but he becomes tired of this constant pleasure and begins mutilating himself to feel discomfort again – his sister tries to stop him and he ends up killing her whilst attempting to escape her. After this, he experiences a brief moment of terrible guilt and pain, followed by pure ecstasy due to the implant’s effects. The end result: he learns to associate death and suffering with pleasure and goes on a shooting rampage to get that high again, until the police gun him down.

    I was inspired to write that as a commentary on society’s over-medication after a friend of mine began to experience thoughts of suicide from a medication that was supposed to relieve her depression symptoms – a case in which the “cure” was far more damaging than the disease: which is what I think a technology such as a “Happiness Machine” could easily become, especially if it falls into the hands of a deranged society that sees “unhappiness” as a mental illness…

  • Alex Weaver

    I don’t know. I’ve never been of the opinion that an ethical philosophy has to be able to answer the extreme-dilemma situations to be useful. In cases like this, it seems like an argument akin to claiming that because Newtonian mechanics break down at extremely high speeds or gravitational fields, they therefore are unreliable for designing, say, car engines. Other dilemmas make me think of “well, what if you found a massive object that repelled other masses?” type questions.

  • Polly

    In some ways this is similar to the Matrix, and as cool as that movie is, I never really got wanting to leave it.

    Neither did I, with the exception of the CONTROL issue. The bots could inhabit your avatar at any moment of their choosing or kill you at will. That’s the only downside I could see. A matrix with no central intelligence dominating might be pretty great. But, exploration of the unknown would be ruled-out so that would still be a downer.

    I think the instinct that repels us from the HM is not UU, but the moral equivalent of CHEAP GRACE. You get all the happiness and joy of accomplishments, relationships, and exploration without “actually” engaging in those things, without engaging life.

    CHEAP THRILLS FOREVER, is the best that can be said for such a machine – definitely NOT happiness.

  • exrelayman

    Just congrats on a very stimulating post. Some very clear thinkers have responded with excellent points also. I’m with the pigeons per previous comment. I do so wish we had at least a perfect pain buster machine for those who suffer but our laws prevent them from choosing to die. Christians think Jehovah will be this machine for them in the next life. Maybe they too are with the pigeons.

  • http://goddesscassandra.blogspot.com Antigone

    The only thing I can see is that exceptions already suggest themselves, first and foremost in my head those who are terminally ill or depressed. For them, it seems like it would be excellent to hook themselves into these machines and live out the rest of their days happy, instead of in pain.

  • Leum

    What about quality? The happiness machine you describe provides a sensory happiness, not an intellectual or emotional one. If we accept J S Mill’s maxim that it’s better to be Socrates dissatisfied than a pig satisfied, then such a happiness machine becomes undesirable because it only provides a basic sort of happiness.

    However, suppose the machine triggered the full range of happiness-inducing effects, including awe, intellectual fulfillment, and love? Such a machine would probably be indistinguishable from real experience, to the point where it would make little difference whether you were in the machine or out of it.

    I don’t know. I’ve never been of the opinion that an ethical philosophy has to be able to answer the extreme-dilemma situations to be useful. In cases like this, it seems like an argument akin to claiming that because Newtonian mechanics break down at extremely high speeds or gravitational fields, they therefore are unreliable for designing, say, car engines. Other dilemmas make me think of “well, what if you found a massive object that repelled other masses?” type questions.

    Really good point. Just as extreme cases make bad law, it’s very possible that extreme dilemmas make bad morality.

  • bbk

    Ebon, you know I’m a pain in your hide and always criticize ethics systems based on altruistic morality. It never sat well with my own intuition or my understanding of game theory. I suggest picking up the newest copy of Skeptic magazine, because there’s a really interesting article called Doubting Altruism which highlights some recent research on this topic.

    I think in addition to trying this mental exercise against utilitarianism and your own UU, you should also try it against some other frameworks. I highly recommend evaluating natural ethics (not natural law ethics – search for Tullberg).

    One of my main concerns with any ethics system is outcome. I treat ethics the same way that I treat physics or economics. The biggest value of an ethics system is not its elegance or ideals, but its predictive value over our behavior. If an ethics system is to guide our actions, then the real-life outcome of those actions had better be predicted accurately by that framework. If, on the other hand, the ethical framework can be gamed and its adherents will be mistreated, then there is something fundamentally flawed with that system.

    In that light, I think that the Happiness Machine is quite problematic for UU, to the point that the goals of the system begin to contradict themselves and require little handicaps to be added here and there to the machine in order to bring it down.

  • http://rgzblog.blogspot.com rgz

    Firstly, narcotic tolerance is a chemical mechanism and is powerless against electrical stimulation of the pleasure center. Also, it’s impossible for it not to cause instantaneous addiction; if the machine is also life-supporting, no one who has ever used it even once will want anything else.

    Now, there is a method of effectively ending all suffering, even the anguish of leaving those behind you broken: suicide. Suicide is very similar to this Happiness Machine and has indeed solved many problems, but those who took this route did not reproduce, or inclined their offspring not to reproduce; as a result, people developed a general aversion to suicide.

    In the same way, I don’t think we have a choice whether to use the machine or not. We don’t need a reason not to. If some of us do use it, the next generation will be even more averse to it: simple, irrational evolution.

    When Dawkins introduced the concept of memes, he placed it in the higher category of “replicators” alongside genes; he also said that genes promote behavior that promotes the spread of those genes. If we raise the concept of the replicator up to the human level, we can say that people promote the behavior of people like themselves.

    I have a completely empathy-based theory of morals.

    Actually I’m reposting this to my blog later with some expansions.

  • David Ellis


    How would this play alongside the theist’s contention that without evil (pain, sorrow, etc.) we could never know good (comfort, happiness, etc.)?

    Even if it were true (which is far from obvious), we only have to ask ourselves whether we think our children should be raped or get leukemia in order to better appreciate the goods in life, as opposed to, say, experiencing a broken heart or getting a cold, to see how poorly that theodicy deals with the sorts of extreme suffering to be found in the world.

  • http://www.blacksunjournal.com BlackSun

    I think with this post you are on a slippery slope towards Puritanism and neo-Prohibition. If you stated it a little differently, you would avoid this trap. I would reframe it in terms of a conflict between short-term and long-term happiness. There is a trade off between the two, but each person must determine where they wish to fall on that spectrum. I can think of no decision more personal or private.

    As another commenter pointed out, what about Happiness Machine 2.0? You claim that any level of pleasure would become “boring” and require ever-increasing levels of stimulation. This is similar to many of the arguments against drug use, and also echoes your argument against immortality. Clearly, you have a specific idea of what constitutes a valuable human life. I may tend to agree with you, but that gives me no right to declare that those values should govern everyone else’s experience. When you frame it as a question of Universal Utilitarianism, you implicitly come down on the side of favoring the interests of society over the individual–which always comes down to an argument for some form of coercion.

    You make a number of assumptions, the first being that experiencing perpetual happiness on a brain level would “burn out” the brain’s pleasure centers. This may be true with methamphetamines and such, but what if the hypothetical happiness machine did not damage the brain? What if someone lived an otherwise normal life and wanted to hook up to the machine for a few hours a day?

    Your argument implies that people are essentially powerless against pure pleasure. It’s a typical justification for draconian drug laws. But there are large numbers of people who have demonstrated that they can safely use recreational drugs with no ill effects. See Saying Yes by Jacob Sullum.

    My personal view is that creativity is the source of the highest pleasure–not just high serotonin or dopamine levels, but the full range. A person needs many brain states, including suffering, to fuel their art. Still, recreational drugs have also “given the muse” to many artists.

    In short, like other drugs, the existence of a happiness machine would represent both a challenge and an opportunity to see if people could reap the benefits without hurting themselves. The outcome would be Darwinian: those who could not cope with control over their own pleasure would ultimately perish. Realizing this, many people have already figured out a strategic approach that allows them to live full lives, partake of mind-altering drugs, and not be reduced to victims or addicts.

    If everyone moralized about pleasure in the way you are doing, it would lead to political demands for another round of prohibitions against artificial brain stimulation that doesn’t meet “acceptable social criteria.” No thanks. We’ve had enough of that paternalistic drug-war folly. Decisions about how and when to stimulate our brains absolutely must be left up to the individual.

  • David Ellis


    The biggest value of an ethics system is not its elegance or ideals, but its predictive value over our behavior. If an ethics system is to guide our actions, then the real-life outcome of those actions had better be predicted accurately by that framework. If, on the other hand, the ethical framework can be gamed and its adherents will be mistreated, then there is something fundamentally flawed with that system.

    One should not assume altruism means stupidity. Caring about others does not imply, as so many critics of altruism seem to assume, that one tolerate individuals “gaming” the kindness of others. In a society of smart altruists the egoist who tries to game them is not going to benefit—he will simply be a pariah.

  • http://www.causecast.org/ Ryan Scott

    Sounds like the Singularity http://singinst.org

    Lots of people will do this and theoretically not reproduce. Natural selection will favor those who don’t want to hook themselves up to the machine, and furthermore, memes against it (oh, it’s a *drug* and inherently evil, or it’s Bad) will arise and get stronger, and will certainly prevent the entire population from using it. It doesn’t need to be outlawed for this to happen. Even if heroin were legal, you wouldn’t get 90% of the population using it. Society will not collapse. It’s so debilitating that its usage is kept naturally low.

    That said, there are devices that can do things like this already. Transcranial Magnetic Stimulation has been shown to do everything from reducing depression to inducing euphoria. These machines can only get more effective. You’ll be able to stimulate any part of your brain in any way you wish, and there will be software you can download to induce different states. You’ll go to a YouTube of TMS and trip yourself out at will. We already see some of this with Light and Sound meditation machines (which proponents say have lots of beneficial effects).

    You’ll also be able to run programs to keep you alert, keep you focused, for peak physical performance, or to help you remember things. These are not far future devices. People will use them (and are already using them in their limited forms) to do all these things. It’s a continuum. We are merging with our machines and while some people will dive into them too deeply and get lost in what is essentially masturbation, others will use the tools to advance themselves in society, gaining some benefits other than a blissed out feeling.

    By the way, whoever said the happiness machine will not give you a deep intellectual satisfaction – well, of course it could. It’s just a feeling, after all, a particular combination of chemicals in your mind. The intellectually satisfied feeling can certainly be obtained without intellectual stimulation of any kind. It’s not ‘real’ to anyone but the person feeling it, but to him, it could be 100%. Just like deja-vu: it’s not real, it’s a *feeling*. It feels real to the person experiencing it, but that doesn’t mean you saw the future or that something really is repeating. Just like alien abductions: they can be induced with flashing lights and guided imagery. They are totally real to the person experiencing them. You cannot convince them it didn’t happen, yet we can induce it in a lab.

    People tend to think that feelings always come from something ‘real’ and, in particular, external to the mind. They can. But they don’t always. It’s a feedback loop – feelings color thoughts, thoughts trigger feelings, external input tweaks that cycle, and it goes round and round.

    This machine is coming. But I also agree that a philosophy doesn’t necessarily need to be able to address this ‘problem’.

    “CHEAP THRILLS FOREVER, is the best that can be said for such a machine – definitely NOT happiness.”

    Not so. It’s actually ‘inexpensive happiness’. ‘Cheap’ is a negative value judgement. And there’s no way you can separate mental happiness from ‘true’ happiness (which is another value judgement). Inside the brain, they are the same combination of chemicals. Your argument is ‘it’s not real because it wasn’t earned, or it doesn’t come from the person doing something the outside world values’. It can be very real. Let’s take this to an extreme. Someone operating the gas chambers in Hitler’s Germany, if he truly believed the Jews were evil, could be extremely happy, fulfilled, satisfied, etc. You may not like his work, but he believes 100% that he is doing great work for god and country. Is that not true happiness? For him, it easily could be.

    What about the people whose children die when they deny modern medicine to them? They really believe they are doing the right thing. They can even be happy when ‘god takes their baby to be with him’. These are people who killed their children. But, they are happy and satisfied and really believe they are doing the ‘right’ thing with all the attendant approval from their society.

  • M.

    Ok, perhaps this is the time for a neuroscientist to get involved. :) (BTW, love the blog.) There are several issues, both in how you define the problem (and by “you”, I mean both the author and the commenters), and in proposed solutions.

    “Happiness machine” is not an entirely theoretical possibility. A version of it could be made today with minimal effort: take an electrode, stick it into the nucleus accumbens (a small bundle of neurons near the juncture of the caudate and the anterior putamen), and activate the current. Weak, intermittent currents are currently being tested as last-ditch cures for intractable depression.

    If you hook a rat up to one of these and let it turn the current on by pressing a lever, the rat will just sit on the lever until it dies from lack of food and water.

    A good death, no? Well, not quite, but not for the reasons you mentioned above (for example, as you can see, there is no habituation – the rat never becomes “used to the pleasure”). Let’s go through some basic problems here, although a full examination would require a book. What follows will meander a bit, but trust me, it is all relevant to the points I wish to make.

    Happiness is not a scalar, unidimensional measure. For example, there is a sort of happiness/pleasure that arises when we choose the correct action from several options available to us. But let us analyze this on the level of neurobiology: we have several choices; we make our selection; we execute it; we get feedback that confirms our choice was the “correct” one; this produces pleasure.

    The machine mentioned above partially works through affecting this system: it gives us “this is the right thing to do” feedback when we use the machine. It produces the “pleasure of being right” (something that any atheist who has debated dense theists should be familiar with).

    Nicotine is so enormously addictive because it also affects this system – it stimulates a different area of the brain (the ventral tegmental area), which pumps dopamine into the nucleus accumbens. This “addicts” us to the *behaviors surrounding nicotine use* – a point lost on most people who try to quit smoking. This is why nicotine gum works so rarely: the person isn’t addicted to nicotine itself, but has been wired to believe that the acts of preparing and lighting a cigarette, and inhaling the smoke, are the right things to do. Every time a smoker does that, the nucleus accumbens sends the signal of “Happy! Happy! Joy! Joy! Inhaling smoke is the right thing to do! It produces the preferred result!” When the smoker switches to the gum, he reinforces behaviors related to the gum – but still has to overcome the now-established networks that scream at him that he is not doing what he should be doing, the right thing to do: lighting a cigarette.

    Let’s bring this back to the point.

    Our will, free or not, depends on our brain circuits. Those circuits learn based on the outcomes of previous decisions, and incorporate them into future decision-making. Thus, a nicotine addict *decides* to smoke: his decision, his will, is shaped by the drug-induced belief that smoking is the pleasurable thing to do. Quitting smoking requires going through a form of controlled psychosis, where one has to believe the medical information while one’s brain screams that “smoking is the correct choice”.

    (There’s an even starker example of this in anesthesiologists who get addicted to anesthesia drugs. If you talk to them, they will be *adamant* that they made the decision to use drugs freely, and will provide purely rational reasons. And it is true: their will is reshaped by the learned input, which forces them to believe that using drugs is the correct choice among available options.)

    With even stronger stimuli, it is impossible for this feedback mechanism to be overcome. A Happiness Machine would shape your will completely – as happens with the rat pushing the lever. No future inputs can overcome this programming, and make any other choice viable against the “obviously correct” one – losing yourself in the “happiness.”

    This is the basis for the essential argument against the “Happiness Machine”: you lose your ability to make free decisions (as free as they can be in this universe, which is a debate I don’t want to get into). If you hook yourself up, you will *not* be able to change your mind based on any new information you receive later on. That is it – it is an irrevocable choice to give up future decisions.

    You can ask your average heroin addict how well this approach worked for them.

    Other problems, I’ll cover only in outline, as they overlap with what the author already said:

    - since there are multiple forms of happiness, it is even theoretically impossible to make a machine to simulate them all concurrently. For example, there is the pleasure that arises from companionship; the pleasure that arises from long-term achievement; pure endorphin high after successful physical exhaustion; pleasure of being in love; etc. Even a theoretical machine that could “cycle” between stimulating all of these types of pleasure would not work – since stopping one kind of pleasure to move to the next one would cause horrendous pain and discomfort, even as the new pleasure is on the rise.

    - the problem of the reality of the experience. Again, as the more intelligent among heroin addicts can tell you, the pleasure is great – but there remains an awareness that the pleasure is artificially induced. It never satisfies our drive for achievement, which is why most addicts – who experience profound pleasure – are depressed much of the time. Even their pleasure is imperfect, tinged with the feeling of pointlessness. Now, it is possible to remove this, by removing some executive functions. But this boils down to lobotomizing ourselves in order not to think too much about the pleasure we are experiencing.

    I’ll stop here, since this has gotten way too long.

  • mikespeir

    Even if it were true (which is far from obvious), we only have to ask ourselves whether we think our children should be raped or get leukemia in order to better appreciate the goods in life – as opposed to, say, experiencing a broken heart or getting a cold – to see how poorly that theodicy deals with the sorts of extreme suffering to be found in the world.

    David:

    Whenever some theist makes the assertion – contained in my comment – that prompted your response, I simply ask whether, if medical science eliminated all sickness and disease today, our progeny a generation or two from now would miss it.

    But what I was driving at is the apparent similarity to what Ebon wrote: “If the Happiness Machine works as advertised – if it truly replaces all suffering with total contentment – then it will make you oblivious to your need for the necessities of life.” The answer, of course, is that here we’re talking about dealing with the universe as it is. We don’t see any intelligence behind it. The theist needs to account for why the universe was constructed in such a way as to make good obvious only by contrast with evil.

  • Julia

    The problem I see with the happiness machine would be its potential to nullify motivation. If a person is happy all the time, would s/he bother with work and/or volunteering (assuming we generally work to afford the things that make us happy, and we volunteer to reduce our dissatisfaction with the world)? If the machine stops people from contributing, then it is providing happiness for one at the expense of others, assuming things like infrastructure (as one example) improve our standard of living, which makes people happy. Society can afford to carry a few non-contributors, but only if the majority are busy at least maintaining the status quo and at best innovating and improving our world.

  • Chet

    It does sound a lot like narcotics – and, note, a substantial number of human beings use narcotics and other psychoactive pleasure-causing drugs in a safe and controlled manner. People don’t starve to death because they take one puff on a joint.

    If Michael Phelps can be driven to become the greatest swimmer since Aquaman, and yet still go home and fire up a spliff, there’s considerable reason to suspect that the hypothetical Pleasure Machine could also be used, safely and responsibly, by adults.

    I guess there are some who believe that using drugs for pleasure is, by definition, illegitimate – I see no reason to believe this is true. If it’s ok to take drugs for anxiety, it cannot be wrong to take them for boredom, too.

  • Julia

    @Chet

    I totally agree that drugs can and are used by contributing members of society. Plenty of people can have a puff or pill or a line on the weekend and not have it affect their productivity come Monday morning. I was trying to think of the happiness machine in a more extreme light, as presented in the original essay. But now I’m looking at it from another angle.

    What if we’re reacting to the happiness machine in the same way theists react to the ‘no rules’ permissiveness of atheism? As atheists, just because we CAN be hedonistic without fear of being smote or sent to hell doesn’t mean we ARE, generally speaking. Maybe the happiness machine would be the same? Just because we CAN hook up to the happiness machine 24/7 doesn’t mean we WILL. Or is the happiness machine defined as something we cannot refuse?

  • Arkhitekt

    I think it’s best to keep thought experiments like this free of practical considerations like who controls the machine, can it really do its job properly, how will we stay alive and so on. The question here is not whether such a machine is technically possible but whether we would prefer a life hooked up to the machine if it was possible. So, at the least, we should consider penn’s Happiness Machine 2.0, in which the experience really is that good, and there are no nasty strings attached.

    This approach suits me well, because at least in my case, my intuitions about whether I or anyone else should use the machine have nothing to do with its anticipated side effects or poor quality. I think that a life hooked up to the happiness machine is not a good life for a human to lead because in such a life one accomplishes nothing of value, helps nobody, forms no friendships, never has any sort of relationship with another human or other living thing, never exercises any skill or virtue in the pursuit of any worthwhile goal whatsoever. These sorts of things (alongside happiness) I think are to be valued for their own sake, as constituents in a good human life – the subjective feeling of happiness makes them and life as a whole better, but is no substitute for them.

    This is in line with what a few other posters have said – that they value other things independently of happiness. Your final objection to the machine comes closest to this – that in using the machine people would be neglecting their moral responsibilities as per UU. I agree with the idea that we would be neglecting various moral responsibilities, but I’m not sure if UU would have anything to say against using the machine in a world where everyone had free, unrestricted access to one (of the 2.0 variety). I would still be against unlimited use of the machines in such a scenario – in my opinion there are aspects of living that characteristically cause happiness but for which this happiness is just an indicator of their value, not what their value consists in. We would be exchanging these for counterfeits.

  • Chet

    What’s the question here, precisely? Whether I, individually, would opt to spend my life at the electrode-end of such a device – or whether others should be allowed to do so?

    Sure, I probably wouldn’t want to spend my life that way, though I might occasionally partake; I would opt for a device with a built-in cut-out to help me moderate my use. When we say that it is bad for a human to spend their life that way, as Arkhitekt does, are we saying they shouldn’t be allowed to?

    I’m not comfortable with that. I’m comfortable in my decision in regards to this hypothetical, but I have no comfort at all in asserting that my decision is good enough for all other human beings.

    How about persons, for instance, who are dying of painful, terminal diseases? A man whose body is riddled with painful cancer, for whom consciousness is a torment in which hope can only be found in the release of death? Surely that guy lives a life much, much worse than the prospect of starving to death at the end of a pleasure machine? I mean, in the here and now, his plight might justify euthanasia. But surely filling his days with pleasure as he slips away by natural causes is preferable (especially to atheists, who might object to suicide on the grounds that life is all we have, and should not be squandered).

    My life would not be improved by constant use of such a machine, for all the reasons Arkhitekt gives above. I’m just not comfortable saying that nobody’s life would be improved.

  • http://elliptica.blogspot.com Lynet

    When I saw this post, I thought I was going to have to provide rebuttal. Thanks to the wonder of Daylight Atheism commenters, however, I find that most of the complaints I wanted to make about you missing my point, etc, have already been made, along with a magnificent variety of other thoughtful perspectives to flesh out associated issues that I would not have thought of. I can thus confine myself to the thoughts I find most interesting :-)

    Notwithstanding the interesting issues that arise from practicalities and such, the point of the machine is to question whether it makes sense to say that happiness is the highest good; I argue otherwise. Thus, I see leanings toward my perspective in such comments as this one from David Ellis, echoed and/or commented on by others:

    True, but happiness, while one of the highest goods, is not the only one I value.

    I do not go so far as to declare a supreme value. I advocate a shared commitment to the various things we value, along with compassion and concern for others. My system of morality, although long-considered, is not so tidy as UU. I do not expect morality to be tidy. It is not written in the stars, but born in the minds of human beings. If it can be tidied, so be it, but tidiness should not be considered an imperative.

    Ultimately I find the somewhat untidy version to be more natural. UU is too ‘pat’, too removed, too much like a command on a stone tablet whose authorship we might question. Although it can be justified based on human motives, it seems detached from that justification, relying instead on its simplicity to keep it whole.

    Connecting morality directly to the ways in which people find value in life involves a more realistic confrontation of morality’s human origin, and, as a result, it can be a more powerful way to justify the notion.

  • http://www.patheos.com/blogs/daylightatheism Ebonmuse

    First off, I want to second Alex Weaver’s comment that in morality, as in law, extreme circumstances make for bad precedents. That said, I want to address this intuition pump proposed by penn:

    My Happiness Machine is given out for free and we have more than enough for the entire population of earth. It also supplies you with food, water, and exercise. Each individual user has sole control over it. It runs on Mr. Fusion, so that it never needs additional energy, and it never needs any repairs or maintenance. It is non-habit forming, and re-calibrates your pleasure awareness downward while you sleep, so that the subjective reward never diminishes.

    In a scenario like this, I agree that many of my original objections to the Happiness Machine would disappear. But take another look at just how drastically we would need to change the world to bring this scenario about. We’d have to suspend the laws of scarcity and of supply and demand, creating an economy that produces unlimited quantities of goods without requiring any person’s effort. We’d have to invent a completely clean, self-contained, and inexhaustible source of free energy. We’d need to be able to rewire human psychology at will so that we can produce unlimited pleasure without dependence or habituation. We’d even need to overcome the laws of thermodynamics to produce a machine that runs forever and never breaks down.

    Sure, maybe using the Happiness Machine is the best course of action in that world. But I think that world is so utterly and completely different from our own that we shouldn’t have any confidence that conclusions that are drawn there can be ported over to our world.

  • Pi Guy

    Kurt Vonnegut wrote a short story (found in the collection Welcome to the Monkey House) called “The Euphio Question”, in which a scientist played the amplified or modified signal from some distant celestial body that made everyone get all gooey and happy. In the end, the device was left on, and a scientist, a radio talk show host, a businessman, some of their families, a patrol of Boy Scouts, and a police officer ended up spending several days living as though in a crack house (or what I imagine that would look like). They finally came down when someone accidentally knocked the device off the table, and they awoke to a storm-ravaged house with no food or heat. Everyone looked ugly to everyone else.

    I think that this is very much what a Happiness Machine would be like…

  • http://www.patheos.com/blogs/daylightatheism Ebonmuse

    I want to make one thing clear, in case there was any confusion:

    Clearly, you have a specific idea of what constitutes a valuable human life. I may tend to agree with you, but that gives me no right to declare that those values should govern everyone else’s experience.

    I’d like to emphasize that absolutely nothing in this post proposed banning Happiness Machines. This post was an exploration of whether a follower of UU would choose to use such devices; for the reasons I’ve proposed, I wouldn’t make such a choice for myself. I’m too concerned about the potential harm I might do myself through neglect, the potential for dependence and loss of autonomy, and the possibility of having my conscience anesthetized.

    But that doesn’t mean I think no person could ever choose to use the machine in any circumstance. (I particularly like the idea of using it on terminally ill patients, as some commenters proposed.) I also don’t use recreational drugs, and for very similar reasons; although as I’ve written in the past, I strongly believe they should be legal. I believe that UU supports the idea of self-determination as a value of supreme importance, one that can only be overruled by the most compelling circumstances.

    I also want to call attention to M.’s excellent and somewhat frightening comment, which makes it clear just how addictive a Happiness Machine would be:

    This is the basis for the essential argument against the “Happiness Machine”: you lose your ability to make free decisions (as free as they can be in this universe, which is a debate I don’t want to get into). If you hook yourself up, you will *not* be able to change your mind based on any new information you receive later on. That is it – it is an irrevocable choice to give up future decisions.

    You can ask your average heroin addict how well this approach worked for them.

    There’s good scientific reason to believe that anything that affects the brain’s reward pathways in this way would have similar results. Addiction is not confined to chemical means; it’s a property of the brain itself and the way it reorganizes itself in response to stimuli. I strongly value my rationality, and don’t want to lose it as a consequence of engaging in some behavior that causes dependency.

    A good death, no? Well, not quite, but not for the reasons you mentioned above (for example, as you can see, there is no habituation – the rat never becomes “used to the pleasure”).

    Isn’t it possible, however, that the rat simply dies of dehydration before habituation has time to kick in?

  • http://verywide.net/ Moody834

    This has been quite an interesting read, comments included. I only have two words to add to what I’ve read so far. No: not “The Matrix”; already covered. But these two:

    INFINITE JEST

    …as in David Foster Wallace’s novel and what it’s about.

  • http://anexerciseinfutility.blogspot.com Tommykey

    Ebon, were you thinking of that scene from the Woody Allen movie ‘Sleeper’ where Diane Keaton has friends over at her apartment and they all take turns holding some kind of pleasure ball? I tried to find the clip on YouTube but was not successful.

  • Brad

    Perhaps you utilize a reduction-to-absurdity argument on the Happiness Machine, Ebonmuse? You say it “produces pure pleasure for the user in unlimited quantities,” but then go on to contradict this premise with the conclusion that this is impossible given the nature of the brain. If that’s the case, I tend to agree; there’s no salient reason to believe this machine is actually realizable in this world. However, I think you overlook one of the purposes of a thought experiment in your analysis of penn’s Happiness Machine 2.0.

    I’ll let Wikipedia speak to this: “Given the structure of the proposed experiment, it may or may not be possible to actually perform the experiment and, in the case that it is possible for the experiment to be performed, there may be no intention of any kind to actually perform the experiment in question. The common goal of a thought experiment is to explore the potential consequences of the principle in question.” In short, this Happiness Machine idea is supposed to illustrate our ingrained intuitions about happiness and its context. Of course it’s an Intuition Pump – what else could it be! But that does not make it any less legitimate, because the subject matter in question is our intuitions themselves – as opposed to, say, Searle’s Chinese Room, which is supposed to slap an intuition on an objective process as a substitute for lucid understanding.

    So what does this intuition pump tell us about our intuition? I believe it tells us something along the lines of what Polly said – we cannot always, or simply, fully believe, accept, or approve of “happiness” that doesn’t have any, say, “intuitively appropriate” context behind it. We all have positive affects associated with many ideas, memories, and feelings, but when asked if we have a positive affect for a positive affect devoid of an association we presently like, we aren’t so sure; we are ambivalent. I don’t think our evolutionary heritage prepared us well for such a sophisticated recursive exercise in affectivity.

    Getting back on the track of the discussion, I also notice that the conclusion in the original post is a personal one. To those who don’t value independence, autonomy, and self-determination as much as Ebonmuse or others, the Happiness Machine might be a viable option. I risk being risqué: slightly related to Lynet of Elliptica’s original Orgasmatron, this analysis of the Happiness Machine hypothetical reminds me quite a lot of BDSM-type power play. Often in these fantasies, people consent to lose determination of their situation as a method of inducing their own happiness. The idea seems relevant here.

    But still, how is happiness counted? Is it better to have 2n happy people than n equally happy people, all other things being equal? If so, is it better to have 2n happy people (n large) and one unhappy person than n happy and none unhappy? Shall we care more about the happiness of presently living people than about people who are not yet born, even when it’s still not known if they will be born? [-prase]

    This is the single biggest pragmatic problem I see in all moral philosophy in general. It’s hard enough to compare the happiness(es?) of a specific person across a spectrum of different outcomes, but how do you sum them up over a set of people? That seems to inevitably lose information in the process. Not to mention there is inherent uncertainty in trying to predict the happiness induced in a person by a specific outcome, as well as in trying to predict the long-term or complicated outcomes of our available choices. It’s a nightmare trying to use moral philosophy with a reasonable amount of certainty to effect consequences with desired qualities. A subjective calculus is far harder to work with than an objective one.

    To be more concrete, I don’t believe that an alien given UU’s axioms, and otherwise lacking standard human values, would arrive at the conclusion that people should investigate nature, value science and knowledge, think critically, etc. I would sooner suspect it would come up with some version of the happiness machine. And it seems to me that your derivation of the desirability of knowledge was possible only because you knew beforehand what you wanted to derive, thus rationalising the argument. That is not, in my opinion, characteristic only of UU; all philosophies which build on a single value have this fault. Happiness is not the only independent value I have. [-prase]

    I think there’s a thorn in judging from affectivity hidden here. How does one come to a conclusion when there’s an ambivalence rooted in clashing values, a dissonance stemming from conflicting valences? That’s another problem I see in any philosophy that claims to tell you what action you should take to get what you truly want. I second your second point, too. In fact, I think theoretical “justifications” of “objective” moral systems are ultimately only attempts to rationalize subjective feelings that form the bedrock of the moral systems. I think Nietzsche once wrote “morality is the sign-language of the emotions.”

    In some ways this is similar to the Matrix, and as cool as that movie is, I never really got wanting to leave it.[-Mathew Wilder]

    Neither did I, with the exception of the CONTROL issue.[-Polly]

    I had trouble with the IGNORANCE issue as well. Of course, if the machines tell everyone about humanity’s position, there might be revolt – so they don’t.

    You know, this “Happiness Machine” scenario resembles a short story I wrote in college for the campus magazine – in it, a man is diagnosed as “overly cynical” (a mental disease in this fictional society) and is given an implant that stimulates dopamine production whenever “negative” thoughts (read: any thought that something is absurd or unpleasurable) are detected.

    For a while, our nameless protagonist does well with the new implant, but he becomes tired of the constant pleasure and begins mutilating himself to feel discomfort again – his sister tries to stop him, and he ends up killing her while attempting to escape. After this, he experiences a brief moment of terrible guilt and pain, followed by pure ecstasy due to the implant’s effects. The end result: he learns to associate death and suffering with pleasure and goes on a shooting rampage to get that high again – during which the police gun him down.

    I was inspired to write that as a commentary on society’s over-medication, after a friend of mine began to experience thoughts of suicide from a medication that was supposed to relieve her depression symptoms – a case in which the “cure” was far more damaging than the disease. That is what I think a technology like the “Happiness Machine” can easily become, especially if it falls into the hands of a deranged society that sees “unhappiness” as a mental illness… [-Christopher]

    That’s a genius allegory. Quite the tragic ending, and all just to avoid internal negativity. I think many of us can in small part relate to that last sentence; so many people tell each of us, in some way or other, that depression or melancholy is something to avoid and run away from at all costs. I can’t help but wonder, though, if you’re really the same Christopher who said most of his acquaintances were puppets and that he had few “real” friends.

    …I’m too concerned about the potential harm I might do myself through neglect, the potential for dependence and loss of autonomy, and the possibility of having my conscience anesthetized.

    But that doesn’t mean I think no person could ever choose to use the machine in any circumstance. (I particularly like the idea of using it on terminally ill patients, as some commenters proposed.) I also don’t use recreational drugs, and for very similar reasons; although as I’ve written in the past, I strongly believe they should be legal. I believe that UU supports the idea of self-determination as a value of supreme importance, one that can only be overruled by the most compelling circumstances. [-Ebonmuse]

    How concerned are you about the above potentials for another human being, hypothetically? I also think using the HM on terminally ill patients is still dubious. First off, you’d need the patient’s consent. As for self-determination: I don’t think humans fully determine themselves, nor are they without a part in determining others. We all have some effect on each other; we are fundamentally connected to varying degrees.

    This thread has landed itself on my hard drive for sure.

  • Christopher

    That’s a genius allegory. Quite the tragic ending, and all just to avoid internal negativity. I think many of us can in small part relate to that last sentence; so many people tell each of us, in some way or other, that depression or melancholy is something to avoid and run away from at all costs.

    True – even if that cost is life-long dependence on a substance that can possibly make you even more depressed than you were before. It baffles me that people still continue to take such things even knowing that could happen…

    I can’t help but wonder, though, if you’re really the same Christopher who said most of his acquaintances were puppets and that he had few “real” friends.

    It is – well, it was a younger, more naive version of me, one quick to make friends, that had the experience I related to you earlier and wrote that story (if you want, I could see if I still have a copy and post it). I’ve grown much more misanthropic since then, and now see much of what we tend to call “humanity” as something to be overcome rather than celebrated. I wasn’t always who I am today…

  • Brad

    (if you want, I could see if I still have a copy and post it). I’ve grown much more misanthropic since then and now see much of what we tend to call “humanity” as something to be overcome rather than celebrated. I wasn’t always who I am today… [-Christopher]

    Suit yourself, at least you have an internet connection. I, for one, am intrigued by the story you mentioned and would like to read it.

  • http://www.blacksunjournal.com BlackSun

    I’d like to emphasize that absolutely nothing in this post proposed banning Happiness Machines. This post was an exploration of whether a follower of UU would choose to use such devices; for the reasons I’ve proposed, I wouldn’t make such a choice for myself. I’m too concerned about the potential harm I might do myself through neglect, the potential for dependence and loss of autonomy, and the possibility of having my conscience anesthetized.

    Ebonmuse, I agree you did not propose a ban. However, nearly all laws are foreshadowed by philosophical positions which argue for their rightness. For example, if the Happiness Machine really did pose the dangers you claim, it very possibly should be outlawed, as it would turn everyone into passive pleasure-seeking zombies. At the very least, if there were not enough people left to work and grow the food, humanity would perish. So once you accept that such devices pose an existential threat, legislation and a neo-drug-war would not be far behind.

    I think we can propose that humans are smart enough to deal with the threat electronic hedonism poses to our psychology and motivational system. The tone of your post presupposes (without any acknowledgement that you might be wrong) that we are powerless to resist the siren song of direct brain stimulation.

    This is not an abstract issue. Beliefs create desires, intentionality, and action. Despite your earlier support of drug legalization, you are using the language of “reefer madness” and prohibition. What you are arguing for is at the very least regulation, and possibly a ban. Direct brain interface technologies will become increasingly commonplace in the coming decades. I don’t want the government inside my head, period. I also don’t want people’s puritanical philosophical prejudices to get in the way of human progress. It’s already an uphill battle against religion and “playing God” in our brains. Please don’t lend your otherwise rational voice to the Luddite chorus.

  • Lise McDan

    The fact is that it is mainly up to us whether we are happy or not. Every daily activity can provide us with a pure state of happiness. However, when it comes to national happiness, it becomes the responsibility of the government to develop GNH (Gross National Happiness). According to Med Yones, a happiness and life coach, “The role of government should shift from managing economic growth to socioeconomic development”. For further details on socioeconomic development, see the link below.

    http://www.iim-edu.org/grossnationalhappiness/index.htm

  • http://www.asktheatheists.com bitbutter

    @ebonmuse

    Time travel, consciousness after death, aliens visiting earth, etc: The worlds of thought experiments are frequently very different from ours (as we know it).

    You wrote: “I intend to write some posts further detailing universal utilitarianism and how it can respond to difficult ethical dilemmas – not the practical dilemmas that we encounter in daily life, but thought experiments specifically dreamed up to stretch moral philosophies to the breaking point.[..]Intuitively, there’s something deeply repellent about this scenario, but what is it, and can UU justify this intuition despite its promotion of happiness as the highest good?”

    I think the comments showed that, for a suitably specified machine, UU can’t justify this intuition. Given your introduction to the OP, would you agree that it looks like the goalposts are being moved when you reply:

    “Sure, maybe using the Happiness Machine is the best course of action in that world. But I think that world is so utterly and completely different from our own that we shouldn’t have any confidence that conclusions that are drawn there can be ported over to our world.”

  • Chris

    I think it’s very telling that Ebonmuse can’t argue about the Happiness Machine without introducing the idea that it must have hidden flaws and downsides. Isn’t this a contradiction of the premises of the HM scenario? The drive to look for the “catch” says more about the human beings having the discussion than about whether flawless HMs can exist – whether or not they can exist, we may not be able to believe in them if we can’t even discuss the hypothetical of their existence without undermining it with realism objections. (Those objections may or may not be well-founded; that’s not my point. My point is that this is not the discussion it was presented as being. Obviously a device that causes present happiness and future misery through neglect of practical reality is far less of a free win than a true HM, except for people with incurable terminal diseases – and even then, their use of the machine has to be paid for by others building it, supplying it with electricity, and so on.)

    P.S. When you started the paragraph about dependency I was almost certain you were setting up to analogize religion as (flawed) HM. Feeling good about your prospects for the afterlife encourages lack of attention to the realities of *this* life and dependence on your priest for that next hit of righteousness. Some statistics show that religious people are happier, but if it’s this sort of happiness, is that really good for them or anyone? And the arguments about a life of misery and servitude, insulated from the real condition of others around you, have historical precedents too numerous to list.

  • Eric

    Eliezer covered this somewhat… the basic gist is that happiness isn’t the only value humans have (whether their own or others’).

  • John Gathercole

    All the arguments against the happiness machine seem to me like:

    1. Assume there is a machine that would make you happy.
    2. However, the machine wouldn’t really make you happy.
    3. Therefore I wouldn’t use the machine

    Or sometimes:

    1. Assume there is a machine that would make you happy.
    2. However, happiness is not the only value.
    3. For example, what about living a moral life? That’s what makes a lot of people happy.
    4. oops.

  • http://www.amareway.org/holisticliving/08/stephen-hawking-on-happiness-remember-to-look-up-at-the-stars/ frank

    We are trying to summarize the atheist view on happiness with an Atheist happiness formula. What would you suggest we use? Thanks!

    frank

