To Be As Gods

I have to admit, I cringe when I read quotes like this:

Max may be a long way from his old home, but he plans on going a lot further than America. Extropianism is a “rational transhumanism”, he explains. There may not be any supernatural force in the universe, but pretty soon, suggests More, once we get our brain implants and robot bodies working, we will be as gods.

The linked article is about Max More, a philosopher who advocates transhumanism – the idea that we can use technology to transcend the present limits of human biology. Like most transhumanists, More advocates a potpourri of wildly optimistic ideas: freeze ourselves through cryogenics, make our bodies immortal, digitize and upload our minds to live in virtual worlds or robot bodies.

As far as I’m concerned, most of these speculations so far outstrip the limits of what is currently possible that there’s little point even thinking about them. In the very distant future, perhaps, these will be issues to seriously consider. For now, I think we should be concentrating on the many more pressing problems that can be alleviated by current technology. Once people are no longer dying from malnutrition or malaria, maybe then we can start considering how to make them immortal. In the meantime, most of this is just unconstrained fantasizing that distracts us from the things that are truly important.

However, it was something else about this article that bothered me more – the throwaway line about how “we will be as gods”. Nothing could appeal to me less. Frankly, I don’t want to be like the gods.

Consult just about any piece of mythology you wish, and you’ll find that gods are generally not very nice creatures. They’re jealous, sadistic, manipulative, capricious, petty, possessing overdeveloped egos and hair-trigger tempers, and hateful toward those who are different. They’re swift to anger, slow to forgive, and perpetually obsessed with whether people are groveling enough or paying them sufficient tribute. When it comes to dealing with those who disobey, violence is typically their first, last, and only resort. In short, they exemplify all the worst traits of the humans that created them, and few if any of our best traits. Why on earth would we want to be like them?

We are human beings. No matter how much knowledge we gain, no matter how much power we gain, we will always be human beings. We should not aspire to be gods, or anything else that we are not. We should aspire, instead, to be the best human beings we possibly can be – to cultivate what is best in our nature and encourage it to flourish. For all the evil that we have done, human beings are also capable of astonishing acts of mercy and benevolence. These are traits that are conspicuously absent in most of the stories of gods we read. We do not need to be forever aping our old mythologies; we have the ability to transcend their narrow perspective, and in many ways, we already have.

  • http://anexerciseinfutility.blogspot.com Tommykey

    It sounds like we already are like them, so being like gods would not be much of a change for us.

    What Max is advocating also sounds like something that would only be available to the wealthy elite that could afford it. Talk about widening the gap between the haves and have-nots!

  • Petrucio

    these speculations so far outstrip the limits of what is currently possible that there’s little point even thinking about them. In the very distant future, perhaps, these will be issues to seriously consider

    I don’t think the Technological Singularity is an entirely implausible event in the not-so-distant future, so I do think these speculations are an interesting topic of discussion, since they are possible consequences of a TS – at the very least, the nature of the TS itself should be an interesting topic.

    But I do agree with all of the “be as gods” part.

  • http://www.blacksunjournal.com BlackSun

    Ebonmuse, if you haven’t read “The Age of Spiritual Machines,” or “The Singularity is Near,” then you don’t fully understand the implications. They really are both worth your time. Though both cover much of the same ground, the later book lacks some of the power and imagination of the first, while the earlier is somewhat outdated technologically.

    Briefly, Kurzweil’s premise is that we are on an exponential growth curve, not a linear one. So expecting future advances to take as long as past ones is not realistic.

    Check Kurzweil’s record of predictions since 1990. You’ll be amazed.

    Also, our global problems will only be solved by advanced genetics, nanotechnology and robotics. If malaria is cured, it will be through sequencing its DNA, after all. Old ways of doing things just won’t be able to cope with feeding and caring for the needs of 9 billion-plus people in a warming world. We will need every development in the pipeline – and then some.

    Being as “gods” is simply a metaphor for the huge increase in information and connectedness we will experience in the next 20-30 years. I know transhumanist claims sound somewhat hubristic, but please read the books – it will all make sense.

  • http://www.eunomiac.com Eunomiac

    I anticipate you’re about to receive a bit of flak from those with a soft spot in their hearts for transhumanism. I’m one of those people, actually — I’ve been tentatively persuaded by Singularity theories that the idea is not necessarily as far into our future as you suggest. In fact, it seems almost inevitable once we discover a way to create intelligences greater than our own (either via augmentation, or artificial intelligence).

    I suggest checking out this scene from the movie Waking Life, if you haven’t seen it already. Ditto for everyone else — it was what introduced me to the concept, and it’s a fairly mindblowing piece of speculative futurism: http://www.youtube.com/watch?v=saxX-Z6w3p4

  • Jim Baerg

    Vernor Vinge is one of the major proponents of the Singularity idea; see much of his science fiction & this essay (http://mindstalk.net/vinge/vinge-sing.html). But he has also considered non-Singular futures.
    E.g., this essay (http://www-rohan.sdsu.edu/faculty/vinge/longnow/) & the novel _A Deepness in the Sky_

  • Purple

    Applause! Well-said, I say.

  • chronomitch

    Ebonmuse, I think you took the “we will be as gods” line a little too literally. I think Max is trying to say that future technological and biological innovations will greatly expand the abilities of humans and may even redefine what it means to be human. While I doubt such things will happen in my lifetime, they are no longer solely dreams of science fiction writers.

  • SteveC

    This sounds like it could be the seed of an interesting sci-fi novel. In the future, a Bill Gates type character, as old age approaches, slowly replaces bits and pieces of his aging brain with new, electronic equivalents. Likewise with his ailing body: his knees, arthritic fingers, cancerous colon — all replaced with robotic equivalents. All the while, during this transformation, his mind is preserved; he remains “himself”, even though the physical means by which “himself” is maintained is slowly and completely replaced by machine. A triumph of science. But he finds himself lonely, alienated from the rest of humanity, unloved, viewed as “a robot” by others…

    From there, the novel elaborates…

  • Kevin Morgan

    One thing I think is important to keep in mind is that we still don’t understand consciousness. Most will agree that consciousness is what the brain does, but that doesn’t mean we will be able to upload or transfer our “self” into a computer brain. What would be required to successfully maintain our self inside an electronic brain? Synthetic neurons? Are the various neuronal connections in our own brains, that is, the patterns the neurons form, coupled with our genes, reproducible in a machine? What about memories? I think that unless we learn the answers to these questions we won’t be able to transfer into a robot body.

    That said, it would be pretty cool to be the ghost in the machine (or shell for you manga fans). Then there’s the reality that you wouldn’t really be transferring consciousness, just copying it. So your original self would still have to die. So “you” would never know extended life, your “copy” would. Of course your copy would think it was the original since it is a copy of the original, but that’s little solace to the “me” that’s dying!

  • Dutch

    Ebon, you said,

    “…human beings are also capable of astonishing acts of mercy and benevolence.”
    This is especially true when disaster strikes.

    Dutch

  • http://artofdbellis.blogspot.com/ David Ellis

    Ebon, I have to agree with some of the others here that you need to read more about the technological singularity. It seems to me that your comments about their ideas are more knee-jerk reaction than informed criticism.

  • OMGF

    SteveC,

    This sounds like it could be the seed of an interesting sci-fi novel.

    There’s already an anime based on this idea called “Ghost in the Shell.” But it seems that Kevin Morgan already mentioned it. Anyway, it’s not just a manga; there are also 2 movies and 2 TV series (all of them are good stories, BTW).

  • http://www.blacksunjournal.com BlackSun

    Kevin Morgan, what you are saying would be true for “uploading” of consciousness. Your mindfile would disintegrate with your physical brain, and your synthetic copy would live on thinking it was you, with all of your memories.

    But there is another way to approach this, assuming that direct brain interface and augmentation can be worked out. In that case, your brain would undergo various forms of technological enhancement or repair, including the installation of a direct interface (data port) to the outside world. This would indeed require synthetic neurons, memory units, etc. As more of your cognition was taken over by the replacement parts, gradually your consciousness would begin to inhabit more of the synthetic substrate, until everything was replaced and there was nothing left of the original brain. Through all this, you would (theoretically) continue to have the subjective experience of being the same person, albeit with enhanced mental capabilities.

    Of course if brain cell repair could be perfected, the biological brain’s life could be extended dramatically and it might not be necessary to completely replace it.

    But as machines continue their doubling in capability every 18 months, it’s only a matter of time before one passes the Turing test (becomes functionally equivalent to a human brain). Estimates of this have been put in the early 2020s. At that point with continued doubling, machine intelligence will soon inevitably far outstrip human intelligence. So if humans want to keep up or even be able to participate in the new “man-machine hybrid” civilization, they will have no choice but to augment their biological brain capabilities. Or…so goes the argument.

    For all you people who are scared spitless even contemplating this, I didn’t come up with this scenario. But I do think it’s highly plausible and I think most people will eventually participate. Kurzweil has a set of appendices in “The Singularity is Near” that anticipates and refutes pretty much every argument against the scenario.

    He covers them all, from the pro-death “death is what gives meaning to life” crowd, to the “timeline is hopelessly optimistic” to “computers will never be able to think” or “computers will never match human creativity” to “no one would want to become a robot” to “we can’t even get our software right now” to John Searle’s “Chinese room” argument, to the idea that consciousness relies on more than the physical brain, to “the brain is a quantum computer” etc. etc. I’ll say it again, the book is really worth the read.
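
A rough arithmetic sketch of the doubling claim in the comment above: the 18-month doubling period is the commenter’s figure, while the starting baseline of 1.0, the Python framing, and the chosen time horizons are purely illustrative assumptions.

    # Illustrative only: compound growth under a fixed doubling period.
    # The 18-month figure is taken from the comment above; everything else is assumed.

    DOUBLING_PERIOD_YEARS = 1.5  # "doubling in capability every 18 months"

    def capability_multiplier(years: float) -> float:
        """How many times capability has multiplied after `years` of steady doubling."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for years in (5, 10, 15, 20, 30):
        print(f"after {years:2d} years: ~{capability_multiplier(years):,.0f}x")

    # after  5 years: ~10x
    # after 10 years: ~102x
    # after 15 years: ~1,024x
    # after 20 years: ~10,321x
    # after 30 years: ~1,048,576x

The only point the numbers make is that, on an exponential curve, the last few doublings dwarf everything that came before, which is why the comment argues against reading future progress off past timescales.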

  • Goyo

    I also agree, Ebon, that you should consider this further. As you know, regenerative medicine is occurring now, and it is obvious that we are evolving to incorporate machinery in our bodies: knee replacements, artificial limbs, etc.
    I believe that immortality is within our grasp as a species.

  • jack

    Get a grip, folks…

    It looks like I’m one of a minority who will be taking Ebon’s side on this issue. I love good, entertaining and mind-expanding sci-fi, but I don’t confuse it with reality. I have some formal training in neuroscience, so I’m not completely ignorant on the subject. The “transhumanism” idea is nonsense. It has a snowball’s chance in hell of ever becoming reality. Kurzweil is brilliant, but even brilliant people can be wrong.

    Our minds, our memories, our consciousness, our selves are all inextricably bound to the living tissue we call the brain. Its complexity is truly staggering. How our consciousness emerges from trillions of synapses is a mystery. I hope and expect that we will someday solve that mystery, but it probably won’t happen in any of our lifetimes. If and when it does happen, it will be even more obvious that transhumanism is not in the cards.

  • Andreas

    I would also recommend Kurzweil’s The Singularity is Near. It’s a fantastic read even if you’re skeptical of the timeline he presents.

  • Gary

    I have to be very skeptical of transhumanists’ claims about what the future will be like, so I agree with Ebon and Jack. While various implants and artificial body parts are becoming more advanced and more common, these do not grant us anything like immortality. They should continue to be researched and developed, but I doubt that such prostheses will let us plug ourselves directly into a virtual reality-type universe any time soon, as the transhumanists seem to hope. Even if computers are built to invent things for us, we have a long way to go toward understanding how the brain works.

    We would do well to remember what past generations thought our time might be like, before making predictions. There’s a blog for this very purpose:

    http://www.paleofuture.com/

  • http://www.blacksunjournal.com BlackSun

    jack,

    The “transhumanism” idea is nonsense. It has a snowball’s chance in hell of ever becoming reality.

    All of it? There’s nothing whatsoever possible in terms of life-extension, mind-expansion? That statement is overbroad, categorical, and unscientific.

    Work toward solving the mystery of consciousness is already underway with fMRI and the modeling of the cortical column. And work continues on the computer side with AI and robotics. “Probably won’t happen in our lifetimes” sounds like you’re dangerously close to admitting it might eventually happen.

    As far as when? Why don’t you admit you’re just wild-ass-guessing like the rest of us? Or you could look at the trendlines like Kurzweil has and make some educated guesses. And he’s already dealt with your arguments and many others in the book. You should read it before commenting.

    Predicting something will never happen is a sucker bet. If you really want to, take your crack at longbets.org. Even if the timeline is wrong, we’re still in for some serious redefinition of what it means to be human in the not-too-distant future.

  • Jim Coufal

    I was at a party recently and heard my first ever “American joke.” It was told by a naturalized Polish American. This thread reminded me of the joke.

    Imagine it is the year 2050 and medical technology is truly advanced. If one is not satisfied with his/her brain, they go to a shop, pick out a new one and have it installed. So a fellow goes to the shop for a new brain. On the bottom shelf is a “Czech brain” for $1000. Nearby is a “Polish brain” for $1100. He also sees an “Italian brain” one shelf higher, for $1500. A little further up is an “English brain” for $2500. And so it goes until, all alone on a tiny shelf at the highest point of the room, there is an “American brain” for $25,000.

    The customer asks, “Why the hell is the American brain so expensive?”

    The shop keeper replies, “Never been used.”

    On a more serious note (I think), let’s not forget how far we have surpassed what, at the time of their writing, seemed to be the outlandish ideas of George Orwell and Aldous Huxley.

    Jim

  • http://www.eunomiac.com Eunomiac

    Imagine the following scenario:

    You watch your friend plug his brain into a computer, push a button, and then his body goes limp. His heart stops beating. He dies right in front of you. Then, the computer avatar of your friend appears, claiming that it worked perfectly and he’s actually inside the computer.

    Would you want to be next?

    I know I’d be plagued with existential doubts: Maybe my friend really did die, and this is only a copy sufficiently advanced to pass the Turing test. I’d worry that my existence would end when I pushed that button, and some consciousness-free automaton would take my place.

    Thus, I can’t imagine the actual act of uploading one’s brain into a computer will ever occur en masse without a genuine change in how we perceive ourselves introspectively, likely as a result of Sam Harris-style meditative/contemplative research. No matter how good science gets at unraveling consciousness from the outside in, there’ll always be that experiential divide in which doubt could flourish.

    As for Jack’s assumption that the Singularity Is Not Near, I think you have to ignore biological augmentation to say that. Think of it this way: Someone invents a pill that increases his ability to make such a pill. Hello runaway recursive snowball effect ;) I’m surprised more transhumanist speculation doesn’t run in this direction; I too am more skeptical about the computer/AI side of things.

  • http://mindstalk.net Damien R. S.

    jack: “If and when it does happen, it will be even more obvious that transhumanism is not in the cards.” So, we don’t know how consciousness works, and when we do you’re confident it’ll support your position?

    eunomiac: “Thus, I can’t imagine the actual act of uploading one’s brain into a computer will ever occur en masse”
    Imagine the following scenario: you’re on your deathbed, dying beyond the abilities of current medicine. You know your friend uploaded, dying in his body but producing something that seemed a lot like him, to you. Do you cling to your last few months or weeks in your body, or gamble on uploading?

    Or you’ve been diagnosed with Alzheimer’s or Parkinson’s. Wait for years of mental degeneration or leap to an ageless computer state?

    ebonmuse: “Once people are no longer dying from malnutrition or malaria, maybe then we can start considering how to make them immortal.”

    This sounds like saying we shouldn’t work on cancer or heart disease or stem cells while people are starving.

  • Christopher

    I think that you have your priorities backwards, Ebonmuse: we should focus on improving ourselves and our own, not solving all the world’s problems. There will always be people dying somewhere of some plague or starvation and there’s not much we can (or should, for that matter) do about it, so why not focus our energies on enhancing our own capabilities to fulfill our own purposes instead of trying to save those who can’t save themselves?

    When you get down to the core of human nature, it’s selfishness. So we might as well channel those selfish tendencies into improving our ability to meet our desires instead of squandering them in ridiculous gestures of altruism.

  • http://www.kellygorski.com Kelly Gorski

    gods are generally not very nice creatures.

    Ask a person how he/she thinks God judges, and you’ll have a one-way ticket to that person’s psyche.

  • bbk

    Ebon, there’s a good mix of comments here about Singularity, but I would like to add 2 more points that haven’t been made very clearly.

    Firstly, the Singularity is in general based on themes of technological convergence and Moore’s Law – nothing really out of the ordinary, just extrapolated ad absurdum. The idea is that humans will be able to solve ever more complex problems by mixing different technologies in completely unexpected ways. The other idea is that a certain technological threshold is required for certain feats, and things that seem impossible now will become entirely real before we know it. Yes, what the outcome will be is a matter of science fiction, but the premise is entirely rational.

    Secondly, there is a very sound premise for looking into the implications of a technological singularity right now. Society at large defines humanity in some very Luddite terms. The opposition to stem cell research and cloning because it is “playing god” is the first thing that comes to mind. Moreover, the Singularity is actually a rational critique of the mysticism about human consciousness that has been pervasive through psychology and even neuroscience. There are all manner of mystic arguments, from the idea of a soul to the Chinese Room experiment, that are based in mysticism and unproven pseudo-science. Take the quantum-mechanical brain, for starters.

    I’m sure that some people get carried away and start thinking of technology as the answer to the age-old quest for the fountain of youth and other such mystical nonsense. But I think you should look into this area of science fiction, if you’re interested in science fiction at all. I recommend Charles Stross in addition to what the other comments mention.

  • http://michaelgr.com/ Michael G.R.

    I *highly* suggest that you watch this M.I.T. video:

    http://mitworld.mit.edu/video/327/

    See if that piques your curiosity.

  • http://michaelgr.com/ Michael G.R.

    This one by biogerontologist Aubrey de Grey is probably also a good idea:

    http://www.ted.com/index.php/talks/view/id/39

  • http://wildphilosophy.blogspot.com Mathew Wilder

    While I would applaud the day we achieve immortality, I do not see it as an overriding concern. I am skeptical of utopian ideas, transhumanist no less than any other.

    The “we will be as gods” line calls to mind a favorite quotation from Camus, regarding how we must always strive “to learn to live and to die, and, in order to be a man, to refuse to be a god.” Read Camus’ The Rebel for an insightful look at how utopian ideologies lead all too easily to horrifying conclusions.

  • http://mindstalk.net Damien R. S.

    But why should I want to be a “man” instead of being a “god”, if I can?

    And what’s more of a utopian ideology, Ebonmuse’s universal utilitarianism or a hope that we’ll live to see technological fixes for aging and degeneration?

  • Petrucio

    Not to mention that if the Singularity actually happens, “people dying from malnutrition or malaria” would likely be an easy problem to fix indeed.

    And to say that only the rich few would be able to benefit is a failure to grasp the concept behind the Singularity. I think in such a scenario, even capitalism as we know it is unlikely to exist.

  • http://www.eunomiac.com Eunomiac

    Damien R. S.: “Imagine the following scenario: you’re on your deathbed …”

    Ah, good point. That obvious point somehow escaped me — you’re right, of course.

    Michael G.R.: “I *highly* suggest that you watch this M.I.T. video”

    AWESOME video! Thanks for posting it. And yeah, I second this recommendation — it’s much more persuasive than the Waking Life video I posted, if a little longer. I hadn’t realized Kurzweil had been making valid predictions based on his Singularity theory for decades.

    Petrucio: “And to say that only the rich few would be able to benefit is a failure to grasp the concept behind the Singularity.”

    Precisely. The entire point of the Singularity is that it increases our capabilities so immensely that we can’t make any predictions past it. Holding such an incomprehensible future to the meager limits of our current spot on the curve is like cave men doubting the possibility of flight because they lack feathers. Only, instead of reality taking millennia to prove us wrong, it’ll take decades.

    If anything, this hypothesis underscores the need for a rallying call to secularists and freethinkers — if our rate of development is accelerating, so too is our development of destructive technologies that might find their way into the hands of Islamic death cultists. That would suck (though we’d at least have a bittersweet ‘I told you so’ moment to look forward to ;-) ).

  • hb531

    The singularity is expected to be so profound in how it affects humanity that it could be mistaken for a ‘second coming’ or some other biblical prophecy (despite it being man-made). The impact on our culture and humanity, in my opinion, would be akin to a visit by space aliens. It would be prudent to put in place a framework for handling this event, especially since religious reactions to it would invariably contribute to global hysteria.

  • Petrucio

    @hb531:

    Sure, since “Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

    But I don’t think planning for the event would be very fruitful right now. You can be sure, though, that the religious would be trying everything they can to stop the Singularity from happening at first. Can’t really imagine their reaction after it; I think it would be all over the place.

  • mackrelmint

    Although this has been an interesting thread, I think that many of you have missed the point of Ebon’s post (see its title) and perhaps forget one of the many accusations religious people make against atheists. I know from experience that secular humanism in particular is misrepresented in many Christian circles as being all about worshiping humans in place of God. It’s phrases like the one Ebon pointed out in this post that ring the warning bells for these people, who then say to themselves, “I knew it! We were right. They DO want to be gods.”

    I’m in complete agreement and cringing along with Ebon too.

  • http://artofdbellis.blogspot.com/ David Ellis

    In addition to nonfiction works on the singularity already mentioned I’d like to recommend three post-singularity novels dealing with a future in which uploading has become a reality:

    PERMUTATION CITY, DIASPORA and SCHILD’S LADDER by Greg Egan

    Whether uploading will ever become a reality is, of course, an open question, and will remain so unless it does become a reality. Personally, I’m inclined to think it’ll happen—probably not as fast as the more enthusiastic singularitarians would like to think, but sooner than the naysayers would have us believe.

    Regardless, it’s a fascinating topic for exploration and raises many interesting questions concerning the nature of mind and identity—the sorts of philosophical issues science fiction is better suited to examining than any other kind of fiction.

  • bbk

    mack, I think this is one of those places where truth is stranger than fiction. I would say that “we will be as gods” is a rather apt explanation of Singularity to people who are accustomed to that kind of magical thinking. Imagine living forever, being able to mitigate risk to the perpetuation of your consciousness to almost 0. Imagine traveling anywhere in the world at the speed of light. Being able to communicate with every sentient being in existence almost immediately and with just a thought – and at the same time having enough mental capacity to process all of those communications from everyone else, too. Then imagine having a manufacturing capacity that is so limitless that market economies as we know them no longer exist.

    Sounds pretty god-like to me. I don’t think the implication of this science-fiction scenario is that either technology or humanity is to be worshiped as gods are, which is the typical accusation thrown around by theists. The key word is “as”, which makes the statement a simile and implies nothing further about the divinity of humans in the future.

  • bbk

    Theists historically offer the greatest resistance to science out of any group, rallying against everything from Galileo to Darwin to Dawkins while themselves contributing so much to the discourse: thanks to religion, we have the Amish. I really don’t care what they think about technological change. They’re anti-technology for the most part, unless it’s something really good at killing lots of people and then they are all about funding it with higher taxes. But birth control, stem cell research? That’s all playing God to them! I don’t know… it’s just… who cares? Theists have a stupid world view.

  • http://gretachristina.typepad.com/ Greta Christina

    “I think that you have your priorities backwards, Ebonmuse: we should focus on improving ourselves and our own, not solving all the world’s problems. There will always be people dying somewhere of some plague or starvation and there’s not much we can (or should, for that matter) do about it, so why not focus our energies on enhancing our own capabilities to fulfill our own purposes instead of trying to save those who can’t save themselves?

    “When you get down to the core of human nature, it’s selfishness. So we might as well channel those selfish tendencies into improving our ability to meet our desires instead of squandering them in ridiculous gestures of altruism.”

    I’m not even going to bother with the obvious moral arguments against this. I’m simply going to point out that it’s factually inaccurate.

    There is, in fact, increasing evidence that altruism is an essential part of human nature. Literally. It seems to be hard-wired into us genetically. As it is in other social species.

    As is selfishness, of course. Both qualities exist, in pretty much everyone.

    I never cease to be amazed by people who insist that everyone else really experiences life exactly the way they do, if only they’d be honest and admit it. And in particular, I never cease to be amazed by selfish people who insist that everyone else is fundamentally selfish, too, and just won’t admit it.

    If you don’t personally experience altruism, that doesn’t mean that nobody else experiences it, either. It means that other people have an experience that you seem to be missing.

    Plus I’d like to point out that, in all likelihood, you, and I, and everyone else reading this blog, are alive today because of people who cared about the great mass of humanity they’d never met. The existence of vaccines for smallpox and polio; the existence of germ theory; of methods to provide clean drinking water; yada yada yada… all of these are evidence for the reality of the altruistic part of human nature — and its value to our survival as a species.

  • Mrnaglfar

    Greta Christina,

    Indeed, there is such a thing as “selfish altruism” (reciprocal altruism), and while it is hardwired into us, I think it should be addressed that there’s nothing wrong with being selfish; it all depends on the degree. Those who created the vaccines, while I’m not sure how much money there is to be made off such things, also came away with a great deal of social status. It’s not as if those who created the vaccines mailed them in to a medical center in anonymity; they gave others what they wanted and benefited as well.
    Everyone is selfish to a certain degree, and that can do a whole lot of good for others, but pure altruism would never survive genetically. Any organism that constantly operates at a loss while others operate at a gain is likely to be bred out of the population in a real hurry.

    Just wanted to add that.
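
A toy selection model of the “pure altruism would never survive genetically” claim above. Every parameter is invented for illustration, and it covers only unconditional altruism toward random strangers; it says nothing about kin selection or the reciprocal (conditional) altruism the comment also mentions.

    # Toy model: unconditional altruists pay a fitness cost to hand a benefit to a
    # random member of the population, so on average they trail non-altruists by
    # exactly that cost, and their share shrinks over the generations.
    # All parameters are invented for illustration.
    import random

    COST, BENEFIT = 0.05, 0.15
    BASE_FITNESS = 1.0
    POP_SIZE = 10_000
    GENERATIONS = 60

    population = ["altruist"] * (POP_SIZE // 2) + ["selfish"] * (POP_SIZE // 2)

    for _ in range(GENERATIONS):
        fitness = [BASE_FITNESS] * len(population)
        for i, kind in enumerate(population):
            if kind == "altruist":
                fitness[i] -= COST
                fitness[random.randrange(len(population))] += BENEFIT  # indiscriminate giving
        # the next generation reproduces in proportion to fitness
        population = random.choices(population, weights=fitness, k=POP_SIZE)

    share = population.count("altruist") / POP_SIZE
    print(f"altruist share after {GENERATIONS} generations: {share:.1%}")  # well below the initial 50%

Conditional strategies (helping those who help back) avoid this fate because the benefit is no longer handed out indiscriminately, which is the usual reading of reciprocal altruism.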

  • Christopher

    Greta,

    I’ve heard the arguments for altruism, and have found them unconvincing, as they all draw the same conclusion: the source of the “altruistic” behavior evolved as a mechanism through which one might preserve his genes (e.g., one man sacrificing himself to save his family) – but this is, in and of itself, a selfish motive! Even though the individual in question may sacrifice something, he does gain something else in return for the “selfless” act.

    While contemporary society considers selfishness a vice, I see it as a redeeming virtue to our nature – something to be proud of rather than hide in shame! As I see it, the transhumanist movement is merely the logical conclusion to a life lived selfishly and I say more power to them – as I ultimately wish to reap the benefits of such a society myself…

  • mackrelmint

    bbk,
    yeah I understand what you mean and hadn’t taken the phrase literally myself. I had just wanted to point out how the “as gods” kind of phrase is frequently misunderstood and misused by theists.
    That said, the ideas discussed above regarding transhumanism and being “as” gods by making ourselves entirely dependent on or part of our own technology give me the creeps, and to me sound most unlike being gods, knowing how unreliable and impermanent our technology can be. Simply thinking of wires corroding, optic cables breaking, etc. makes the whole idea of living forever and mitigating risk in this way seem laughable.

  • http://gretachristina.typepad.com/ Greta Christina

    No, I’m not trying to argue for “pure altruism.” I agree with Mrnaglfar that a completely self-sacrificing impulse would have been selected out in a hurry (except in the case of parents sacrificing for their children). I’m just trying to argue this:

    a) It’s absurd to say that, because people can sometimes be selfish, therefore altruism isn’t real;

    b) it’s absurd to argue that, because motives are often mixed, with both selfish and altruistic impulses behind them and with altruism having a selfish component to it, therefore the selfish part of that motivation is the “real” one and the altruism is false;

    c) it’s absurd to argue that, because everyone has at least somewhat selfish motivations and it can be difficult to distinguish between selfishness and altruism, therefore there is no difference in selfishness between, say, Albert Schweitzer and Donald Trump.

    And I think that’s my main point. If you’re going to define the word “selfish” as any and all behavior that benefits you even in the slightest — even if that gain is only that you get a marginal increase in social status, or that you get to privately feel like a good person — then that makes the word “selfish” pretty much meaningless. It’s basically re-defining the word “selfish” as “voluntary.”

    And I think there is a difference between the selfishness of Albert Schweitzer and the selfishness of Donald Trump. Look at the definition of the word “selfish” (here’s Merriam-Webster):

    1: concerned excessively or exclusively with oneself: seeking or concentrating on one’s own advantage, pleasure, or well-being without regard for others

    2: arising from concern with one’s own welfare or advantage in disregard of others (a selfish act)

    This is a useful word. It’s a useful idea, a useful distinction to make. And please note that the definition doesn’t say “concerned with oneself; seeking or concentrating on one’s own advantage, pleasure, or well-being; arising from concern with one’s own welfare or advantage.” The key words are “excessively or exclusively,” “without regard for others,” “in disregard of others.”

    That’s what makes the difference between Albert Schweitzer and Donald Trump. And it’s absurd to act as if, because Schweitzer did what he did for reasons of his own, because he found it satisfying, therefore there is no useful distinction to be drawn between his motivations and Trump’s.

    If people want to behave selfishly – i.e., concerned excessively or exclusively with themselves without regard for others — I doubt that I can argue them out of it. I just wish they’d stop fooling themselves into believing that everyone else is really just like them and simply won’t admit it. Everyone else is not just like you. There are people in the world who care about other people, who have empathy for them, who want to make the world better for everyone and not just for themselves. And the world is a better place because of it. Yes, the care for other people is mixed with self-care. But that doesn’t negate it. The fact that you are missing out on a fundamental human experience is no reason to deny that experience’s very existence.

  • Entomologista

    Philosophers are usually full of shit. But there isn’t anything at all wrong with dreaming or thinking “wouldn’t it be neat if…”. Somebody has to think it up so that other people can figure out later if it’s possible or a good idea. Obviously I’m biased, but I think that this century is going to be defined by advances in biotechnology.

  • http://www.daylightatheism.org/ Ebonmuse

    Greta Christina’s comment, I feel, makes an excellent point:

    If you’re going to define the word “selfish” as any and all behavior that benefits you even in the slightest — even if that gain is only that you get a marginal increase in social status, or that you get to privately feel like a good person — then that makes the word “selfish” pretty much meaningless. It’s basically re-defining the word “selfish” as “voluntary.”

    What this shows is that there’s no way to deny the existence of altruistic behavior in humans – unless you redefine the word so that nothing, by definition, could count as altruism.

  • bbk

    mack: When it comes to the reliability of technology versus the reliability of my own biology, I find it hard to pick favorites. At least with technology, we can build in redundancy and design systems that guarantee nearly 100% up-time. We can also design technology that travels to Mars, reprograms itself when it malfunctions, and then runs for years in extreme environments. I write logistics software that, even though it is not nearly 100% reliable (voice recognition), has the net effect that distribution centers can fill orders with nearly 100% accuracy by eliminating human error. If I were old, it would be a “no-brainer” to use technology to extend my life. We already do – from high-tech surgical tools to drugs designed on supercomputers. And we trust our lives to “fly-by-wire” avionics to take us across continents at hundreds of miles an hour, thousands of feet above the ground. I think when we see the next big thing, we’ll make a rational choice to either use it or send it back to the drawing board.

  • bbk

    What this shows is that there’s no way to deny the existence of altruistic behavior in humans – unless you redefine the word so that nothing, by definition, could count as altruism.

    How about this: selfish behavior can at times maximize mutual benefit, and altruistic behavior can at times cause mutual hardship. Neither one is necessarily better than the other. Natural selection balances the two memes in a given population to best cope with environmental factors. But both are driven by the reward of being favored by natural selection – at heart they are both selfish memes, apart from how each expresses itself.

    CS Lewis struggled with these concepts and it led him to conclude, wrongly, that there must be higher morality that comes from an external source. But more than a century before him, David Ricardo (an economist) came up with the theory of comparative advantage that neatly explains this whole conundrum. The mistake we make is to think that selfishness means there is no cooperation. But as Ricardo demonstrated, 2 selfish agents can reach mutual benefit via trade. On the other hand, altruistic cooperation does not guarantee reciprocity. In fact it’s kind of silly to think about that – doing something altruistically and hoping for something in return doesn’t make sense. If we operated solely on altruism, then would we ever have a workable concept of justice?

    Sometimes, altruism is needed. Many economists believe that the biggest economic gains come from investing money in developing countries rather than industrialized ones. In other words, selfishly investing on Wall Street gives us a lower benefit overall than investing in places like Kenya. But this is a short run / long run issue. It’s altruistic to invest in Kenya in the short run, but in the long run the benefit in trade will justify the altruism in selfish terms. Pretty much all the altruism that I can think of can be explained in selfish terms. It’s almost like polar and Cartesian coordinates: useful in their own ways, but really they’re interchangeable systems that can describe the same things.
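
A small numerical sketch of the Ricardo point above: two self-interested producers end up with more of both goods by specializing and trading. All of the labour costs and hours are invented for illustration; they are not Ricardo’s own figures or anything from the comment.

    # Toy gains-from-trade example in the spirit of comparative advantage.
    # Invented numbers: producer A is better at BOTH goods in absolute terms, yet
    # the combined output of both goods still rises once each side leans toward
    # the good it is relatively better at and they trade.

    HOURS = 120  # labour hours available to each producer; arbitrary

    cost = {                           # hours of labour needed per unit (invented)
        "A": {"wine": 1, "cloth": 2},  # A: absolute advantage in both goods
        "B": {"wine": 3, "cloth": 4},  # B: comparatively better at cloth
    }

    def output(producer, hours_on_wine):
        """Units produced if `producer` spends `hours_on_wine` on wine and the rest on cloth."""
        c = cost[producer]
        return {"wine": hours_on_wine / c["wine"],
                "cloth": (HOURS - hours_on_wine) / c["cloth"]}

    # No trade: each producer splits its hours evenly between the two goods.
    a_alone, b_alone = output("A", 60), output("B", 60)  # A: 60 wine, 30 cloth; B: 20 wine, 15 cloth

    # With trade: A leans toward wine (its comparative advantage), B makes only cloth.
    a_spec, b_spec = output("A", 84), output("B", 0)      # A: 84 wine, 18 cloth; B: 30 cloth

    for good in ("wine", "cloth"):
        before = a_alone[good] + b_alone[good]
        after = a_spec[good] + b_spec[good]
        print(f"{good}: {before:g} units without trade -> {after:g} with specialization")
    # wine: 80 units without trade -> 84 with specialization
    # cloth: 45 units without trade -> 48 with specialization

Because the combined output of both goods rises, there is a way to divide it through trade that leaves each producer with more than it could have made on its own, with neither acting from anything but self-interest.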

  • Petrucio

    I agree with you when we are talking about evolution, and that there’s no real altruism at the gene level – genes being the real entities behind evolution.

    But extrapolating that logic to the individual level is completely bogus thinking. It does not follow that altruistic behavior does not exist at the individual level (even if that behavior is selfish at the gene level), or that it’s not better than selfishness and not something to strive for.

    But even if your thinking were all correct – which I don’t think it is – your comment about the Singularity is also incorrect when you say it’s all about selfishness and that we should channel our selfish tendencies. If you understand the implications of the Singularity, it follows that it would probably be the ultimate altruistic event, and would work out the solution to the problems Adam pointed to, whether he likes and/or believes in the event or not.

    PS: Go get yourself a copy of The Selfish Gene. If you have already read it, go read it again.

  • bbk

    There is a disconnect in saying that altruism exists as a cultural meme but not at the gene level. You have to be able to explain one in terms of the other or else you’re failing to fully explain anything. You should at least have a very good reason for why altruism doesn’t come from a selfish genetic basis and why, then, you can’t explain it in completely selfish terms. Either way, you can’t just dismiss it altogether by saying that one is “evolutionary” and so it doesn’t matter. Biologists and economists are able to model entire systems, including altruism and cooperation, based on purely selfish agents. Why should a sociological discussion be exempt from explaining something just because it’s inconvenient to a particular point of view?

  • http://mindstalk.net Damien R. S.

    The point is that we are not our genes. Empathy may have evolved because it was of benefit to the genes responsible for it. Altruistic memes may evolve because by playing on empathy and status, they can spread faster than they hurt the people who hold them. That doesn’t mean the altruist is acting out of selfish conscious motives. I think we’ve reached the point where the “selfish gene” metaphor is causing more confusion than light, as people get confused between the metaphor of a selfish gene and the ‘real’ selfishness (or not) within an actual brain.
    Schweitzer’s genes are as “selfish” — selected for their propagation — as Trump’s. Schweitzer may even be acting on what makes him feel good as much as Trump. That doesn’t mean we can’t call Trump selfish.

    Hell, that’s arguably the whole evolutionary point. By denigrating self-concerned behavior as selfish, and praising other-considerate behavior (altruism, or even just decency and politeness) we produce the incentives to behave in other-considerate ways. By making Schweitzer feel good when he helps people, his genes or upbringing makes him act naturally to help people, rather than scheming for his benefit. He might behave as if (the great economic caveat) he was scheming for his enlightened self-interest, but his motivations as a person are basically altruistic.

    Where’d this discussion come from? Oh right, Christopher saying the core of human nature is selfishness. Wrong, and Greta’s right. The core of genetic nature is selfishness. But those genes have produced mechanisms which produce a mix of selfish and altruistic behavior. The fact that the altruistic mechanisms actually — back when we were surrounded by close relatives — were of selfish genetic benefit, is irrelevant. We still have the mechanisms, even when surrounded by distant relatives, or unrelated animals, and they still work, thus are as much part of human nature as our greed and territoriality.

    As for the callousness of there always being someone dying of plague or starvation — yeah, how many Americans or Europeans are dying of those? Hardly any. They can be beaten, and it’s an odd combination of pessimism and optimism to say “those are unsolvable problems, let’s work on beating death instead.”

    And, well, there’s self-interest. Non-desperate people are less likely to steal your wealth. Educated and healthy people who are part of the trade and science networks can speed up development and thus bring your transhumanist or Singularity future faster. What if we had six times as many scientists and engineers working on problems?

  • Petrucio

    What if we had six times as many scientists and engineers working on problems?

    That would probably happen many times over if the Singularity ever happens. Not six, maybe six thousand, likely more. Not real scientists literally, but science output.

    I’m with you on the selfish/altruistic part.

