Transhumanism: The World’s Most Dangerous Idea? (Nick Bostrom)

October 31, 2013

So tonight I am attending a Tippling Philosophers get-together in Fareham, UK. We are discussing transhumanism. For those who don’t know, this is the idea (or the movement behind the idea) that we can adapt our bodies and cognitive abilities using technology: prolonging our lives, choosing our babies’ genotypes and phenotypes, and so on – the harnessing of technology to change and advance what we might (erroneously?) define as our humanity, biological or otherwise.

Here is a really interesting essay from some years ago from Nick Bostrom to whet the appetite:


Transhumanism: The World’s Most Dangerous Idea?

Nick Bostrom (2004)

[Short version: Foreign Policy, in press; Full version: Betterhumans]

“What idea, if embraced, would pose the greatest threat to the welfare of humanity?” This was the question posed by the editors of Foreign Policy in the September/October issue to eight prominent policy intellectuals, among them Francis Fukuyama, professor of international political economy at Johns Hopkins School of Advanced International Studies, and member of the President’s Council on Bioethics.

And Fukuyama’s answer? Transhumanism, “a strange liberation movement” whose “crusaders aim much higher than civil rights campaigners, feminists, or gay-rights advocates.” This movement, he says, wants “nothing less than to liberate the human race from its biological constraints.”

More precisely, transhumanists advocate increased funding for research to radically extend healthy lifespan and favor the development of medical and technological means to improve memory, concentration, and other human capacities. Transhumanists propose that everybody should have the option to use such means to enhance various dimensions of their cognitive, emotional, and physical well-being. Not only is this a natural extension of the traditional aims of medicine and technology, but it is also a great humanitarian opportunity to genuinely improve the human condition.

According to transhumanists, however, the choice whether to avail oneself of such enhancement options should generally reside with the individual. Transhumanists are concerned that the prestige of the President’s Council on Bioethics is being used to push a limiting bioconservative agenda that is directly hostile to the goal of allowing people to improve their lives by enhancing their biological capacities.

So why does Fukuyama nominate this transhumanist ideal, of working towards making enhancement options universally available, as the most dangerous idea in the world? His animus against the transhumanist position is so strong that he even wishes for the death of his adversaries: “transhumanists,” he writes, “are just about the last group that I’d like to see live forever”. Why exactly is it so disturbing for Fukuyama to contemplate the suggestion that people might use technology to become smarter, or to live longer and healthier lives?

Fierce resistance has often accompanied technological or medical breakthroughs that force us to reconsider some aspects of our worldview. Just as anesthesia, antibiotics, and global communication networks transformed our sense of the human condition in fundamental ways, so too we can anticipate that our capacities, hopes, and problems will change if the more speculative technologies that transhumanists discuss come to fruition. But apart from vague feelings of disquiet, which we may all share to varying degrees, what specific argument does Fukuyama advance that would justify foregoing the many benefits of allowing people to improve their basic capacities?

Fukuyama’s objection is that the defense of equal legal and political rights is incompatible with embracing human enhancement: “Underlying this idea of the equality of rights is the belief that we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence. This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism. But modifying that essence is the core of the transhumanist project.”

His argument thus depends on three assumptions: (1) there is a unique “human essence”; (2) only those individuals who have this mysterious essence can have intrinsic value and deserve equal rights; and (3) the enhancements that transhumanists advocate would eliminate this essence. From this, he infers that the transhumanist project would destroy the basis of equal rights.

The concept of such a “human essence” is, of course, deeply problematic. Evolutionary biologists note that the human gene pool is in constant flux and talk of our genes as giving rise to an “extended phenotype” that includes not only our bodies but also our artifacts and institutions. Ethologists have over the past couple of decades revealed just how similar we are to our great primate relatives. A thick concept of human essence has arguably become an anachronism. But we can set these difficulties aside and focus on the other two premises of Fukuyama’s argument.

The claim that only individuals who possess the human essence could have intrinsic value is mistaken. Only the most callous would deny that the welfare of some non-human animals matters at least to some degree. If a visitor from outer space arrived on our doorstep, and she had consciousness and moral agency just like we humans do, surely we would not deny her moral status or intrinsic value just because she lacked some undefined “human essence”. Similarly, if some persons were to modify their own biology in a way that alters whatever Fukuyama judges to be their “essence,” would we really want to deprive them of their moral standing and legal rights? Excluding people from the moral circle merely because they have a different “essence” from “the rest of us” is akin to excluding people on the basis of their gender or the color of their skin.

Moral progress in the last two millennia has consisted largely in our gradually learning to overcome our tendency to make moral discriminations on such fundamentally irrelevant grounds. We should bear this hard-earned lesson in mind when we approach the prospect of technologically modified people. Liberal democracies speak to “human equality” not in the literal sense that all humans are equal in their various capacities, but that they are equal under the law. There is no reason why humans with altered or augmented capacities should not likewise be equal under the law, nor is there any ground for assuming that the existence of such people must undermine centuries of legal, political, and moral refinement.

The only defensible way of basing moral status on human essence is by giving “essence” a very broad definition; say as “possessing the capacity for moral agency”. But if we use such an interpretation, then Fukuyama’s third premise fails. The enhancements that transhumanists advocate – longer healthy lifespan, better memory, more control over emotions, etc. – would not deprive people of the capacity for moral agency. If anything, these enhancements would safeguard and expand the reach of moral agency.

Fukuyama’s argument against transhumanism is therefore flawed. Nevertheless, he is right to draw attention to the social and political implications of the increasing use of technology to transform human capacities. We will indeed need to worry about the possibility of stigmatization and discrimination, either against or on behalf of technologically enhanced individuals. Social justice is also at stake and we need to ensure that enhancement options are made available as widely and as affordably as possible. This is a primary reason why transhumanist movements have emerged. On a grassroots level, transhumanists are already working to promote the ideas of morphological, cognitive, and procreative freedoms with wide access to enhancement options. Despite the occasional rhetorical overreaches by some of its supporters, transhumanism has a positive and inclusive vision for how we can ethically embrace new technological possibilities to lead lives that are better than well.

The only real danger posed by transhumanism, it seems, is that people on both the left and the right may find it much more attractive than the reactionary bioconservatism proffered by Fukuyama and some of the other members of the President’s Council.

[For a more developed response, see “In Defense of Posthuman Dignity”, Bioethics, 2005, Vol. 19, No. 3, pp. 202-214.]


Comments
  • ThePrussian

    Speaking as a fully blown transhumanist myself, I find Fukuyama a deeply silly man.

  • labreuer

    Star Trek explores this idea extensively through its Prime Directive, which:

         (1) Disallows contact with pre-warp civilizations.
         (2) Disallows ‘interference’ with post-warp civilizations.

    Here’s Jean Luc Picard, from the episode Symbiosis:

    The Prime Directive is not just a set of rules; it is a philosophy… and a very correct one. History has proven again and again that whenever mankind interferes with a less developed civilization, no matter how well intentioned that interference may be, the results are invariably disastrous.

    C.S. Lewis explores this idea in That Hideous Strength, although it has angels and demons in it so hey.

    I think this idea raises interesting questions vis-à-vis the Problem of Evil, and how we think God ought to interfere and ‘fix’ things. We like to posit that because of God’s omniscience and omnipotence, he could instantaneously fix things (I’m ignoring the debate about how he would have let things get this way). Consider the following mathematical limit:

         (3) lim i → ∞: K(i), P(i)

    K(i) is one’s knowledge at some level i, and P(i) is one’s power at some level i.

    What I’m trying to get at is: how would beings act as they have more and more knowledge and more and more power? Can we somehow compare the actions as i increases more and more? Suppose that K(i) includes scientific and moral knowledge.

    Does (3) converge on an entity fixing everything that is wrong with the world, instantaneously? Or, weakening that question, would such an entity fix a lot of what is wrong, instantaneously?

    I posit that if the idea is to get all beings to increase in knowledge and power, they need to do it together without too much disparity between them, and the steps they take have to be sufficiently small. This, however, says that God wouldn’t just fix everything/most things all at once, if he were to suddenly show up. But this contradicts my current model of the best atheists’ thinking on the matter. So I need help to improve my model. :-)
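    The posit above – that beings should increase in knowledge and power together, in small steps, without too much disparity between them – can be sketched as a toy simulation. This is only an illustration of the commenter’s idea; the function name, step size, and disparity bound are all hypothetical, not from the comment:

    ```python
    def grow_together(levels, step=1.0, max_disparity=2.0, rounds=10):
        """Advance each agent's level (a stand-in for K(i)/P(i)) by at most
        `step` per round, but never let any agent get more than
        `max_disparity` ahead of the least advanced agent."""
        levels = list(levels)
        for _ in range(rounds):
            floor = min(levels)
            # An agent may advance only while staying within
            # max_disparity of the current floor.
            levels = [min(lv + step, floor + max_disparity) for lv in levels]
        return levels

    levels = grow_together([0.0, 0.5, 1.5])
    print(levels)                       # everyone has advanced
    print(max(levels) - min(levels))    # disparity stays bounded
    ```

    The point of the sketch is that under these constraints progress is gradual and collective by construction: no instantaneous “fix”, and no agent racing arbitrarily far ahead of the rest.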

    • There is possibly the need to draw what would arguably be an arbitrary and subjective line. We are already doing transhumanist things: we already desire and seek to prolong life and knowledge with technology, transforming the ‘natural’ (I use that word with an awful lot of caveats, because I think there is, by definition, no such thing as unnatural) human body into something ‘greater’, overcoming illnesses and defects, and so on.

  • Pingback: The necessity of transhumanism | The Prussian

  • Pingback: The necessity of transhumanism | Skeptic Ink