Bracing for an Intelligence Explosion [Guest Post]

While I’m on vacation in Ireland, I’ve invited Unequally Yoked readers to assemble a wunderkammer of guest posts on books, practices, ideas, etc. that particularly delight them. Claire began this series with her post on prayers following communion, K.Chen followed on the virtues of YouTube covers, Ben Conroy talked about how stories enchant us, and today’s post is from James D. Miller, an associate professor of economics at Smith College and the author of Singularity Rising.

Mankind is going to become a lot smarter, both by creating brilliant machines and by augmenting our own intelligence. In my book Singularity Rising I define a singularity as a threshold of time at which enhanced intelligence radically remakes human civilization. Our primary moral goal should be to achieve a good singularity.

The many independent paths to intelligence enhancement, combined with the massive economic and military incentives to boost human and machine intelligence, make me confident that we will reach a singularity, even if it’s a destination we should want to avoid. I don’t have the space here to adequately explain the numerous ways we might increase intelligence, but the two main paths are the exponential progress described by Moore’s law, under which computing power regularly doubles, and the rapidly falling price of gene sequencing.
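
To get a feel for how quickly regular doubling compounds, here is a back-of-the-envelope sketch in Python; the eighteen-month doubling period and the unit starting capacity are illustrative assumptions, not figures from this post:

```python
# Illustrative sketch of compound doubling. The 18-month doubling period
# and the unit starting capacity are assumptions chosen for illustration.
DOUBLING_PERIOD_YEARS = 1.5

def capacity(years, start=1.0):
    """Computing capacity after `years`, doubling every DOUBLING_PERIOD_YEARS."""
    return start * 2 ** (years / DOUBLING_PERIOD_YEARS)

for y in (3, 6, 15, 30):
    print(f"after {y:2d} years: {capacity(y):,.0f}x starting capacity")
# Thirty years of 18-month doublings is 2**20, roughly a million-fold increase.
```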

Computers are doing more and more tasks, and once they master computer design and programming they will likely explode in intelligence, as Moore’s law supercharges the growth in computer capacity. And since intelligence is a reflective superpower, able to turn in on itself to discover its own workings, computers that get smarter will figure out better ways of making themselves smarter still.

After we learn the genetic basis of intelligence we will likely be able to birth brilliant children, possibly by (as I wrote about here) eliminating mutational load. Every human carries many bad mutations, a good number of which undoubtedly damage cognition. Harmful mutations tend to be rare because evolution works against their spread, so if you have a gene variant that almost no one else has, the odds that the variant is harmful are much greater than the odds that it is beneficial. Once we learn to edit an embryo’s genome we will have the capacity to create children much smarter than any who have ever existed by banishing mutational load. When you combine this eugenics with likely advances in smart drugs, brain training, brain implants, and neurofeedback, the future will give us the capacity to reliably produce hyper-geniuses as far above Einstein in mathematical ability as he was above me.
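
A toy Bayes calculation shows why rarity is evidence of harm; every number below is a made-up assumption for illustration, not an empirical estimate:

```python
# Toy illustration of the rare-variant argument; all numbers are assumptions.
# Selection keeps harmful variants rare, so rarity raises the odds of harm.
prior_harmful, prior_beneficial = 0.5, 0.5   # assumed priors before seeing rarity

# Assumed chance a variant is rare given its effect: selection purges harmful
# variants (keeping them rare), while beneficial ones tend to spread.
p_rare_given_harmful = 0.9
p_rare_given_beneficial = 0.1

# Bayes' rule: posterior odds = prior odds * likelihood ratio.
posterior_odds = (prior_harmful / prior_beneficial) * (
    p_rare_given_harmful / p_rare_given_beneficial
)
print(f"odds harmful vs. beneficial, given rarity: {posterior_odds:.0f} to 1")
# With these made-up numbers, a rare variant is 9x likelier to be harmful.
```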

Even if the United States rejects eugenics for moral reasons, not all other countries will. No world body is strong enough to stop eugenics. And a world in which Chinese eight-year-olds, but only Chinese eight-year-olds, are regularly mastering calculus is one in which the United States will almost certainly embark on an anything-goes effort to increase the IQ of its next generation.

The post-singularity future will belong to the most intelligent, and they will likely soon acquire the means of colonizing the universe. For every person alive today, a trillion more might yet live meaningful lives if the singularity goes well. But a bad singularity could result in mankind’s extinction or in the resources of the universe being used for some pointless (to us) purpose of the ruling powers.

Three tech titans, Peter Thiel, Elon Musk, and Larry Page, have by their words, deeds, and donations shown that they believe in the plausibility of a singularity. Given these men’s skill at predicting and shaping technological trends, their authority makes a powerful argument for the likelihood of the singularity. Even if you assign only a ten percent chance to the singularity occurring, the massive stakes involved mean that logic analogous to Pascal’s wager imposes a moral duty on us, perhaps greater than any other secular duty, to work towards a utopian singularity.
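
The wager-style arithmetic can be made explicit. The ten percent probability comes from the sentence above and the trillion-lives-per-person figure from earlier in this post; the world population is rounded to a stand-in value:

```python
# Expected-value sketch of the wager argument. The 10% probability and the
# trillion future lives per person alive today come from the post; rounding
# world population to 8 billion is an assumption for illustration.
p_singularity = 0.10
people_alive_today = 8e9          # rough stand-in for current world population
future_lives_per_person = 1e12    # "a trillion more might yet live" per person

expected_lives = p_singularity * people_alive_today * future_lives_per_person
print(f"expected future lives at stake: {expected_lives:.1e}")
# Even discounted by 90% uncertainty, the expected stakes stay astronomical,
# which is the sense in which the wager imposes a duty to aim for a good outcome.
```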

 

Leah again: There’s plenty to debate in James’s pitch, and I hope he’ll stick around in the comments, but, personally, I’m most curious about the assumption that (leaving ethics and feasibility entirely aside) eugenics-assisted children would necessarily become dominant in a dangerous way over everyone else. Smarts can certainly smooth your way in the world, in the same way that whiteness or maleness or attractiveness or good family connections can, but that doesn’t necessarily mean that people will want their way smoothed toward power.

Plenty of Ivy League grads are quite happy to take their smarts and advantages to Wall Street and investment banking, picking money and (eventual) leisure over anything more disruptive or exciting. Many mathematics wunderkinds are aiming for grad school and a life of research, for example, so a country shooting for eight-year-olds enraptured with calculus might find it had bred a class of Ferdinand the Bulls, enchanted by contemplation of the world and not that responsive to its desires.

 

