Richard Carrier and Luke Muehlhauser on time to superhuman AI

A few weeks ago my new boss, Luke Muehlhauser, did an AMA on Reddit. I highly recommend this post by Geek Dad for a full summary, but one thing that stuck out was Luke saying, “I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first).”

Translation for people who don’t speak statistics: “I don’t really know, I’d guess maybe in 50 years, could be more, could be less.” I’d have leaned closer to saying 100 years, but on reflection 50 years sounds frighteningly plausible. Now until 2060 is roughly the same span of time as from the first supercomputer (not very “super” by today’s standards, given that it was less powerful than many smartphones) to IBM’s Watson.

(By the way, while Luke does mention that there are smart people who expect superhuman AI in less than 50 years, he also says 10 years is extremely unlikely.)

I’m posting this now because just recently, I stumbled across a quote from Richard Carrier’s essay “Are We Doomed?” (a long piece I’d read before, but which has too many interesting bits to catch all at once). There, Carrier flatly states, “machines will outthink humans (and be designing better versions of themselves than we ever could) within fifty to a hundred years.”

Doesn’t that pretty much make Carrier as much of a crazy Singularity fan as Luke? Well, okay, Carrier goes on to say this “doesn’t predict anything remarkable.” But wouldn’t machines that out-think humans obviously be remarkable, with huge social impacts? Even if the process of self-improvement is slow (contrary to what Eliezer Yudkowsky thinks), we’re talking about robots potentially replacing most or all human workers, and also taking over and doing horrible things to us if we screw up in programming them.

I suspect a large part of what sets people like Luke and me apart from the general population is not differing predictions about the basic technology, but the fact that most people aren’t good at thinking through the implications of certain technologies. In Star Trek: The Next Generation, the AI on the holodeck is fantastic; in one episode they tell the computer to create an opponent capable of defeating Data, and it does.

But they never think to tell it, “create a holo-officer capable of defeating the Romulans.” The fact that the audience buys this for even a minute is indicative of how bad we are at thinking through the implications of new technologies.
