Richard Carrier and Luke Muehlhauser on time to superhuman AI

A few weeks ago my new boss, Luke Muehlhauser, did an AMA on Reddit. I highly recommend this post by Geek Dad for a full summary, but one thing that stuck out was when Luke said, “I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first).”

Translation for people who don’t speak statistics: “I don’t really know, I’d guess maybe in 50 years, could be more, could be less.” I’d have leaned closer to saying 100 years, but on reflection 50 years sounds frighteningly plausible. Now until 2060 is roughly the same span of time as from the first supercomputer (not very “super” by today’s standards, given that it was less powerful than many smartphones) to IBM’s Watson.

(By the way, while Luke does mention that there are smart people who expect superhuman AI in less than 50 years, he also says 10 years is extremely unlikely.)
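(If you want to see what a “wide probability distribution with a mode around 2060” means concretely, here’s a minimal sketch in Python. The log-normal shape and every number in it are my own illustrative assumptions, chosen only so the mode lands near 2060; they are not anything Luke has actually specified.)

```python
import numpy as np
from scipy import stats

# Purely illustrative numbers, not Luke's actual distribution: model
# "years until the first superhuman AI, counted from 2013" as log-normal,
# with parameters picked so the mode falls near 2060.
s, scale = 0.7, 77.0                          # shape and scale of the log-normal
years_ahead = stats.lognorm(s=s, scale=scale)

mode_year = 2013 + scale * np.exp(-s**2)      # mode of a log-normal = scale * exp(-s^2)
p_within_10 = years_ahead.cdf(10)             # probability it happens by ~2023
p_within_50 = years_ahead.cdf(50)             # probability it happens by ~2063

print(f"mode: ~{mode_year:.0f}")              # ~2060
print(f"P(within 10 years): {p_within_10:.1%}")  # tiny, matching "extremely unlikely"
print(f"P(within 50 years): {p_within_50:.1%}")  # sizable, but well under 100%
```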

I’m posting this now because just recently, I stumbled across a quote from Richard Carrier’s essay “Are We Doomed?” (a long piece I’d read before, but which has too many interesting bits to catch all at once). There, Carrier flatly states, “machines will outthink humans (and be designing better versions of themselves than we ever could) within fifty to a hundred years.”

Doesn’t that pretty much make Carrier a crazy Singularity fan just as much as Luke? Well, okay, Carrier goes on to say this “doesn’t predict anything remarkable.” But wouldn’t machines that out-think humans obviously be remarkable, with huge social impacts? Even if the process of self-improvement is slow (contrary to what Eliezer Yudkowsky thinks), we’re talking about robots potentially replacing most or all human workers, and also taking over and doing horrible things to us if we screw up in programming them.

I suspect a large part of what sets apart people like me and Luke from the general population is not differing predictions about the basic technology, but the fact that most people aren’t good at thinking through the implications of certain technologies. In Star Trek: The Next Generation, the AI on the holodeck is fantastic; in one episode they tell the computer to create an opponent capable of defeating Data and it does. 

But they never think to tell it, “create a holo-officer capable of defeating the Romulans.” The fact that the audience is able to buy that for a minute is indicative of how bad we are at thinking about the implications of new technologies.

  • Alexander Kruel
    • Chris Hallquist

      Thanks!

  • JHendrix

    I had no idea that’s what Luke went off to do, though I only found his atheism website well after he left it.

As a computer engineer, I’m still leery of predictions like that about AI. Computers will only be as good as their interface to the external world. In terms of making something that could sit in a massive server (farm) and just think/produce designs, maybe you’re right.

  • qbsmd

Star Trek is probably a bad example to use: the government/society in their universe seemed to have an extreme taboo against robots in general. Why didn’t they ever use drones or ground robots when officers on away missions got caught in firefights? Why weren’t some of their science missions conducted entirely by long-range probes? Why didn’t they have autonomous starships for fighting space battles? All of these things would have been well within their technology, and could have saved lives. They had problems with their technology getting out of their control all the time, but most of that could have been solved by making those systems less advanced and less intelligent.

    • Randomfactor

      Well, as to the autonomous starship question, there’s the M5 debacle serving as an object lesson. And Nomad didn’t work out so well either…

  • Dylan

Perhaps the reason why the government in Star Trek didn’t use robots is that Star Trek is a sci-fi TV show made in ’66, not the real world.

    • Chris Hallquist

      Well, that explanation is correct as far as it goes, but surely it’s not a complete explanation.

  • Pingback: In a non-futurismic world, human-level AI changes everything forever
