Will human-level AI be created this century? Nobody really knows

That’s the take-away from MIRI executive director Luke Muehlhauser’s recent blog post, “When Will AI Be Created?” He writes:

We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.

How confident is “confident”? Let’s say 70%. That is, I think it is unreasonable to be 70% confident that AI is fewer than 30 years away, and I also think it’s unreasonable to be 70% confident that AI is more than 100 years away.

Luke has made similar comments before. In his Reddit AMA, he said he has “a pretty wide probability distribution over the year for the first creation of superhuman AI,” and in Intelligence Explosion: Evidence and Import he and his coauthor Anna Salamon write:

So, when will we create AI? Any predictions on the matter must have wide error bars. Given the history of confident false predictions about AI (Crevier 1993), and AI’s potential speed bumps, it seems misguided to be 90% confident that AI will succeed in the coming century. But 90% confidence that AI will not arrive before the end of the century also seems wrong, given that: (a) many difficult AI breakthroughs have now been made (including the Gödel machine and AIXI), (b) several factors, such as automated science and first-mover incentives, may well accelerate progress toward AI, and (c) whole brain emulation seems to be possible and have a more predictable development than de novo AI. Thus, we think there is a significant probability that AI will be created this century. This claim is not scientific—the field of technological forecasting is not yet advanced enough for that—but we believe our claim is reasonable.

This might surprise some people. Given MIRI’s mission, they might expect it to be led by people who claim to be sure human-level AI will be created this century, if not sooner. But that’s thinking about the issue the wrong way. While it might be hard to predict the details of how human-level AI would play out, there are good reasons to think that regardless of the details, the size of the impact would be huge. And given that, plus even a 10% chance of human-level AI being developed this century, it seems like we should be putting some effort into preparing for it.
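To make the shape of that argument concrete, here’s a toy expected-value sketch in Python; every number is a placeholder chosen for illustration, not anyone’s actual estimate:

    # Toy expected-value sketch: all numbers are illustrative placeholders.
    p_ai_this_century = 0.10       # even a modest probability of human-level AI...
    harm_if_unprepared = 100.0     # ...times a huge (stipulated) impact...
    cost_of_preparing = 1.0        # ...can dwarf a comparatively small preparation cost.

    expected_harm_avoided = p_ai_this_century * harm_if_unprepared
    print(expected_harm_avoided)                      # 10.0
    print(expected_harm_avoided > cost_of_preparing)  # True: preparing looks worthwhile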

It seems like people have a really hard time dealing with this kind of uncertainty. They want to round all probabilities up to one or down to zero. On the “down to zero” side, here’s Luke again, from his post “Overconfident Pessimism”:

I am blessed to occasionally chat with some of the smartest scientists in the world, especially in computer science. They generally don’t make confident predictions that certain specific, difficult, insight-based technologies will be developed soon. And yet, immediately after agreeing with me that “the future is very hard to predict,” they will confidently state that a specific, difficult technology is more than 50 years away!

Side note: I know a lot of people don’t like MIRI (as was demonstrated in the comments on Luke’s guest post). On the “should you donate to MIRI” question, I’ll say this: they are, unfortunately, one of the very few organizations working on these issues, the other main one being the Future of Humanity Institute, with which they frequently partner; in fact, leaders of both organizations have said their work is complementary.

That means that if you think what they’re doing is helping at all, there’s a pretty good case for donating to them, given the importance of the issues. On the other hand, you might concede that we should be working on these issues, but think their particular approach is so unlikely to be effective that you’re not going to donate, even though you wish there were a better organization for you to donate to.

That suggests there might be advantages to having a plurality of organizations working on dealing with the impact of AI. On the one hand, if you think you know the best approach to that issue, it makes sense to put all your money on the best approach (taking diminishing returns into account). On the other hand, if there’s unresolvable disagreement about what approach will work best, it might help to have several organizations around with different approaches, so people can donate in accordance with their opinions on which approach is best.

  • http://kruel.co/ Alexander Kruel

    If there is a good case for donating to them, why do organizations that evaluate charities disagree? GiveWell disagrees, Peter Singer’s ‘The Life You Can Save’ disagrees, and ‘Giving What We Can’ doesn’t recommend MIRI either.

    As far as I know there is also very little agreement among AI researchers that researching AI risks is very important.

    If you are right, then the only reasons I can see are that you are either (1) much smarter than all those people, (2) more rational, or (3) know something that all those people do not know.

    If you wonder why many people don’t like MIRI, then reflect on the above reasons and how they are perceived by people who don’t find their arguments at all convincing. Especially given that the majority of MIRI supporters have never done any AI research themselves, and that the issue in question is more complicated than other issues that you can only begin to judge after many years of hard research.

    Another perplexing issue is how people who actually believe what MIRI claims are willing to support it when MIRI is the only organisation whose mission implies the necessity of taking over the world by means of a superhuman artificial general intelligence, before anyone else can do it (because everyone else is wrong). An organisation which does not feature any peer review and which lacks any transparency.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      >If there is a good case for donating to them, why do organizations that evaluate charities disagree?

      “If A, how come B?” is generally an invalid argument form; it slips in the assumption that A&B is somehow unlikely without bothering to justify that assumption. If that was a serious question and not a rhetorical one, I’m sure you already know about Holden Karnofsky’s article on the Singularity Institute (when that was what they were called), and about Luke’s reply.

      And I wouldn’t be surprised if Peter Singer simply hadn’t thought very hard about the issue when he wrote The Life You Can Save. Googling, I see he’s written this article about AI where he claims, “For the moment, a more realistic concern is not that robots will harm us, but that we will harm them.” I agree that the ethics of how we treat robots is potentially very important (and figuring out what to do about that is part of “preparing ourselves for the arrival of human-level AI”), but once we get to the point where it’s plausible that robots deserve ethical consideration, we’ll also be at least close to the point where they could potentially do great harm to us.

      As for what AI researchers in general think about this, a survey of participants at FHI’s Winter Intelligence Conference found that, “Views on the eventual consequences [of AGI] were bimodal, with more weight given to extremely good and extremely bad outcomes. This was not due to polarization between two views, but that many gave strong weight to both extremes simultaneously: AGI poses extreme benefits and risks.” Of course, the attendees of this particular conference may not be representative of AI experts generally, and if you have better data you’re welcome to share it (from what I can tell, though, not many studies have been done of the opinions of AI researchers on these kinds of issues).

      Other than that, I’m not sure what part of this post you actually disagree with. Do you disagree with the claim that, “we ought to be working on preparing ourselves for the arrival of human-level AI”? Or are your issues just with MIRI specifically?

      • http://kruel.co/ Alexander Kruel

        A&B is unlikely in this case. Whether you should contribute money to MIRI is a very complex problem involving questions concerning artificial general intelligence, ethics and rationality.

        If the only evidence in favor of climate change were an informal sequence of blog posts and a handful of papers that have not been reviewed by experts, would you consider contributing money to the same organization that wrote those papers in order to help them with their potentially dangerous and unnecessary geoengineering project?

        I have seen the replies to Holden Karnofsky’s post. And, like all of the evidence in favor of AI risk taken together, at this point in time they reduce an incredibly complex math problem that nobody understands to an appeal to intuition, whose argumentative strength comes down to a tautology about how something that is better at an activity is better at it.

        If you believe that Peter Singer simply hasn’t thought about it very hard yet, did you consider asking him?

        One of the first and most important actions one could possibly take before contributing money to MIRI is asking independent experts. But instead it seems that most AI risk advocates have withdrawn from real-world feedback loops and concentrate on echo chambers like LessWrong.

        I don’t disagree with existential risk research. I disagree with the predominating black-and-white position held by most people, when there are many reasonable levels of disagreement.

        • http://patheos.com/blogs/hallq/ Chris Hallquist

          >If you believe that Peter Singer simply hasn’t thought about it very hard yet, did you consider asking him?

          How is Peter Singer about answering e-mail? My guess is he gets a ton and my chances of getting a reply are low, but if he’s the type to try to reply to as much of his mail as possible, I may try it.

  • Alexander Johannesen

    Well, I’m one of those who think the risks associated with the emergence of AI are a bit on the funny side. Not that risks shouldn’t be thought about; that’s exactly what smart people all over the place should be doing all the time, of course. No, I worry more that the current set of eyes are *not* AI practitioners or developers or researchers, but people on the edge looking in and speculating wildly about future possibilities we know almost nothing about, linking such things as philosophy and reason to a future AI that most definitely, absolutely will not see it their way.

    My criticism comes not at all through a negative lens on MIRI or LessWrong; I follow both with interest, and occasionally join in. And, heck, I support what it is that they’re doing for the most part; I think their goals are probably worth pursuing. However, I feel that some assertions made by some people associated with them are somewhat hyperbolic and leaning towards the fantastical, especially when they justify their purpose and direction (hence my criticism in said previous post). Sure, LessWrong suffers under a heavy load of smart people asserting mere possibilities as certainty, but that happens in a lot of places where smart people who understand the basic concepts, but aren’t doing the actual work as such, gather. If nothing else, that’s my main criticism: the lack of a strong link between MIRI and its representatives on the one hand, and actual AI research on the other.

    PS. I should also point out that I see much, much more naivety (the good kind) and humility in some of the more serious papers that have come out of MIRI (even though I could pick nits about most assumptions made), so I would encourage people to read those rather than just skimming blog posts about them.

    • hf

      See, you talk like you should already know that people from Google worked with MIRI on this and the follow-up.

      • Alexander Johannesen

        Is there some significance to Google having worked on this paper, or to the paper itself, that I seem to have missed, going by what I’ve written? Please explain.

        • hf

          You tell me, I don’t understand your objection.

          I do think the paper shows that the self-modifying AI approach can work in principle. Separately, I think if you want to argue people at MIRI would change their positions given exposure to AI research, their exposure to people from Google should give you pause. Perhaps you should ask if this quote accurately represents the motives of the latter:

          If we hadn’t been trying to solve this problem (with hope stemming from how it seemed like the sort of thing a reflective rational agent ought to be able to do somehow), this would be just another batch of impossibility results in the math literature.

          • Alexander Johannesen

            Hmm, you seem rather persistent in not explaining what it is that I said that you’re reacting to? Are you referring to my criticism of a lack of a strong connection between MIRI people and AI research?

            If so: why should I pause at people from Google getting in on that paper? Is Google doing something specific you know about in AI that should make me change my mind? Is Google to be considered the crux of AI research these days and I somehow missed the memo? And does their involvement with one paper MIRI has created, regarding something quite abstract and far removed from the reality of AI, somehow cast things in a new light? Can you please explain the relevance, and how it goes against something I’ve said?

          • http://patheos.com/blogs/hallq/ Chris Hallquist

            Without commenting on the rest of this, yeah Google is throwing a lot of money at AI research. As I say in this post, I don’t know if we’ll develop human level AI any time soon, but I think Google belongs on a list of top 5 groups most likely to do it. (A list which I actually wouldn’t put MIRI on.) Their director of research, Peter Norvig, is one of the top AI guys on the planet.

          • Alexander Johannesen

            I’m not sure Google is after the kind of AI that we’re talking about here, nor that they’re in the running for one. They’re in the same boat as NASA and other major operators, somewhat removed from the AI scene that works towards an artificial mind. Back in my day we drew a distinction between the function of applicability and the reasoning centre, kinda similar to the deep / shallow distinction sometimes used. I of course disapprove of both of these on principle, even when they are useful, such as now. :)

            Google and others very often use mathematical / logical reasoners, often skipping long neural paths through a ton of trickery (and not trickery as in ‘bad’, but as in ‘a quick function that gives the same result as slow reasoning’), and this was also the crux of what I was into as well: you can simulate complex reasoning through, well, cheating; derivative functionality, histographic statistics, cached paths, various short-cutting algorithms over large graphs, and the like. None of this is AI; it’s simulation.
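            To give a purely illustrative toy sketch of the ‘cached paths’ kind of shortcut (the graph, numbers, and function names here are made up, not anything Google actually runs): compute a shortest path the slow way once, then answer repeated queries from a cache.

                from functools import lru_cache
                import heapq

                # A tiny toy graph: adjacency list with edge weights (purely hypothetical).
                GRAPH = {
                    'a': [('b', 1), ('c', 4)],
                    'b': [('c', 1), ('d', 5)],
                    'c': [('d', 1)],
                    'd': [],
                }

                @lru_cache(maxsize=None)  # the 'cached path' trick: compute once, then just look it up
                def shortest_distance(start, goal):
                    # Dijkstra's algorithm: the expensive 'slow reasoning' step.
                    dist = {start: 0}
                    queue = [(0, start)]
                    while queue:
                        d, node = heapq.heappop(queue)
                        if node == goal:
                            return d
                        if d > dist.get(node, float('inf')):
                            continue
                        for neighbor, weight in GRAPH.get(node, []):
                            nd = d + weight
                            if nd < dist.get(neighbor, float('inf')):
                                dist[neighbor] = nd
                                heapq.heappush(queue, (nd, neighbor))
                    return float('inf')

                print(shortest_distance('a', 'd'))  # computed the slow way the first time
                print(shortest_distance('a', 'd'))  # answered instantly from the cache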

            Don’t get me wrong, I would love AI to emerge, and definitely from Google, but I don’t agree that they’re in this game at all. Simulation will take us quite far and solve huge swaths of problems, for sure, but it ain’t AI; it ain’t a mind.

            (Should also mention that I reject that math and / or logic will be the language of AI, or even stand as a basis for it, but that’s a somewhat different discussion)

          • http://patheos.com/blogs/hallq/ Chris Hallquist

            What are you assuming “AI” means here? For many purposes, it doesn’t really matter how something is done as long as it’s done. Unless your point is that the methods Google is working on now aren’t remotely sufficient for doing everything humans do.

            See here for my attempt to avoid some of the conceptual/definitional issues.

          • Alexander Johannesen

            Strong AI, I suppose, although I have problems with the categorisation that’s in use that are mostly technical rather than philosophical (Turing’s rules, for example). I’m supposing the quest of AI is consciousness (simulated or otherwise); however, that quest is more tightly linked to academia than to any commercial application of such. Or, to put it differently, I doubt very much that MIRI feels threatened by the sort of AI that passes specific Turing Tests, the stuff we’re currently involved in, that Google is developing; it is so far removed from thinking machines that I’m starting to wonder how MIRI actually categorises these threats / AIs. What I’ve read so far is rather loose, although I liked your paper quite a lot, and to be honest, I think we for the most part agree.

            However, given the consciousness goal, I think the sum of all Turing tests that amount to consciousness is still very much a dark field (ie. we know nothing), and my pet theory is that current mathematical and / or logical models are red herrings, at best. There’s something … imprecise about consciousness. We need to break *free* from the constraints of the mathematical models we create in order to get there; we have created mathematics as an artificial language that helps us do calculations and deal with the complexities of the universe, but our brains are not mathematical at all. While I agree with the principle of naturalism in this regard (ie. physicalism), and I kinda agree with the hypercomputing argument (which these days amounts to quantum computing), until this:

            “So there’s a limit to how much detail a simulation of an actual brain would need”

            I’m not sure we can make statements this certain. It might just be *exactly* the number of levels of information down to the atomic (or even sub-atomic) level that is required for consciousness. We don’t know, because we’re so very, very far away from it at the moment.

            “However, there is the possibility that humans are a fluke.”

            Exactly! And this is my current thinking, something akin to a pattern recognising machine trapped in an echo chamber, or some form of feedback loop that creates this concept we call consciousness.

            You might be right that getting there might not require too much information or simulation; however, I posit that the tools we use and the models we create are red herrings in getting there.

          • http://patheos.com/blogs/hallq/ Chris Hallquist

            Google seems to have much more ambitious goals than the applications you currently see, FYI. I’ve had a very partial draft post sitting in my drafts for a while; I really need to push that out the door.

            Re: hypercomputation, note that what people talk about today when they talk about quantum computing is mostly not hypercomputation. Quantum computing, as normally conceived, is still limited to the domain of what Turing machines can do; the idea is just to do those things much faster.

          • Alexander Johannesen

            Re: Google: Like, what? I’m genuinely interested. You mean an AI that goes dramatically beyond a fully automated car, or a complex Siri, or full-graph searching, or intelligent semantic traversal of graphs/models/whatever? Something akin to deep AI or the making of a mind? I trust Google more than the average punter, so I would be thrilled if Google has turned to serious AI in recent years (I got out of the industry about 5 years ago).

            Re: hypercomputation, well, for normal qubit quantum computing, maybe, but then I don’t think hypercomputation is well-defined, as no one seems to agree on what it actually is supposed to be able to do. And then, there’s a lot more to quantum computing than adiabatic models; we’re simply in the early, early, early stages. The drive in quantum computing is to get to a place where we can make a real computer that uses aspects of quantum physics to go beyond Turing machines. But time will tell.

          • hf

            Didn’t realize you meant the fools who promise AGI in N years.

            Perhaps I’ll pretend you meant the academically respectable version, and point out that the paper’s main author studies theoretical computer science (quantum and all).

  • Pyrrhus

    Oh it’s this guy again…

    I’ve listened to a few skeptical podcast episodes about the singularity where they interview AI experts and the like, and so far not a single one has shared the MIRI viewpoint.

    I’m still waiting for Luke to discuss Roko’s basilisk. Anyone who is not familiar with it should google it. It will be worth your time.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      Can you point me to the episodes you’re referring to?

