Will human-level AI be created this century? Nobody really knows

That’s the take-away from MIRI executive director Luke Muehlhauser’s recent blog post, “When Will AI Be Created?” He writes:

We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.

How confident is “confident”? Let’s say 70%. That is, I think it is unreasonable to be 70% confident that AI is fewer than 30 years away, and I also think it’s unreasonable to be 70% confident that AI is more than 100 years away.

Luke has made similar comments before. In his Reddit AMA, he said he has “a pretty wide probability distribution over the year for the first creation of superhuman AI,” and in Intelligence Explosion: Evidence and Import he and his coauthor Anna Salamon write:

So, when will we create AI? Any predictions on the matter must have wide error bars. Given the history of confident false predictions about AI (Crevier 1993), and AI’s potential speed bumps, it seems misguided to be 90% confident that AI will succeed in the coming century. But 90% confidence that AI will not arrive before the end of the century also seems wrong, given that: (a) many difficult AI breakthroughs have now been made (including the Gödel machine and AIXI), (b) several factors, such as automated science and first-mover incentives, may well accelerate progress toward AI, and (c) whole brain emulation seems to be possible and have a more predictable development than de novo AI. Thus, we think there is a significant probability that AI will be created this century. This claim is not scientific—the field of technological forecasting is not yet advanced enough for that—but we believe our claim is reasonable.

This might surprise some people. Given MIRI’s mission, they might expect it to be led by people who claim to be certain human-level AI will be created this century, if not sooner. But that’s thinking about the issue the wrong way. While it might be hard to predict the details of how human-level AI would play out, there are good reasons to think that regardless of the details, the size of the impact would be huge. And given that, plus even a 10% chance of human-level AI being developed this century, it seems like we should be putting some effort into preparing for it.

It seems like people have a really hard time dealing with this kind of uncertainty. They want to round all probabilities up to one or down to zero. On the “down to zero” side, here’s Luke again, from his post “Overconfident Pessimism”:

I am blessed to occasionally chat with some of the smartest scientists in the world, especially in computer science. They generally don’t make confident predictions that certain specific, difficult, insight-based technologies will be developed soon. And yet, immediately after agreeing with me that “the future is very hard to predict,” they will confidently state that a specific, difficult technology is more than 50 years away!

Side note: I know a lot of people don’t like MIRI (as was demonstrated in the comments on Luke’s guest post). On the “should you donate to MIRI” question, I’ll say this: they are, unfortunately, one of the very few organizations working on these issues, the other main one being the Future of Humanity Institute, with which they frequently partner; in fact, leaders of both organizations have said their work is complementary.

That means that if you think what they’re doing is helping at all, there’s a pretty good case for donating to them, given the importance of the issues. On the other hand, you might concede that we should be working on these issues but think their particular approach is so unlikely to be effective that you’re not going to donate, even though you wish there were a better organization to donate to.

That suggests there might be advantages to having a plurality of organizations working on dealing with the impact of AI. On the one hand, if you think you know the best approach to that issue, it makes sense to put all your money on the best approach (taking diminishing returns into account). On the other hand, if there’s unresolvable disagreement about what approach will work best, it might help to have several organizations around with different approaches, so people can donate in accordance with their opinions on which approach is best.
