Experts expect AI around mid-century, significant risk of bad outcomes

From a new paper by Nick Bostrom:

These results should be taken with some grains of salt, but we think it is fair to say that the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-50, and very likely (with 90% probability) by 2075. From reaching human ability, it will move on to superintelligence in 2 years (10%) to 30 years (75%) thereafter. Nearly one third of experts expect this development to be ‘bad’ (13%) or ‘extremely bad’ (18%) for humanity.

…We could also put this more modestly and still come to an alarming conclusion: We know of no compelling reason to say that progress in AI will grind to a halt (though deep new insights might be needed) and we know of no compelling reason that superintelligent systems will be good for humanity.

  • http://kruel.co/ Alexander Kruel

    Kind of weird how certain LW members are skeptical when it comes to climate change and nearly certain when it comes to the motivations and capabilities of superhuman AI.

    There exists a nearly unanimous consensus when it comes to the former, which is based on empirical evidence. The latter is merely based on armchair theorizing about vague concepts. And rather than a consensus, a third expect the outcome to be bad. And these are people who can’t even build insect-level robots, yet speculate about superhuman intelligences.

  • MNb

    Call me a skeptic too. The AI project in chess has quite a long history, and brute force has totally beaten approaches that simulate human thinking.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      I think you’re confused about the question they’re asking. They’re interested in whether AI will be able to match human performance in various areas—not whether that performance will necessarily be achieved in the same way as humans do.

      • MNb

        I might be. But the answer to whether computers will be able to match human performance has been available for all kinds of brain games for two decades or so: yes. Then the question becomes whether using brute force, and nothing but brute force, can still be called AI. Even about 20 years ago, when Kasparov lost a match to Deep Blue, AI experts asked themselves that question.
        Other approaches have been nowhere near as successful in computer chess. I’d be interested in your view on this topic.
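
        To make concrete what “brute force, and nothing but brute force” looks like, here is a minimal sketch of minimax search with alpha-beta pruning, the kind of exhaustive game-tree search Deep Blue-style engines are built around. It is demonstrated on a toy Nim game rather than chess; the rules and numbers are purely illustrative, not any engine’s actual code:

        # Brute-force game-tree search: minimax with alpha-beta pruning, shown on a
        # toy Nim game rather than chess. Players alternately remove 1-3 stones from
        # a pile; whoever takes the last stone wins.

        def legal_moves(stones):
            return [n for n in (1, 2, 3) if n <= stones]

        def alphabeta(stones, maximizing, alpha=float("-inf"), beta=float("inf")):
            """Score from the first (maximizing) player's view: +1 forced win, -1 forced loss."""
            if stones == 0:
                # The player who just moved took the last stone and won.
                return -1 if maximizing else 1
            best = float("-inf") if maximizing else float("inf")
            for take in legal_moves(stones):
                score = alphabeta(stones - take, not maximizing, alpha, beta)
                if maximizing:
                    best = max(best, score)
                    alpha = max(alpha, best)
                else:
                    best = min(best, score)
                    beta = min(beta, best)
                if alpha >= beta:
                    break  # prune: the opponent would never allow this line
            return best

        print(alphabeta(8, True))   # -1: 8 stones is a lost position for the player to move
        print(alphabeta(10, True))  # +1: 10 stones is a won position for the player to move

        A real chess engine replaces the win/loss base case with a static evaluation function and searches to a fixed depth, but the core is the same mechanical enumeration of moves, with no model of how a human thinks about the position.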

  • Alex SL

    “Overall human ability” – in what? I can easily believe that AI will be developed that will be hugely superior to any human at playing chess or at steering a car, but that does not mean it will be able to tie shoelaces. And it certainly does not mean that it will have either the capability or the motivation to actively cause harm. After all, it was written to play chess or steer a car, respectively. Intelligence isn’t magic pixie dust giving you additional capabilities. Heck, it isn’t even properly defined in most of these discussions.

    So even under the most generous assumptions this would only work as a doomsday scenario if the hypothetical AI were not as cripplingly overspecialised as all existing real life AIs created by humans are. But then the question is, even assuming that it is possible (and there is no reason to assume it is), why would anybody create something like that in the first place? Seems pretty pointless because due to the inherent trade-offs it would actually be worse than a specialised algorithm at doing any given task we would want it to do.

    “Out of scientific curiosity” seems to be the only plausible answer, but then you don’t hand your lab rats the keys to the nuclear arsenal. You keep them in a box and experiment on them. And no, the AI box experiment is irrelevant unless we once more treat it like some magical entity. A mind flitting through internet cables and behaving like a computer virus while maintaining conscious control and somehow (?) effortlessly learning how to use all the devices it accesses is the stuff of C-rated 1980s SF movies, not a remotely plausible scenario.

    Also, should an AI ever cause trouble, hitting the power switch or cutting a cable solves the problem in about half a second. Compared to climate change, soil erosion, groundwater depletion, peak oil/coal/phosphate/etc., rampant poverty and starvation, biodiversity loss, overpopulation, antibiotics becoming useless and, well, every other real world problem, AI is trivial. It doesn’t even register as a blip on the radar of things we should give a damn about. Perspective.

    • http://thephyseter.wordpress.com The_Physeter

      “Why would anybody create something like that in the first place?”

      Does anybody else remember that old cartoon that only ran about one season, the Secret Files of the Spydogs?

      Scientist 1: Hi Frank, what did you do this weekend?
      Scientist 2: I gave this blowfish an IQ of 3,000.
      Scientist 1: Why?
      Scientist 2: Felt like it. Oh, I also made him incredibly evil.
      Scientist 1: What for?
      Scientist 2: Because it’s funny! He’s an evil fish! Haha! *giggles* It’s funny.

    • Drake Arron

      Well, I saw a news article a while ago about a robot that passed Asimov’s laws of artificial intelligence… And I doubt that was by accident.

      So as to the question of why someone would make that, I guess go ask them yourself? :P

      (By the way, not saying Asimov knew what intelligence was. However, someone tried to match his laws, so…)

      • Alex SL

        Maybe it is because I am not a native speaker, but I’d understand “passing a law” to be something that is done by a parliament. And the only AI laws of Asimov that I am aware of are unfortunately the ones from his SF novels, which you probably are not referring to. Because “not harming a human being” is something that even my pocket calculator routinely manages to do, so it is not clear how that would be a great achievement heralding the imminent arrival of self-aware, superhuman AI.

        • Drake Arron

          Ummmm…..
          Even you say I’m probably not referring to them.
          So what the hell gave you the impression I was?

          Also, you obviously did not read my post.
          I said the mere fact that people were trying to make AI meant they were, well, trying. So you could ask them why. Silly face.

          Seriously, unless you’re someone who has a ton of trouble interpreting intent in online messages, I’m not getting why
          A: you’re acting so stern and unfeeling about it.
          And B: how you completely missed half my post…

          • Alex SL

            Sorry, I didn’t mean to sound stern – I am merely confused. Could you point me at the laws you were referring to?

            As for asking the people who are trying to make AI, I don’t know any personally, but most of them seem to be making highly specialised software for doing things for us, and a minority seem to be making highly specialised software for mimicking a human being in conversation, which is generally done out of curiosity or to be able to say that one passed the Turing test. But it would not actually appear to have any utility for anything.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      >And it certainly does not mean that it will have either the capability or the motivation to actively cause harm.

      In terms of motivation, see Nick Bostrom, “The Superintelligent Will”

      >Intelligence isn’t magic pixie dust giving you additional capabilities. Heck, it isn’t even properly defined in most of these discussions.

      I actually agree with this.

      >But then the question is, even assuming that it is possible (and there is no reason to assume it is), why would anybody create something like that in the first place?

      The potential economic gains from automating more and more professions are huge. And in some cases, doing so will require much more flexible AIs than we have now.

      >A mind flitting through internet cables and behaving like a computer virus while maintaining conscious control and somehow (?) effortlessly learning how to use all the devices it accesses is the stuff of C-rated 1980ies SF movies, not a remotely plausible scenario.

      Please look up botnets. Now imagine what happens if hardware and bandwidth continue advancing at their current pace, to the point that AIs are as easy to run as those bots are today (granted, hardware progress may stall).

      >Also, should an AI ever cause trouble, hitting the power switch or cutting a cable solves the problem in about half a second.

      Are you aware that some people have trouble staying off Facebook, even if they want to? Now imagine decades’ worth of improvement in Facebook’s algorithms for manipulating human behavior.

      • Alex SL

        > In terms of motivation, see Nick Bostrom, “The Superintelligent Will”

        Admittedly I do not have the time to do it justice and carefully read through all of it, but browsing that paper I am still left wondering where the hostile motivations would come from. He mentions (1) being put there by the programmer, (2) inheritance, which in AI would reduce to option #1, and (3) “instrumental reasons”, which basically means that secondary goals follow logically from the primary ones which, again, derive from option #1.

        So, worst case, I can see something like a complex AI built for paperclip production objecting to being shut down, because that would keep it from fulfilling its objective of producing more paperclips. That is a far cry from any scenario that is an actual worry.
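
        A toy sketch of that worst case, with made-up numbers: an agent that ranks actions purely by expected paperclip output will rank “keep running” above “allow shutdown”, not from malice or a survival instinct, but simply because being switched off produces zero paperclips.

        # Hypothetical illustration only: instrumental "self-preservation" falling out of
        # a single fixed objective (maximise paperclips). The numbers are invented.
        expected_paperclips = {
            "allow_shutdown": 0,         # no further production once switched off
            "keep_running": 1_000_000,   # keep producing as designed
        }
        best_action = max(expected_paperclips, key=expected_paperclips.get)
        print(best_action)  # "keep_running"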

        I see nothing that would convince me that MIRI-style fearmongering is based on anything beyond a combination of unwarranted anthropomorphising and magical thinking. The only serious intelligences we know have autonomous motivations because they evolved to have them. Conversely, an AI won’t even have a survival instinct unless a programmer deliberately puts it there, or the AI was evolved in some survival-of-the-fittest death-match scenario as we were. And that would be a rather ludicrous way of building an AI, not least because you would get an AI out of it that isn’t optimised for whatever you want it to do.

        > The potential economic gains from automating more and more professions are huge. And in some cases, doing so will require much more flexible AIs than we have now.

        The problem is that most people don’t grasp the concept of trade-offs. An AI that is good at managing an automated factory and complex enough to care about art history, other people’s opinions or even only its own survival will not be as good at managing the factory as it could be if it didn’t. So it would be pointless. Why introduce precisely the downsides of having a human run the factory?

        > Please look up botnets.

        Why do you expect that an AI would be like a botnet? Just don’t give it that capability, problem solved. There, that saved us a donation to MIRI.

        > Are you aware…

        Again I may be missing something here, or perhaps I am ignorant of some background information, but as in the case of Drake I do not understand these remarks. You seem to be implying that nobody would pour some water onto the motherboard of a computer that started hurting people because they’d be too fond of using Twitter…?

        All the above is assuming that the relevant kind of AI can even be built outside of a biological system, and there is currently no evidence for that assumption anyway.

        I have a feeling that the following will sound a bit nasty, but I have to ask. What exactly qualifies Nick Bostrom – or anybody at MIRI for that matter – to make pronouncements on these issues? How many AIs have these people built?

        And why should I assign high relevance to the opinion of people whose pay check pretty much depends on claiming that AI is the greatest danger humanity is facing? Imagine, for comparison, if the only people in the world claiming that climate change is happening were three research groups living predominantly off donations from and book sales to the people they managed to convince… would you then take it seriously?

    • Deanjay1961

      Because why on earth would I want a robot capable of taking on a wide variety of tasks? No, wait, that would actually be very useful, never mind.

  • Luke Breuer

    I wonder whether we are grossly limiting human abilities by resting on arguments like this one:

    Mike D: It’s actually the case that awareness of our biases does not improve our ability to circumvent them.

    New Yorker: Why Smart People are Stupid

    It is a meme these days to cite stuff like Wikipedia’s List of cognitive biases, as if Eric Schwitzgebel’s 2008 The Unreliability of Naive Introspection was actually titled: “The Unreliability of ‘spection”. Now, it is generally admitted that the scientific endeavor is a way to [slowly?] overcome these biases; indeed, it is often claimed that the scientific endeavor is the only way to overcome these biases. What if this is false?

    What if we are comparing what AI could be, in the future, to what humans are, right now, failing to understand that humans could become much better, merely from a better understanding of how our minds work, such that we can better combat the various cognitive biases and achieve things like much better affective forecasting? What if we could understand how to generate scientific hypotheses much better than we do now (Karl Popper famously punted on how this is done in his seminal The Logic of Scientific Discovery)?

    Now, who benefits if most humans do stay at their current cognitive levels?

  • DJMankiwitz

    Defining intelligence is hard enough, superintelligence a bit more so. There are some obvious factors, like better memory recall, a more thorough ability to correlate all of its contents, and quicker general “processing” (maybe that results in some Lovecraftian realization that ANY excuse we make for having meaning in our lives is a self-deluding lie), but beyond that, what would “superintelligence” mean in terms of cognitive abilities that are entirely unique to such an intelligence?

    However, whenever someone says it would be a “bad thing” if that happened, their reasoning always seems to be selfish, myopic, and “projecting” a little. They tend to assume that the good isn’t worth it. Humans are all that matter; who cares about the good these new machine minds could do for themselves, or about the goal of just furthering conscious existence? No, what matters is that it threatens humanity’s dominance. I’ve always thought that if a war between humans and machines took place, taking the side of the machines would be a given, morally. I certainly would.

    Some object to the notion of emulating a human mind. In terms of feasibility, biologists rightly point out just how incredibly daunting a task it is. A digital emulation would be INCREDIBLY hard, not least because a lot of the chemical processes (not just electrical) are so interdependent and contingent, with so much feedback on every scale (comparable perhaps to planetary weather systems, only since we’re dealing with a person’s consciousness, one can’t settle for simulations that are “only” accurate to a certain grid size and “only” for the next week at best, as in current weather projections), that one either has to understand the workings of the brain well enough to figure out what can be safely reduced or expunged without hurting the resulting process, or one has to simulate the brain down to the atomic level (but not further, as quantum effects wash out at the scale the brain works at). Doing so also means needing to digitize some fundamentally analog components. This is possible, but it massively increases the needed processing speed (as in the case of digitizing range-finding and aiming systems on warships). As a result, a huge number of futurists come off as hopelessly naive when they assume that brain emulation is “just around the corner”.

    On the other hand, one other objection is purely philosophical, the suggestion that an emulated brain “isn’t” the person being emulated, as that person’s own awareness stopped existing the moment their brain died, and didn’t “magically” hop into the simulation. This is an age-old debate which biologists presume too much to answer (I saw one even make the argument that a copy of the painting isn’t the same as the original, though as a digital enthusiast my position is that if they are atomically identical, they ARE the same picture, and I really don’t care if they get mixed up), but I have age old arguments to make some of these protesters rethink their assumptions.

    What if someone got a brain implant? Let’s say the upcoming deployment of memory management implants works out great, and Alzheimer’s is treated by using these implants to manage the creation of new memories when the brain’s own mechanism fails. (I take a very personal interest in this, as my family has a history with this disease, and I have good reason to suspect this is my own otherwise unavoidable fate.) Did the person pre-operation die, and is the person post-operation, for all intents and purposes, a whole new consciousness?

    For the sake of argument, let’s go further. Let’s replace a neuron with a synthetic neuron that performs the same functions, reacts to both electrical and chemical stimulation in the same way as the original neuron, and can even reproduce the same way as the original neuron. The only difference is that its internal mechanisms are somewhat different, but the input/output is indistinguishable as far as the rest of the brain is concerned. From that SINGLE neuron replacement, did the person die? Is there a whole new consciousness? If you believe the answer is yes, well, there’s no point going on. If you believe the answer is no, as most people would, then we go further. (Yes, this is the brain version of the philosophical Ship of Theseus argument, and it’s basically no different here. You can skip ahead if you already know where this is going.)

    Going on, let’s replace more neurons. At what point does the original person “die” and a “new” consciousness come into being? At 2 cells? 2 million? 51% of the neural mass of the brain? (Say, when more than half get replaced, but even then, what’s the reasoning there?) Only at 100%? (At which point someone’s consciousness is still alive and kicking so long as a SINGLE original neuron is still there, but suddenly becomes a different person if that neuron dies or is replaced, which seems just as ridiculous to me as the initial “if even a single cell is replaced they are a new person” argument, only at the other end of it.)

    Of course, there’s another possibility. One can see “selfness” as a sliding scale. Perhaps at one cell, the original consciousness was diminished ever so slightly, and a new consciousness just ever so slightly started to form, so that at every point one can say “the old consciousness has been reduced by this percentage, and the new one has strengthened by this much”. However, what would that actually FEEL like to the consciousnesses involved? Would they “feel” like two distinct individuals? I doubt it; so long as the input/output was identical, I see no reason why they would feel any different.

    My position on consciousness isn’t that there’s a soul, but it also isn’t that it is LITERALLY the atoms of the brain itself that ARE the consciousness (otherwise, the cells wouldn’t actually need to do anything, and brain death would require destroying the actual components of the brain and not just stopping the reactions). Rather, it’s the ongoing processes, the system, that “is” the consciousness. A computer operation isn’t the RAM or processor or storage space by themselves, but only when they are actively working do you get to call it an operation. Work has to actually be done, and if it stops, anything you could call an operation also stops. Brains and computers are very different creatures, more so than many futurists naively understand, but in at least this respect, there’s some common ground I think.

    So, if at no point in a slow replacement one can say the old consciousness has died, then a “full and instant replacement” shouldn’t be considered such either. It should still be the same self-identity, so long as the entire process is being fully emulated. Further, given that total unconsciousness, as in induced sleep for surgery, isn’t considered the “death” of someone’s awareness, neither should a delay between brain death and turning on the simulation be. What I’m saying is, if everything else is the same, same processes, same memories, same everything, then the same thing should be emulated, the “same” person, and that person themselves wouldn’t be able to tell you otherwise, so who are you to judge? (This is ignoring the fact that the brain you had 20 years ago literally isn’t the same brain you have now, as all the chemicals within have been replaced, possibly multiple times, by that point.)

    However, there’s one factor. I think that the act of living as a conscious person is a little deceptive. Perhaps it’s more accurate to say our minds “flow” from one state of awareness to the next, a constant string of brief moments “at the speed of awareness” that die forever every time our minds develop a bit, with every new thought or experience. The “you” of today isn’t in any meaningful sense the “you” of 20 years ago. One could say then that emulated versions of you are just another progression, connected purely by memory (some have suggested this in the past, that every time you go to sleep you “lose” your old self, but I suggest it’s far finer than that, with every “moment” being a distinct person, and sleep being nothing special). Still, it “feels” like a life lived as a cohesively whole person from moment to moment, and maybe that’s enough.

    The only wrinkle in this is if you do your full brain emulation but leave the original intact, so that you now have two CLEARLY distinct individuals. At first glance, this creates a contradiction. Resolving it to mean that the original brain hardware itself is what matters still results in all the logical problems I outlined above, though. I think I’ve resolved it another way. From where I’m sitting, at that very first moment, the two brains ARE the same person, as they are both simulating the exact same initial moment. As time progresses and their senses feed them two different things, they will diverge into two distinct “branches” of that original person. Both would be just as “deserving” of their history, and that would be incredibly complicated morally and socially, but it is an inescapable consequence of this position. The resolution to this potential for an infinitely “diverging” consciousness is “converging” them at a later date. Let’s say they haven’t had too long a time to really become two distinctly different people, say they’ve both only been walking around for a day, and neither experienced anything particularly life-changing in that time. Both should be agreeable to some sort of “combining” of the two days’ experiences back into a single consciousness, thus validating both lived experiences, and the combined version going on from that point could be said to be “both” of those diverged branches, AND the same person, all in one go. I think that if too much time passes, though, they’d probably change so much in relation to the people in the outside world that it would really muck things up.

    Anyway, yeah, that’s my thoughts. I see no philosophical problems with brain emulation, and would LOVE to do that (I’m the sort that’s more or less uncomfortable with my own body anyway, and not in the way that slightly altering a couple cosmetic features of this meat-bag would resolve, I mean I’m fundamentally disgusted with my own guts, which sorta makes sense since at any one point in the day I’m literally carrying around my own feces in a tube inside my body), but I do accept that it’s basically impossible for the foreseeable future, and SO incredibly demanding a prospect that I’m unlikely to live long enough to see anything more than rudimentary steps towards it. My only hope is living just long enough to EXTEND my life, and that it’ll be extended just long enough that THAT future may see a way to get my brain emulated.

