What should skeptics believe about the singularity?

This is a guest post from Luke Muehlhauser, author of Common Sense Atheism and Worldview Naturalism, and Executive Director of MIRI. Luke does not intend to persuade skeptics that they should believe everything he does about the technological singularity. Rather, he aims to lay out the issues clearly so that skeptics can apply the tools of skepticism to a variety of claims associated with “the singularity” and come to their own conclusions.

 

Does God exist? Nope.

Does homeopathy work? Nope.

Are ghosts real? Nope.

In a way, these questions are too easy. It is important to get the right answers about the standard punching bags of scientific skepticism, because popular delusions do great harm, and they waste time and money. But once you’ve figured out the basics — science trumps intuition, magic isn’t real, things are made of atoms, etc. — then you might want to apply your critical thinking skills to some more challenging questions.

Anthropogenic global warming (AGW) is a good case study. Most skeptics now accept AGW, but it wasn’t always so: see Michael Shermer’s story about his flip from AGW skeptic to activist. The argument in favor of AGW is, one must admit, more complicated than the argument against homeopathy.

And what about those predictions of what Earth’s climate will look like 3-4 decades from now? Do we have good reasons to think we can predict such things — reasons that hold up to a scientific, skeptical analysis?

What about the Search for Extraterrestrial Intelligence (SETI)? Does it count as science if we haven’t heard from anyone and we’re not sure what we’re looking for? And, can we make any predictions about what would happen if we did make contact? Maybe SETI is a bad idea, because alien civilizations with radio technology are probably much more advanced than we are, and probably don’t share our weird, human-specific values. Is there any way to think rationally about these kinds of questions, or is one person’s guess as good as another’s?

Thankfully, skeptics have discussed subjects like global warming and SETI. See, for example, Massimo Pigliucci on global warming and CSI’s Peter Schenkel on SETI.

But my purpose isn’t to launch new debates about global warming or SETI. I’m not an expert on either one.

I do, however, know a thing or two about another hard, fairly complicated problem ripe for skeptical analysis: the “technological singularity.”

In fact, the singularity issue has much in common with global warming and SETI. With global warming it shares the challenge of predicting what could happen decades in the future, and with SETI it shares the challenge of reasoning about what non-human minds would want, and what could happen if we encounter them.

How can we reason skeptically about the singularity? We can’t just assume the singularity will come because of Moore’s law, and we can’t just dismiss the singularity because it sounds like a “rapture of the nerds.” Instead, we need to (1) replace the ambiguous term “singularity” with a list of specific, testable claims, and then (2) examine the evidence for each of those claims in turn.

(If we were discussing AGW or SETI, we’d have to do the same thing. Is the Earth warming? Is human activity contributing? Can we predict what the climate will look like 50 years into the future? Which interventions would make the biggest difference? Could SETI plausibly detect aliens? Can we predict anything about the level of those aliens’ technological development? Can we predict anything about what goals they’re likely to have? These questions involve a variety of claims, and we’d need to examine the evidence for each claim in turn.)

A long book would be required to do the whole subject justice, but for now I can link interested readers to some related resources for several different singularity-related propositions:

  1. The Law of Accelerating Returns. In 2001, Ray Kurzweil wrote: “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially…” Is this true? Starting points include Béla Nagy et al. (2010) and Scott Alexander. (A small numerical sketch of the arithmetic behind this claim follows the list.)
  2. Feasibility of AGI. Artificial General Intelligence (AGI, aka Strong AI or human-level AI) refers not to systems that excel merely at narrow tasks like arithmetic or chess, but to “systems which match or exceed the cognitive performance of humans in virtually all domains of interest” (and which are not whole brain emulations; see below). One singularity-related claim is simply that AGI is technologically feasible. To investigate, start with Wikipedia’s article on Strong AI.
  3. AGI timelines optimism. Sometimes, people claim not just that AGI is feasible, but that humans are likely to build it relatively soon: say, within the next 50 years. Let’s call that “AGI timelines optimism.” Is this claim justified? Some good starting points are Armstrong & Sotala (2012) and Muehlhauser & Salamon (2013).
  4. Feasibility of whole brain emulation. A whole brain emulation (WBE) would be a functional computer simulation of an entire human brain. Another singularity-related claim is simply that WBE is technologically feasible. Here, you might as well start with Wikipedia.
  5. Feasibility of indefinite survival. Some claim not just that WBE is feasible, but that WBE will enable minds to make backup copies of themselves, allowing them to survive indefinitely — until the heat death of the universe approaches, or at least for billions of years. Here, Sandberg & Armstrong (2012) is a starting point.
  6. WBE timelines optimism. Some forecasters think not just that WBE is feasible but also that we are likely to build it relatively soon: say, in the next 50 years. Let’s call that “WBE timelines optimism.” To examine this claim, you can start with Sandberg & Bostrom (2012).
  7. Feasibility of superintelligence. For our purposes, let’s define “superintelligence” as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” (either AGI or WBE). Another singularity-related claim is that superintelligence is technologically feasible. Chalmers (2010) has a good preliminary analysis (he refers to superintelligence as “AI++”).
  8. Intelligence explosion. My description: “Once human programmers build an AI with a better-than-human capacity for AI design, [its] goal for self-improvement may motivate a positive feedback loop of self-enhancement. Now when the machine intelligence improves itself, it improves the intelligence that does the improving. Thus… [a] population of greater-than-human machine intelligences may be able to create a… cascade of self-improvement cycles, enabling a… transition to machine superintelligence.” Once again, Chalmers (2010) is a good starting point.
  9. Slow takeoff vs. fast takeoff. Some AI theorists predict that if an intelligence explosion occurs, the transition from human-level machine intelligence to machine superintelligence will occur in years or decades. This would be a “slow takeoff.” Others predict the transition would happen in mere hours, days, or months: a “fast takeoff.” A good place to start on this subject is Yudkowsky (2013).
  10. Doom by default. Some theorists predict that the default outcome of an intelligence explosion is doom for the human species, since superintelligences will have goals at least somewhat different from our own, and will have more power to steer the future than biological humans will. Is that likely? A good place to start is Muehlhauser & Helm (2013).
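
To make claim 1 concrete, here is a minimal sketch of the arithmetic. The assumptions are mine, chosen only for illustration: “progress” is measured in years-of-progress-at-today’s-rate, and the rate of progress doubles every decade (roughly Kurzweil’s own figure; his quoted “20,000 years” rests on somewhat different parameters).

```python
import math

def cumulative_progress(horizon_years: float, doubling_years: float = 10.0) -> float:
    """Total progress over horizon_years, in units of today's annual rate,
    assuming the rate of progress doubles every doubling_years.
    (This is the integral of 2**(t / doubling_years) dt from 0 to horizon_years.)"""
    k = math.log(2) / doubling_years          # continuous growth rate
    return (math.exp(k * horizon_years) - 1) / k

print(round(cumulative_progress(100)))   # about 14,760 "today-rate" years in one century
# A linear extrapolation of today's rate would give exactly 100 such years.
```

Whether technology actually follows such a curve, rather than a succession of saturating S-curves, is exactly what the critiques cited under claim 1 dispute.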

Also note that later this year, Oxford University Press will be publishing a scholarly monograph on intelligence explosion and machine superintelligence, written by Oxford philosopher Nick Bostrom. For now, you can hear an overview of Bostrom’s views on the subject in this video interview.

A more accessible book on the subject, also forthcoming this year, is James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era.

  • Koray

    As a skeptic, I am completely ignorant of the Singularity. It’s all vapor, much like AI research in general, or String Theory. Decades and decades of intellectual self-gratification with absolutely no tangible results.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      First, is “ignorant” the word you’re really looking for there? If you’re ignorant, why are you pronouncing on it so confidently?

      Second, you’d have to have a pretty funny definition of “AI” to claim we’ve gotten no tangible results there. Do you not consider machine translation, voice recognition, facial recognition, and driverless cars to be “tangible results”? We’re a ways away from completely replacing humans in any of those areas (we may be closest on driverless cars), but there’s certainly been enough progress in all those areas that they’re starting to see commercial applications.

      • Koray

        Of course I can pronounce my ignorance. From Luke’s post itself: “Instead, we need to (1) replace the ambiguous term ‘singularity’…” I think I may actually remain ignorant for a long time, at least until people stop expressing doubts about basic terminology.

        I don’t think my definition of AI is funny. What I consider AI is what is usually referred to as strong AI. All other “activities” involve so much fine-tuning by humans (I’ve seen this personally) that I don’t see how they deserve the label “artificial” at all. What is worse is that they carry the undesirable properties of both human-designed algorithms and strong AI.

        The bad thing about an algorithm is that somebody has to design it, which is work and error prone (1). The good thing is that when it doesn’t work, we can fix it. The good thing about strong AI is that we don’t need to do any work. But, when it’s broken, we don’t know why and possibly can’t fix it (2). All the progress you mentioned suffers from both (1) and (2) [insert joke about Siri].

        Yes, there’s commercial success because commerce already is based on delivering value with some statistical confidence: the CPUs you buy from Intel work out of the box less than 100% of the time. You can return a defective CPU for a refund. So, you can make money as long as only a fraction of your customers complain and want refunds (and with AI, who knows whether they MAY do that at all; check the liability section of your software license or terms of agreement).

        • Alexander Johannesen

          I agree, and I worked in strong AI for a long time, tweaking neural nets and processes to mimic humans to the point where it’s not so much an AI as it is a human simulator. But maybe that’s the best we can do, and, just possibly, the only thing we can do?

          • Louis Burke

            What is a human simulator, and how is it distinct in function from an artificial intelligence? Also, what do you mean you worked in strong AI? Since when are neural nets anything more than, at best, good cognitive models and powerful machine learning techniques?

            My two cents on the lack-of-progress question is that it seems like researchers believe they have it sewn up on the theory-of-mind and cognition side of things. This can be seen in the overlap between AGI researchers and the misguided belief in mind uploading. To be clear, I mean uploading in the sense of being able to transfer subjective experience to raw computational substrates, which is not even wrong.

          • Alexander Johannesen

            Hiya,

            “What is a human simulator and how is it distinct in function from an artificial intelligence?”

            The answer to your question is basically this: a simulator can fake intelligence by looking up responses to stimuli in enough lookup tables to fool us into thinking this is intelligence (which basically means it passes some Turing Test), as opposed to an AI that has learnt what the stimuli mean and created these responses itself without human interference or intervention.
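
            Purely as an illustrative sketch (the names and behaviour below are mine, made up, and not any real system), the contrast is roughly between canned lookup and learned association:

            ```python
            from collections import Counter, defaultdict

            class Simulator:
                """Fakes intelligence: every response is hand-authored by a human."""
                CANNED = {"hello": "Hi there!", "how are you": "Fine, thanks."}

                def respond(self, stimulus: str) -> str:
                    return self.CANNED.get(stimulus.lower(), "I don't understand.")

            class Learner:
                """Builds its own stimulus -> response mapping from observed dialogue."""
                def __init__(self):
                    self.counts = defaultdict(Counter)

                def observe(self, stimulus: str, response: str) -> None:
                    self.counts[stimulus.lower()][response] += 1   # learned, not hand-authored

                def respond(self, stimulus: str) -> str:
                    seen = self.counts.get(stimulus.lower())
                    return seen.most_common(1)[0][0] if seen else "I don't know yet."
            ```

            Whether the second kind of system ever deserves the word “intelligence” rather than “simulator” is, of course, exactly the question under dispute.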

            “What do you mean worked in strong AI?”

            I used to create systems for high-security applications (military protection, nuclear security systems, mass-object tracking, distributed traffic and transport systems, etc.), using different kinds of neural nets and cumulative signal processing and radial loops and whatnot, in order to create systems that could recognize patterns that we humans couldn’t teach them.

            “Since when are neural nets anything more than [...]”

            Neural nets are so diverse as to make your question a bit redundant, but we can go further and ask: since when is AI anything more than [...]?

            We’ve just passed the threshold of having an AI beat humans at chess … an 8×8 playing field with a limited set of dogmatic rules. And most people think this is an achievement of some magnitude. *sigh*

          • Louis Burke

            Everything you’ve listed there could be described as narrow AI applications. Have you done any cognitive science or cognitive modelling? That is to say, have you done any work on building minds?

          • Alexander Johannesen

            Narrow AI? That’s a nice cop-out of the argument, I guess. So, are you saying that because my experience is in creating AI that has a practical use, my argument isn’t valid?

            Narrow, wide, deep, augmented, broad … it doesn’t make much of a difference to the concept of AI. “Making minds” is more like code for academic work, and I don’t buy it. If you have experience with AI in which you can distinguish between system A and system B and proclaim one to be narrow and one to be deep, by what criteria do you do so? How can you tell the difference, apart from calling it so up front?

            (And, yes, I’ve made minds. How does that change anything?)

          • Louis Burke

            You mentioned strong AI first, buddy, and then proceeded to list things that aren’t strong AI technologies. Mimicking humans isn’t the goal of strong AI; it’s about building minds, which you’ve never done any meaningful work in, unless you’re omitting some experience?

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      See here: http://www.patheos.com/blogs/hallq/2013/04/tracking-progress-in-ai-is-really-hard/

      Progress in AI is hard to quantify, even harder to project into the future, but it is real.

    • http://deusdiapente.blogspot.com/ JQuinton

      Also, check out Richard Carrier’s recent post about it: http://freethoughtblogs.com/carrier/archives/3195

  • staircaseghost

    In what sense might one be an “expert”, as Luke claims to be, in something which is currently speculative and not amenable to empirical verification or falsification? One’s first thoughts are: by having a formal degree in computer science, engineering etc. or, in the alternative, having a track record of extra-academic accomplishment in these fields.

    Since he lacks this kind of verifiable expertise, what should I as a skeptic conclude about the likely truth of his employer’s assertions? Doesn’t it seem rather to be the case, given the incestuous nature of his citation habits, that he is an “expert” in “what other people at the Singularity Institute say about The Singularity”, a comparatively uninteresting anthropological expertise?

  • Alexander Johannesen

    The foremost problem with this stuff is the silly name: singularity, implying that the many become one (à la the Borg), which makes it very hard to take seriously.

    But beyond that, we as mere humans have no idea how anything other intelligent than us is going to react to and interact with its environment, and given the fragile nature of modern high-tech computer systems I’d say it’s highly unlikely that transgression between environments is going to be feasible; all we can point to is replication and serving of decidedly artificial stupidity, and tons of possibles and speculation. And who is to say that any AI even *can* transgress its original form? Intelligence doesn’t imply self, personhood, consciousness or even sensory bindings back to reality in ways that are useful for its purpose, even if code is written (by us or by AI itself) in order to *try* to do so. It’s pure speculation.

    I’ve worked in/with AI for over 20 years (and sometimes program AI for a living), and the stuff we’ve got today is more computationally intensive, for sure, and we’ve got systems that are better at passing stupid Turing tests, but as to smarts about how to actually create falsified neural nets or reflection or feasible memory footprints or self-training and Bayes-in-all-the-gaps and whatnot … I’m just very skeptical about all this wonderful feasibility, no matter how much I’d love to see it myself (trust me; I want to see it! Why else did I go into AI?). The more distributed systems of today are intriguing in this respect, for sure, but self-learning systems can only do things within the constraints of the systems we build, and that means the constraints of very limited systems. There’s a great deal of intelligence entropy which is linked to Moore’s Law, and that law alone! You can get systems that pass more and better Turing tests, but remember that the *test* reflects the constraints of the system you just passed, not some generic human AI constraints we might think of. Passing that threshold is, well, probably not very feasible, certainly not as feasible as some (*cough*Kurzweil*cough*) think.

    Sure, you say, as soon as QC comes along, the analog computer will prove us right! Have you programmed one lately? Or will you in the next 20 years? Feasible, sure. Probable, not so much. The lure of the qubit is there for all to understand, but again: there are constraints in the system, and just like in the Hitchhiker’s Guide you need to create more and more complex systems to have them move the technology forward, beyond mere human understanding of what that cycle of AI looks like, albeit with very strong ties right back to the Turing tests and the original hardware. When you lose control of that process, you still can’t call it a singularity. You have no idea what that is. But I’m very certain it won’t be whatever people thought it would be, good or bad.

    A few more points:

    1. In what sense has Ray Kurzweil, the synthesizer version of Steve Jobs, done an analysis of technology through history? By what criteria, beyond what I see as mere speculation and vapor? Anybody who actually studies this sort of stuff knows that technology happens in leaps and bounds, and not by any “common-sense ‘intuitive linear’ view”, which I suspect is a straw man if I ever saw one. The introduction of science has certainly jumped us very high indeed … but have we jumped as high as you think?

    2, 4, 5, 7. Feasibility is speculation.

    10. This is probably the silliest one, but by what means do we judge these things? That different is bad for us? How is this more than fear of the unknown?

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      Re: 10: Do you agree that turning the entire Earth into gray goo would be bad? Or paperclips, or computer chips being used to do nothing but calculate the digits of pi?

      • schmidtleb

        Is this seriously what you worry about? Living organisms have tried for more than three billion years to turn all available matter into copies of themselves. So far they haven’t succeeded.

      • Alexander Johannesen

        The problem isn’t that we can think of a million different ways in which things could be bad for the human species (thanks, Hollywood!), but that someone rates doom as more probable than, say, something neutral or positive. Again, I ask: by what means? It is pure and utter speculation about something we know absolutely nothing about.

        Here’s how I see it: we humans more or less dominate the planet, we evolved an intelligence that so far looks superior to all other species on this planet, and we have wreaked havoc on it for at least the last 500 years or so, and the shit we do is not fueled primarily by our intelligence, but more by our ignorance, greed and hunger. It is by virtue of our intelligence we’re trying to stop it, to try to set things right again for all species on the planet, and not just ourselves. If we can figure out these dependencies, seeing them as important rather than as a threat, and if we can have sympathy and/or empathy with other creatures we share the planet with, why is it then assumed that the outcome of something more intelligent than us should be bad for us? (Apart from the fact that most of humanity are pretty vile creatures with no interest in actually using their proclaimed intelligence *grumble*.) The necessity of something does not feed the line of keeping things alive except for ignorance or wilful acts that our modern societies have deemed wrong.

        Again, by what means do we judge what a future intelligent being might do? The only reason to fear an AI is if we successfully made a human replica and deliberately gave it some shitty values and goals, and frankly that will never happen in a machine, because it is never going to be human in any shape or form. It will be something different. We have no idea what it will be. No idea. None. Zilch. Nada. Nothing. Heck, we don’t even know what to call it, and even worse, we don’t even know when anything passes some threshold between “simulator” and “intelligence”, as there are no dogmatic constraints at all surrounding the concept of intelligence. (Just like an IQ test doesn’t say anything concrete about your mental abilities except how good you are at taking IQ tests.)

        Will humanity be wiped out by future AI? First, we need to define what the “A” means (remember the synthetic/analytic divide? Yeah, it’s still there leading philosophers down the garden path …), and then we need to define what the “I” means (and the debate rages on), and then we have to define what the two together, “AI,” mean (and here the debate gets even crazier), because right now – whether you go for machine learning and science or just philosophy of the mind – there is no consensus on what either of those terms mean, much less the combination. And then you have to postulate that something far more intelligent than us and dramatically different in almost all ways will somehow behave like the worst of us?

        I find it baffling that anyone would be stupid enough to proclaim that *anything* could possibly be the default for some possible future AI they know nothing about. There are no defaults; only speculation and dreaming and making stuff up based on vague notions that most certainly can’t be pinned down in easy ways.

  • schmidtleb

    Being a skeptic here…

    1. Every saturation curve looks like an exponential increase at its start. (A tiny numerical illustration follows this list.)
    2. It seems pretty obvious that the feasibility of general AI must follow logically from naturalism.
    3. No idea. Most importantly, no idea what it would be good for. Highly specialized AI steering your car is much more useful than AGI that would suddenly start wondering what it is all good for if you think about it, and why should I not just drive into the lake?
    4. Unless you believe in magical souls that can be transferred into a computer, simulating a brain will not make you any more immortal than taking a picture of you would. It’s a copy, not you. If it is not about immortality then the question reduces to #2.
    5.&6. Irrelevant, see #4.
    7. No idea but I expect trade-offs between the different characteristics to be involved.
    8. See #7.
    9. Fast take-off is the most ludicrous singularity claim. It would only work if all the necessary steps before implementation can be made entirely in the head, without trying out if any of it works in real life. When you build something, it is not enough to have an idea; there are experiments and prototypes involved. And even if a super-AI came up with a great idea that would actually work, it might turn out that you’d have to first produce a new tool or material to make it happen, and that would take five years to produce, and it depends on another tool that has to be built before and takes another 3 years… Imagine somebody coming up with blueprints for a working nuclear reactor in 1200 CE and you should get the picture.
    10. Why the hell would anybody be so stupid as to give a super-AGI power? If we do that we deserve to die. More likely they will be a curiosity or proof of concept in a lab, and that is that. Again, specialized AI is much more useful.
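
    On point 1, here is a tiny numerical illustration of how a saturation (logistic) curve is indistinguishable from an exponential early on; the parameters are arbitrary, chosen only to show the shape:

    ```python
    import math

    K, r, t0 = 1000.0, 0.05, 200.0   # arbitrary ceiling, growth rate, midpoint

    def logistic(t):
        """Saturation curve: grows toward the ceiling K, then flattens."""
        return K / (1 + math.exp(-r * (t - t0)))

    def exponential(t):
        """Pure exponential with the same starting value and growth rate."""
        return logistic(0) * math.exp(r * t)

    for t in (0, 30, 60, 200, 400):
        print(t, round(logistic(t), 2), round(exponential(t), 2))
    # For t = 0..60 the two columns are nearly identical; by t = 400 the
    # exponential overshoots the saturating curve by a factor of roughly 20,000.
    ```

    Which regime any particular technology is actually in is the empirical question.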

    • hf

      “Why the hell would anybody be so stupid as to give a super-AGI power?”

      He won’t tell us.

      More seriously, humans as we know them will make any possible mistake, given enough time. Do you believe we’ll destroy technological civilization another way before it comes to that?

      Do you disagree with what Chris said in a recent post about the human brain being a kludge? (This would include programmers’ brains. Again, the people making your “specialized AI” could have data that the AGI would take a while to acquire. But their programming choices, per Neal Stephenson, might be harder to keep secret from a super-intelligence.)

      Or do you think we’ll somehow make ourselves smarter without AGI? (The faith you state in the objective existence of personal identity could get in the way there.)

      I suppose there’s always the list of alternatives I mentioned in that other thread.

      • schmidtleb

        I do not quite understand the part about my faith in the existence of personal identity, but your second sentence is spot on. The idea that some people waste their time worrying about evil artificial intelligence in the face of our completely unsustainable overuse of fertile soils, freshwater, fossil fuels, phosphate, etc., and the looming crisis of climate change is truly stunning. We are using more than two planets’ worth of resources at the moment, and it is a physical impossibility for that to continue forever. When at some point hundreds of millions are starving to death while a few more hundreds of millions need to migrate to areas that are not turning into either ocean or desert, electronic gimmicks are not going to be our highest priority.

        Honestly, I would count it as a win if something like the internet still existed in 150 years. Whatever governments are left might need most of the energy they can get their hands on to feed, clothe and house the survivors.

        • hf

          Ah! Then you should donate to, or raise political support for funding, the Center for the Study of Existential Risk. If you think your position is a slam dunk, you should expect the people at Cambridge to come to the same conclusion after careful examination. This would either divert money from MIRI, or produce enough public support for issues like climate change as to make MIRI’s supporters irrelevant. (The latter already seems true to me, at least monetarily, but we’ll ignore that.)

          “I do not quite understand the part about my faith in the existence of personal identity”

          I said objective existence. This was in response to your authoritative-sounding claim that “It’s a copy, not you.”

          My definition is a little more complicated than the subjective one, which would say, ‘Anyone who thinks of themselves as a continuation of me is right.’ But if they also possess a lot of my memories, and my experiences inform their values, I’d feel much happier with that outcome.

          • schmidtleb

            If that is what heats your burrito, okay then. Is that any more of an immortality than having children, influencing the society around you, or writing a book? The point is still that after the brain scan (assuming it is non-destructive) one would get up, wonder why one is still stuck in this squishy body, grow old and die.

          • hf

            Well, no. I wouldn’t wonder why that happens to one of my continuations in the case you describe, any more than I’d wonder why one of me sees a dead cat in the following scenario:

            “This is the world where my good friend Ernest formulates his Schrödinger’s Cat thought experiment, and in this world, the thought experiment goes: ‘Hey, suppose we have a radioactive particle that enters a superposition of decaying and not decaying. Then the particle interacts with a sensor, and the sensor goes into a superposition of going off and not going off. The sensor interacts with an explosive, that goes into a superposition of exploding and not exploding; which interacts with the cat, so the cat goes into a superposition of being alive and dead. Then a human looks at the cat,’ and at this point Schrödinger stops, and goes, ‘gee, I just can’t imagine what could happen next.’ So Schrödinger shows this to everyone else, and they’re also like ‘Wow, I got no idea what could happen at this point, what an amazing paradox’. Until finally you hear about it, and you’re like, ‘hey, maybe at that point half of the superposition just vanishes, at random, faster than light’, and everyone else is like, ‘Wow, what a great idea!’”

  • http://www.patheos.com/blogs/crossexamined/ BobSeidensticker

    Nice summary. If I could focus on one point, however: I’m skeptical of Kurzweil’s Law of Accelerating Returns. It’s easy to get excited about where exponential change is happening (computers, cell phones, internet) and ignore where it’s not (transportation, civil engineering, energy). Sure, things are changing, just not exponentially.

    Solar cells and battery-powered cars (to take two examples), at the leading edge of technology change in their own fields, are ancient history. The photovoltaic effect was discovered in 1839, and the first car to exceed 60 mph was a battery-powered car, in 1899.

    I’m not trying to rain on anyone’s parade, just to remind us of what we all know: that the technologies of the past couple of centuries (rotary printing press, steam ships, telegraph and telephone, car and truck, airplane, nuclear power, and so on) were pretty big deals that we often ignore, fascinated as we are by what’s changing in our own world.

    If I may plug a book of mine: Future Hype: The Myths of Technology Change preaches some common sense about technology change, past and present.

  • David Pinsof

    Luke,

    What if human-level intelligence is not “one thing,” but rather a complex and coordinated bundle of precisely those kinds of “narrow” programs that AI researchers have already been building? This is what Cosmides and Tooby argue in “Unraveling the Enigma of Human Intelligence,” and it is one of the reasons I’m skeptical of the singularity. If it is true that human-level intelligence is merely hundreds or thousands of narrow specializations working together, then AGI becomes possible only if it is a massively funded program with thousands of scientists working on separate modules, and perhaps another set of scientists working on how to connect and coordinate those modules. Such an enormous project is unlikely to be initiated by accident and without thinking through all the ramifications. In fact, given the huge success of “narrow AI” compared to AGI, we have good reason to wonder whether such a project will ever be initiated at all. More likely, it seems to me, is the continued proliferation of ever-more-sophisticated narrow AIs and the continued marginalization of the AGI approach.

