What should skeptics believe about the singularity?

This is a guest post from Luke Muehlhauser, author of Common Sense Atheism and Worldview Naturalism, and Executive Director of MIRI. Luke does not intend to persuade skeptics that they should believe everything he does about the technological singularity. Rather, he aims to lay out the issues clearly so that skeptics can apply the tools of skepticism to a variety of claims associated with “the singularity” and come to their own conclusions.

Does God exist? Nope.

Does homeopathy work? Nope.

Are ghosts real? Nope.

In a way, these questions are too easy. It is important to get the right answers about the standard punching bags of scientific skepticism, because popular delusions do great harm, and they waste time and money. But once you’ve figured out the basics — science trumps intuition, magic isn’t real, things are made of atoms, etc. — then you might want to apply your critical thinking skills to some more challenging questions.

Anthropogenic global warming (AGW) is a good case study. Most skeptics now accept AGW, but it wasn’t always so: see Michael Shermer’s story about his flip from AGW skeptic to activist. The argument in favor of AGW is, one must admit, more complicated than the argument against homeopathy.

And what about those predictions of what Earth’s climate will look like 3-4 decades from now? Do we have good reasons to think we can predict such things — reasons that hold up to a scientific, skeptical analysis?

What about the Search for Extraterrestrial Intelligence (SETI)? Does it count as science if we haven’t heard from anyone and we’re not sure what we’re looking for? And, can we make any predictions about what would happen if we did make contact? Maybe SETI is a bad idea, because alien civilizations with radio technology are probably much more advanced than we are, and probably don’t share our weird, human-specific values. Is there any way to think rationally about these kinds of questions, or is one person’s guess as good as another’s?

Thankfully, skeptics have discussed subjects like global warming and SETI. See, for example, Massimo Pigliucci on global warming and CSI’s Peter Schenkel on SETI.

But my purpose isn’t to launch new debates about global warming or SETI. I’m not an expert on either one.

I do, however, know a thing or two about another hard, fairly complicated problem ripe for skeptical analysis: the “technological singularity.”

In fact, the singularity issue has much in common with global warming and SETI. With global warming it shares the challenge of predicting what could happen decades in the future, and with SETI it shares the challenge of reasoning about what non-human minds would want, and what could happen if we encounter them.

How can we reason skeptically about the singularity? We can’t just assume the singularity will come because of Moore’s law, and we can’t just dismiss the singularity because it sounds like a “rapture of the nerds.” Instead, we need to (1) replace the ambiguous term “singularity” with a list of specific, testable claims, and then (2) examine the evidence for each of those claims in turn.

(If we were discussing AGW or SETI, we’d have to do the same thing. Is the Earth warming? Is human activity contributing? Can we predict what the climate will look like 50 years into the future? Which interventions would make the biggest difference? Could SETI plausibly detect aliens? Can we predict anything about the level of those aliens’ technological development? Can we predict anything about what goals they’re likely to have? These questions involve a variety of claims, and we’d need to examine the evidence for each claim in turn.)

It would take a long book to do the whole subject justice, but for now I can point interested readers to starting points on several distinct singularity-related propositions:

  1. The Law of Accelerating Returns. In 2001, Ray Kurzweil wrote: “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially…” Is this true? Starting points include Béla Nagy et al. (2010) and Scott Alexander. (For the arithmetic behind the “20,000 years” figure, see the first sketch after this list.)
  2. Feasibility of AGI. Artificial General Intelligence (AGI, aka Strong AI or human-level AI) refers not merely to systems that excel at narrow tasks like arithmetic or chess, but to “systems which match or exceed the cognitive performance of humans in virtually all domains of interest” (and which are not whole brain emulations; see below). One singularity-related claim is simply that AGI is technologically feasible. To investigate, start with Wikipedia’s article on Strong AI.
  3. AGI timelines optimism. Sometimes, people claim not just that AGI is feasible, but that humans are likely to build it relatively soon: say, within the next 50 years. Let’s call that “AGI timelines optimism.” Is this claim justified? Some good starting points are Armstrong & Sotala (2012) and Muehlhauser & Salamon (2013).
  4. Feasibility of whole brain emulation. A whole brain emulation (WBE) would be a functional computer simulation of an entire human brain. Another singularity-related claim is simply that WBE is technologically feasible. Here, you might as well start with Wikipedia.
  5. Feasibility of indefinite survival. Some claim not just that WBE is feasible, but that WBE will enable minds to make backup copies of themselves, allowing them to survive indefinitely — until the heat death of the universe approaches, or at least for billions of years. Here, Sandberg & Armstrong (2012) is a starting point.
  6. WBE timelines optimism. Some forecasters think not just that WBE is feasible but also that we are likely to build it relatively soon: say, in the next 50 years. Let’s call that “WBE timelines optimism.” To examine this claim, you can start with Sandberg & Bostrom (2012).
  7. Feasibility of superintelligence. For our purposes, let’s define “superintelligence” as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” (either AGI or WBE). Another singularity-related claim is that superintelligence is technologically feasible. Chalmers (2010) has a good preliminary analysis (he refers to superintelligence as “AI++”).
  8. Intelligence explosion. My description: “Once human programmers build an AI with a better-than-human capacity for AI design, [its] goal for self-improvement may motivate a positive feedback loop of self-enhancement. Now when the machine intelligence improves itself, it improves the intelligence that does the improving. Thus… [a] population of greater-than-human machine intelligences may be able to create a… cascade of self-improvement cycles, enabling a… transition to machine superintelligence.” Once again, Chalmers (2010) is a good starting point.
  9. Slow takeoff vs. fast takeoff. Some AI theorists predict that if an intelligence explosion occurs, the transition from human-level machine intelligence to machine superintelligence will take years or decades. This would be a “slow takeoff.” Others predict the transition would happen in mere hours, days, or months: a “fast takeoff.” A good place to start on this subject is Yudkowsky (2013); a toy illustration of the difference appears in the second sketch after this list.
  10. Doom by default. Some theorists predict that the default outcome of an intelligence explosion is doom for the human species, since superintelligences will have goals at least somewhat different from our own, and will have more power to steer the future than biological humans will. Is that likely? A good place to start is Muehlhauser & Helm (2013).
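To make item 1 concrete, here is a back-of-the-envelope sketch (in Python) of the arithmetic behind Kurzweil’s “20,000 years” figure. The doubling-per-decade assumption baked into the code is Kurzweil’s premise, not an established fact; the sketch reproduces his arithmetic, it does not test whether the premise is true.

```python
# Back-of-the-envelope check of Kurzweil's "20,000 years of progress" figure.
# Assumption (Kurzweil's premise, not an established fact): the rate of
# technological progress doubles every decade, with the year-2000 rate as baseline.

DECADES = 10          # the 21st century, split into ten decades
YEARS_PER_DECADE = 10

total_progress = 0.0  # measured in "year-2000-equivalent" years of progress
rate = 1.0            # progress rate at the start of the century (baseline)

for _ in range(DECADES):
    rate *= 2                                  # the rate doubles each decade
    total_progress += rate * YEARS_PER_DECADE  # progress accrued that decade

print(round(total_progress))  # ~20,460, roughly Kurzweil's "20,000 years"
```

Whether measured “returns” such as chip cost-effectiveness actually follow curves like this is the empirical question the starting points in item 1 address.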
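Items 8 and 9 can also be made somewhat more concrete with a toy model. The sketch below is purely illustrative (my own construction, not a model from the literature): at each step a system converts its current capability into a design improvement, and whether the trajectory looks like a slow or a fast takeoff depends entirely on the assumed returns to self-improvement.

```python
# Toy illustration (not anyone's actual model) of how assumed returns on
# self-improvement drive the slow-takeoff vs. fast-takeoff distinction.
# Intelligence is in arbitrary units, with 1.0 = roughly human level.

def takeoff(returns_exponent, steps=20):
    """Each step, the system improves itself; the size of the improvement
    scales with its current intelligence raised to `returns_exponent`."""
    intelligence = 1.0
    trajectory = [intelligence]
    for _ in range(steps):
        intelligence += 0.1 * intelligence ** returns_exponent
        trajectory.append(intelligence)
    return trajectory

# Sub-linear returns: growth stays gradual (a "slow takeoff" flavor).
print([round(x, 2) for x in takeoff(returns_exponent=0.5)])

# Super-linear returns: each improvement makes the next one bigger,
# and the curve bends sharply upward (a "fast takeoff" flavor).
print([round(x, 2) for x in takeoff(returns_exponent=2.0)])
```

The point of the toy model is only that the slow/fast dispute reduces to an empirical question about the returns on self-improvement, which is roughly the question Yudkowsky (2013) takes up.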

Also note that later this year, Oxford University Press will be publishing a scholarly monograph on intelligence explosion and machine superintelligence, written by Oxford philosopher Nick Bostrom. For now, you can hear an overview of Bostrom’s views on the subject in this video interview.

A more accessible book on the subject, also forthcoming this year, is James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era.
