“The Singularity” and why the world will change more than (almost?) anyone can imagine in the next century or two

This post is apropos of nothing, aside from being something I wish were more widely understood. It has to do with AI and uploading and “the Singularity” (a confusing term that can actually mean several different things). I’m going to focus on uploading because in some ways it’s the most straightforward thing.

If you really want to understand the uploading issue, I recommend Anders Sandberg and Nick Bostrom’s “Whole Brain Emulation: A Roadmap.” The thing is over a hundred pages long, though, so here’s a summary. The short version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.
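To make the shape of that process a bit more concrete, here is a deliberately toy sketch of the pipeline at that level of description. Every function here is a hypothetical placeholder for what is, in reality, an enormous and partly unsolved research problem, not a real API:

```python
# Deliberately toy sketch of the scan -> reconstruct -> simulate pipeline.
# Each function is a placeholder for a hard research problem, not a real API.

def scan_slices(preserved_brain):
    """Image every thin slice of the preserved brain at high resolution."""
    return [image for image in preserved_brain]            # placeholder

def reconstruct_connectome(slice_images):
    """Align the slice images and trace out neurons and synapses."""
    return {"neurons": [], "synapses": []}                  # placeholder wiring diagram

def emulate(connectome, timestep_ms=1.0):
    """Step a computational model of every neuron and synapse forward in time."""
    state = connectome
    while True:
        yield state                                          # placeholder brain state

# whole_brain_emulation = emulate(reconstruct_connectome(scan_slices(brain_slices)))
```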

If this process manages to give you a sufficiently accurate simulation, the implications are huge. For starters, given the right interface (which would have to include a lower-resolution body and environment simulation), you could have a conversation with the simulation that would, from an outside point of view at least, be indistinguishable from a conversation with the simulated person. Perhaps more importantly, such simulations could potentially replace humans in a huge range of fields.

For uploading to be successful in the “can have conversations and replace human workers” sense, surprisingly few assumptions are required. As Sandberg and Bostrom say, “Physicalism (everything supervenes on the physical) is a convenient but not necessary assumption.” Epiphenomenal dualism would actually be just as good an assumption, and uploading could even work on interactionist dualism, if the non-physical part of the mind is also possible to simulate.

Furthermore, the question of whether uploads could replace human workers is independent of philosophical questions like “would an upload really be conscious?” or “would a simulation of a dead person be a form of survival for that dead person?” (Sandberg and Bostrom call these three questions the 6a, 6b, and 6c success criteria, respectively.) And a “yes” to the first question is all it takes for uploads to radically change the world.

Why? Because digital minds would have some immediate advantages over the made-of-meat minds we have right now. Luke Muehlhauser and Anna Salamon’s excellent paper “Intelligence Explosion: Evidence and Import” explains a number of these advantages in the section on “AI advantages.” Here are two that are especially relevant to uploads (I’ll put some rough numbers on both in a couple of toy sketches after the excerpts):

Communication speed. Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly. (Of course, this also depends on the efficiency of the algorithms in use; faster hardware compensates for less efficient software.)

Duplicability. Our research colleague Steve Rayhawk likes to describe AI as “instant intelligence; just add hardware!” What Rayhawk means is that, while it will require extensive research to design the first AI, creating additional AIs is just a matter of copying software. The population of digital minds can thus expand to fill the available hardware base, perhaps rapidly surpassing the population of biological minds.

Duplicability also allows the AI population to rapidly become dominated by newly built AIs, with new skills. Since an AI’s skills are stored digitally, its exact current state can be copied, including memories and acquired skills—similar to how a “system state” can be copied by hardware emulation programs or system backup programs. A human who undergoes education increases only his or her own performance, but an AI that becomes 10% better at earning money (per dollar of rentable hardware) than other AIs can be used to replace the others across the hardware base—making each copy 10% more efficient.
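To put a rough number on the communication-speed point: the 75 m/s figure comes from the quote above, while the figure I use for electronic signals is my own ballpark assumption (on the order of two-thirds the speed of light in a cable).

```python
# Back-of-the-envelope comparison of signal speeds.
# 75 m/s is the Kandel et al. figure quoted above; the electronic-signal
# speed is an assumed ballpark (~2/3 the speed of light in a cable).

AXON_SPEED = 75.0           # meters per second, upper end for biological axons
ELECTRONIC_SPEED = 2.0e8    # meters per second, assumed for copper/fiber

speedup = ELECTRONIC_SPEED / AXON_SPEED
print(f"Electronic signals are roughly {speedup:,.0f}x faster than axonal spikes.")
# ~2,700,000x -- which is why porting minds to faster hardware matters so much.
```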
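The duplicability point, and especially the 10% replacement scenario, is just as easy to make concrete. A minimal sketch with made-up numbers (the fleet size and earnings figures are arbitrary):

```python
# Toy numbers: a fleet of 1,000 copies, each earning $100 per dollar of
# rented hardware, versus a new variant that earns 10% more. Replacing the
# fleet is a copy operation, so the improvement propagates to every copy.

FLEET_SIZE = 1000
BASELINE_EARNINGS = 100.0                     # per dollar of hardware (made up)
IMPROVED_EARNINGS = BASELINE_EARNINGS * 1.10  # the 10%-better variant

before = FLEET_SIZE * BASELINE_EARNINGS
after = FLEET_SIZE * IMPROVED_EARNINGS        # every copy replaced by the better one

print(f"Fleet output before: {before:,.0f}")
print(f"Fleet output after:  {after:,.0f} (+{after / before - 1:.0%})")
# Contrast with human education, which improves only the one person educated.
```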

I think it’s safe to say that once we’re able to make copies of our most talented and skilled folk (from R&D and marketing departments to scientists and intellectuals), life on Earth will be changed forever. Oh, theoretically we could decide not to use the technology, but I suspect we’ll be unable to resist the temptation of such potentially huge benefits. (For more on this, see the blog and papers of economist Robin Hanson.)

How long until this happens? Well, as Yogi Berra once said, making predictions is hard, especially about the future. But on the one hand, I’d be surprised if getting working human whole brain emulations only took a couple of decades, as some have predicted. Neuroscience is messy, messier than many people think.

On the other hand, the fact that we’re at the point where we can say, in some detail, what further technological advances are necessary to make whole brain emulation feasible (as Sandberg and Bostrom do in their paper) makes me think the technology isn’t too far off. I wouldn’t be surprised if we had it within another century, and I would be surprised if it took much longer than that. I can’t say anything for sure, but a couple of centuries seems like a safe bet for when we’ll have uploading.

Of course, it’s at least theoretically possible that some kind of catastrophe could greatly damage scientific and technological progress in all areas, including progress towards uploading. But such a catastrophe would, in itself, be a huge change from “life on Earth as we know it.” So, while I wouldn’t bet too much on specific scenarios (specific scenarios always have a lower probability than more general predictions), it’s a safe bet that some kind of huge changes are in humanity’s future.

How big is huge? I think we can get an idea by comparing likely futures to the visions of science fiction. In most popular science fiction (like Star Trek), humanity has gotten all kinds of cool new toys several centuries from now, but the vast majority of the characters aren’t that much different from folk today. Data and Voyager’s The Doctor are in the minority.

And that Star Trek model of technological progress is largely faithful to what’s happened in the past: we get cool new toys, but people stay people. Going forward, though, it seems likely that technological advance will ultimately mean changes in what it is to be a person, or perhaps I should say a “person-like entity,” since it will be controversial whether many of the things that are likely to exist in the future will even be people.

Update: I wrote a follow-up post with clarifications to some of the above.
