Would you bet your life on your philosophy of mind?: Some thoughts on uploading

The overall reaction to this thread was not quite what I was hoping for, but I’m going to press ahead with some thoughts on consciousness anyway. Specifically, I want to talk about how the problem of consciousness relates to the issue of uploading.

Uploading, as readers of this blog probably know, is the idea of taking a human brain, scanning it in great detail, and then creating a very precise computer simulation of the original brain. In their Whole Brain Emulation: A Roadmap, Anders Sandberg and Nick Bostrom do a nice job of spelling out some possible success criteria for uploading, including:

  • 6a: “Social-role fit emulation”/“Person Emulation”: The emulation is able to fill and be accepted into some particular social role, for example to perform all the tasks required for some normally human job. (Socio-economic criteria involved)
  • 6b: “Mind emulation”: The emulation produces subjective mental states (qualia, phenomenal experience) of the same kind that would have been produced by the particular brain being emulated. (Philosophical criteria involved)
  • 6c: “Personal identity emulation”: The emulation is correctly described as a continuation of the original mind; either as numerically the same person, or as a surviving continuer thereof. (Philosophical criteria involved)

I’m convinced that uploading at what Sandberg and Bostrom call the 6a level will very likely be possible one day, and that conviction doesn’t rest on any controversial philosophical claims. But whether emulation at the 6b and 6c levels is possible even in principle is, for now at least, a philosophical question.

The 6b criterion corresponds to the philosophical problem of consciousness, while the 6c criterion corresponds to the philosophical problem of personal identity. Here, I’m going to focus on the 6b criterion, in part because 6c success without 6b success (if that even makes sense) seems pointless. What’s the point in living forever if you’ll never have any experiences ever again: no pleasure, no experience of a beautiful landscape, no nothing?

Now it seems like there are some fairly compelling arguments that an emulation that met the 6a criterion would also meet the 6b criterion, that is, be conscious. David Chalmers gives some in section 9 of his paper “The Singularity: A Philosophical Analysis,” arguments that derive from his earlier “fading qualia” and “dancing qualia” arguments. (Yes, you read that right: being a dualist doesn’t stop Chalmers from thinking a machine could be conscious.)

Here’s another line of argument: suppose we upload Dave, giving us (to borrow a moniker from Chalmers) DigiDave. If the uploading works as intended, then by hypothesis DigiDave’s outward behavior will be indistinguishable from Dave’s behavior.

That means that if you pinch DigiDave, he’ll react the same way Dave would have–at least, he will if the environment/body simulation is sophisticated enough to accommodate pinching. The same goes for any other experience DigiDave might have–or at least, any experience that can be simulated for him.

And DigiDave will even (again assuming the upload worked as intended) be able to engage in philosophical conversation, and will say the same kinds of things about philosophy that Dave would have said. This applies to conversations involving things like subjective experience and introspection.

Given all that, I think conversing with DigiDave for an extended length of time would create a very powerful impression that he was just as conscious as Dave was. To deny that DigiDave was conscious under those conditions would seem like outrageous chauvinism.

From there, the argument that uploads would be conscious is just this: if we already know that successful uploading would result in an emulation that could convince us it’s conscious, we shouldn’t need to wait around for that to actually happen. We should be willing to grant, conditionally, that if there were ever a successful emulation of a human being, it would be conscious.

But the purpose of this post isn’t to convince you to accept that conclusion. Rather, it’s to offer a word of caution: would you bet your life on the correctness of such arguments? If uploading becomes a reality, you’ll get the opportunity to make such a bet when you decide whether or not to upload.

For myself, while I think it’s highly plausible that emulations would be conscious, I’d probably want to put off uploading until I knew I was going to die soon anyway if I didn’t. I’m more confident that chimpanzees are conscious than I am that uploads would be conscious. The problem is that uploads would be very similar to us in some ways, very different in other ways, and consciousness seems too puzzling for us to be able to say for sure which of those ways would matter.

If you respond that you don’t think there’s anything mysterious about consciousness, I’d point out that what Dennett calls the “B team” (scientists and philosophers who, broadly speaking, agree with Chalmers that there is something deeply puzzling about consciousness) boasts many prominent names among its ranks, most of them people who will have no truck with dualism, including Steven Pinker (known for his reductionistic views on other matters). How sure are you that they’re wrong?

Of course, I’m not suggesting you agree with the B team just because they’ve got a bunch of prominent names, since there are an equal number of prominent names on the other side. What I am suggesting is that the split itself is a reason not to be too sure we’ve got a handle on consciousness–especially when we’re faced with having to bet our lives on our views.

And if the thought of betting your life isn’t enough to deter you, what about betting the entire future of humanity? Think of a scenario where a superintelligent AI forcibly uploads everyone, or, less dramatically, where everyone faces extreme economic pressure to upload. If emulations are not conscious, such scenarios would be a disaster.

Eliezer Yudkowsky himself, in his essay “Value is Fragile,” gives the following as one of several possible disaster scenarios:

…an agent that contains all the aspects of human value, except the valuation of subjective experience. So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so. This, I admit, I don’t quite know to be possible. Consciousness does still confuse me to some extent. But a universe with no one to bear witness to it, might as well not be.

Eliezer, again, has no truck with views like Chalmers’s, but even he hesitates here. Will you be so sure of your own views that you won’t hesitate too?

And note that if you think emulations wouldn’t be conscious, that doesn’t get you out of potentially having to make some horrible bets. Because if they aren’t conscious, it might seem we can do whatever we want to them without any moral qualms. But if we were wrong about that…

The broader point here is that I’m generally willing to trust human common sense to an extent, as long as we’re dealing with familiar cases. But technology may rapidly push us into situations where a lot hinges on our ability to correctly make such judgments about wildly unfamiliar cases.

And the problems here don’t seem much like standard scientific puzzles, where it’s at least clear how we go about getting the answer. Here, not only do I not know the answers, I can’t claim to know how to get them.
