Would you bet your life on your philosophy of mind?: Some thoughts on uploading

The overall reaction to this thread was not quite what I was hoping for, but I’m going to press ahead with some thoughts on consciousness anyway. Specifically, I want to talk about how the problem of consciousness relates to the issue of uploading.

Uploading, as readers of this blog probably know, is the idea of taking a human brain, scanning it in great detail, and then creating a very precise computer simulation of the original brain. In their Whole Brain Emulation: A Roadmap, Anders Sandberg and Nick Bostrom do a nice job of spelling out some possible success criteria for uploading, including:

  • 6a: “Social-role fit emulation”/“Person Emulation”: The emulation is able to fill and be accepted into some particular social role, for example to perform all the tasks required for some normally human job. (Socio-economic criteria involved)
  • 6b: “Mind emulation”: The emulation produces subjective mental states (qualia, phenomenal experience) of the same kind that would have been produced by the particular brain being emulated. (Philosophical criteria involved)
  • 6c: “Personal identity emulation”: The emulation is correctly described as a continuation of the original mind; either as numerically the same person, or as a surviving continuer thereof. (Philosophical criteria involved)

I’m convinced that uploading at what Sandberg and Bostrom call the 6a level will very likely be possible one day, and that conviction doesn’t rest on any controversial philosophical claims. But whether or not emulation at the 6b and 6c levels is possible even in principle is, for now at least, a philosophical question.

The 6b criterion corresponds to the philosophical problem of consciousness, while the 6c criterion corresponds to the philosophical problem of personal identity. Here, I’m going to focus on the 6b criterion, in part because 6c success without 6b success (if that even makes sense) seems pointless. What’s the point in living forever if you’ll never have any experiences ever again–no pleasure, no experience of a beautiful landscape, no nothing?

Now it seems like there are some fairly compelling arguments that an emulation that met the 6a criterion would also meet the 6b criterion, that is, be conscious. David Chalmers gives some in section 9 of his paper “The Singularity: A Philosophical Analysis,” arguments that derive from his earlier “fading qualia” and “dancing qualia” arguments. (Yes, you read that right: being a dualist doesn’t stop Chalmers from thinking a machine could be conscious.)

Here’s another line of argument: suppose we upload Dave, giving us (to borrow a moniker from Chalmers) DigiDave. If the uploading works as intended, then by hypothesis DigiDave’s outward behavior will be indistinguishable from Dave’s behavior.

That means that if you pinch DigiDave, he’ll react the same way Dave would have–at least, he will if the environment/body simulation is sophisticated enough to accommodate pinching. The same goes for any other experience DigiDave might have–or at least, any experience that can be simulated for him.

And DigiDave will even (again assuming the upload worked as intended) be able to engage in philosophical conversation, and will say the same kinds of things about philosophy that Dave would have said. This applies to conversations involving things like subjective experience and introspection.

Given all that, I think conversing with DigiDave for an extended length of time would create a very powerful impression that he was just as conscious as Dave was. To deny that DigiDave was conscious under those conditions would seem like outrageous chauvinism.

The argument from there to thinking uploads would be conscious is just this: if we already know that successful uploading would result in an emulation that could convince us it’s conscious, we shouldn’t need to wait around for this to actually happen. We should be willing to grant, conditionally, that if there were ever a successful emulation of a human being, it would be conscious.

But the purpose of this post isn’t to convince you to accept that conclusion. Rather, it’s to offer a word of caution: would you bet your life on the correctness of such arguments? If uploading becomes a reality, you’ll get the opportunity to make such a bet when you decide whether or not to upload.

For myself, while I think it’s highly plausible that emulations would be conscious, I’d probably want to put off uploading until I knew I was going to die soon anyway if I didn’t. I’m more confident that chimpanzees are conscious than I am that uploads would be conscious. The problem is that uploads would be very similar to us in some ways, very different in other ways, and consciousness seems too puzzling for us to be able to say for sure which of those ways would matter.

If you respond that you don’t think there’s anything mysterious about consciousness, I’d point out that what Dennett calls the “B team” (scientists and philosophers who, broadly speaking, agree with Chalmers that there is something deeply puzzling about consciousness) boasts many prominent names among its ranks, most of them people who will have no truck with dualism, including Steven Pinker (known for his reductionistic views on other matters). How sure are you that they’re wrong?

Of course, I’m not suggesting you agree with the B team just because they’ve got a bunch of prominent names, since there are an equal number of prominent names on the other side. What I am suggesting is that this high-level disagreement is a reason not to be too sure we’ve got a handle on consciousness–especially when we’re faced with having to bet our lives on our views.

And if the thought of betting your life isn’t enough to deter you, what about betting the entire future of humanity? Think of a scenario where a superintelligent AI forcibly uploads everyone, or, less dramatically, where everyone faces extreme economic pressure to upload. If emulations are not conscious, such scenarios would be a disaster.

Eliezer Yudkowsky himself, in his essay “Value is Fragile,” gives the following as one of several possible disaster scenarios:

…an agent that contains all the aspects of human value, except the valuation of subjective experience. So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so. This, I admit, I don’t quite know to be possible. Consciousness does still confuse me to some extent. But a universe with no one to bear witness to it, might as well not be.

Eliezer, again, has no truck with views like Chalmers’s, but even he hesitates here. Will you be so sure of your views that you won’t hesitate too?

And note that if you think emulations wouldn’t be conscious, that doesn’t get you out of potentially having to make some horrible bets. Because if they aren’t conscious, it might seem we can do whatever we want to them without any moral qualms. But if we were wrong about that…

The broader point here is that I’m generally willing to trust human common sense to an extent, as long as we’re dealing with familiar cases. But technology may rapidly push us into situations where a lot hinges on our ability to make correct judgments about wildly unfamiliar cases.

And the problems here don’t seem much like standard scientific puzzles, where it’s at least clear how we go about getting the answer. Here, not only do I not know the answers, I can’t claim to know how to get them.

  • Chris

    “6c” emulation could be useful as a backup, right? In that, if 6c is true but 6b is false, then at least in principle you could rebuild a conscious person from the information you had stored and have the person thus rebuilt be a valid continuation of the original person.

    Which is FWIW (which is nothing) the view I actually hold.

  • eric

    The problem is that uploads would be very similar to us in some ways, very different in other ways, and consciousness seems too puzzling for us to be able to say for sure which of those ways would matter.

    Why is ‘very different in other ways’ a problem? Right now I have back pain. I’m sure this influences my state of mind. But there are times in the past when I didn’t have it and there will hopefully be times in the future when I don’t have it. At that future time, that ‘me’ will be different from current ‘me’ because I underwent some procedure (drugs, operation, exercise) to alleviate my back pain. So if downloading eliminates it, and my personality changes because of that, so what?

    There will always be personality changes brought on by, let’s call them ‘substrate changes,’ whether that substrate is organic or silicon. I don’t see a reason to treat the latter as any more philosophically upsetting than the former. To use a math analogy, our worry should be discontinuous changes to our functioning. Continuous ones are not (or should not be) an issue – we already experience those, and they don’t cause any sort of philosophical or existential crises.

  • trucreep

    I always thought that the brain did not store information in ways we traditionally think – meaning you can’t just take an exact image or copy of the brain – a lot of what we consider as our “self” consists of chemical and electrical reactions or uhh something!

    • Darren

      As I understand it, our memories and personality and what not do have a physical component. Each neuron has from, oh, I don’t know, 1 to 20-ish connections to other neurons. Some of those connections are to fellow neurons close by, others are to neurons further away. Also, each of those connections has a certain ‘strength’. It is this pattern of neurons connected to other neurons, and the strengths of those connections, that turns the electrochemical On/Off blinking of any particular neuron into actual thought. It is also this pattern that could, some day, presumably be measured and replicated in software.

    • Chris Hallquist

      It seems unlikely that momentary fluctuations in neurotransmitter and ion levels would matter much here. After all, those go crazy during epileptic seizures, and people come out of those OK.

      • eric

        I think they matter very much.
        If the stable neural structure were the mind, then I’d remain consciously me when I sleep, and alcohol would not change my personality, because neither of those events changes my stable neural structure; it only affects the activity within the brain. Yet I lose my consciousness when I sleep, figuratively and literally. Equally clearly, after these things happen I come back to a conscious state of mind so close to the pre-event one that nobody can tell the difference.
        So it seems to me that there is enough information in our stable neural structure that it allows the brain to essentially rebuild or recreate the same mind every morning, but that that structure is not the mind per se. The activity is the mind, and if you have the right structure but wrong activity, you can easily end up with a downloaded person that is perpetually asleep, effectively drugged, in a constant state of epileptic seizure, etc.
        In terms of our self-awareness, we ARE those momentary fluctuations. When they stop or change, we can lose awareness or have it changed. If you like computer analogies, the brain is the hardware or maybe the OS, but not the program we’re really interested in preserving. If you prefer your characterizations more philosophical, you might instead say that our stable neural structures appear to be a necessary but not sufficient component of mind.

  • hf

    I hereby give any Friendly AI permission to advance me towards godhood faster than Mitchell Porter by functionalist means.

  • Darren

    This discussion reminds me of my long-ago ponderings as to the spiritual implications of that bastion of nerd-dom, Star Trek. I always wondered about the transporter, as the description was that it disassembled one’s atoms, then reassembled them at some remote location. Was not the transporter, in effect, killing the original by disassembling them at the atomic level, then creating an exact duplicate at the destination?

    Being a Theist at the time, and thus believing in souls and the continuation of consciousness after death, I imagined a Star Trek afterlife, with hundreds of James T. Kirks, and Spocks, and Scotties all milling about, and only a few McCoys, what with his distrust of transporters…

    Every now and then a new Kirk would pop into being and, turning to the closest Scotty, exclaim, “Dammit Scotty! You never told me this thing was killing me each time I used it!”

    So, uploads may well be conscious; should I upload, then that construct will likely think it is me, but it will not be the same me as the one residing in the meat-body nearby.

    But does that matter?

    If an upload exists in a computer processor, if that upload has conscious experience, then is it still conscious ‘between’ processor cycles? Since its consciousness is defined by a near infinity of discrete moments of thought interspersed with equal moments of oblivion, in what way can that upload think that it has continuity of identity? It thinks it is the same, moment to moment, but in reality it is only recreated, exactly as it was before, just one clock tick forward, billions of times per second.

    I suspect it is much the same with us humans and our meat-brains. We too have something analogous to a processor clock; we too have the feeling of continuity, but I suspect this is only a perception of continuity…

    So maybe it does not matter so much after all.

    • eric

      There was an award-winning sci-fi short story from a few years back called “Think Like a Dinosaur” that had an interesting take on transporter duplication. Though just to manage expectations, it has little to do with actual dinosaurs. There was also a movie about this problem, but I hesitate to give the title since the transporter duplication thing is, essentially, the entire surprise ending.

      • Darren

        There was, I think, an Outer Limits episode in the ’90s that had a premise of an interstellar transporter, with the twist that the “original” stuck around. The machine was designed so that the original was always disintegrated immediately after transport – the plot of the episode being a malfunction where the delete step did not occur, and the original decided it did not want to be subsequently disintegrated…

        • eric

          That Outer Limits episode was, in fact, based on the short story “Think Like a Dinosaur.” :)

          • Darren

            Nice, I will definitely read the story!

  • Daniel Engblom

    The main common ground here seems to be our shared ignorance (huge, though not complete, I hope people can agree): Chalmers, and to some extent Dennett as well, agree that we don’t know.

    The problems seem to arise in what you decide that ignorance can tell you.
    Dennett and others seem to follow the honorable scientific mindset of saying that we can work on this problem.
    What I’m often left wondering about is whether the “B Team” is either:
    A) Saying that the “A Team” is going about understanding consciousness the wrong way, or
    B) Saying it’s impossible, that no one should even bother to try to solve this, and that we should just throw our hands up in the air in a defeatist and anti-scientific manner.

    If it’s A then that is perfectly reasonable, though one would hope the dialogue would be constructive, pointing out HOW things are being done the wrong way and what could possibly be done to fix things. Maybe there are excellent examples out there already of people doing this valuable hard work of pointing out errors and pointing out possible fruitful avenues of discovery.
    If it’s B then I have no respect for the “B Team”; they lose all credibility to voice their opinions on these matters while the philosophers and scientists are working to figure out the right questions and tools for the problem at hand.

    • http://verbosestoic.wordpress.com Verbose Stoic

      It’s generally A, but it sometimes looks like B because they tend to reject third-person, and therefore strictly scientific, approaches, which opponents then classify as B.

      Some do say B, but they say it by arguing that the actual properties are such that we can’t actually get access to them, which is more than just throwing our hands up in the air and being defeatist: it’s pointing out that we’re trying to get answers to something we are simply incapable of getting the answers to.

    • http://homeschoolingphysicist.blogspot.com PhysicistDave

      As a dyed-in-the-wool “B Teamer” (and a Ph.D. physicist), I think what most of my team are saying is that the ongoing work in neuroscience to solve the so-called “easy problem” is just fine, but we should not kid ourselves into thinking this will solve the “hard problem.” Even worse, we should not deny that the “hard problem” exists.

      I had a chance to discuss this with a neuroscientist at MIT, Gerald Schneider, last spring. My kids and I had sat in on a session of his course, and afterwards, we had a chance to chat with him, and I mentioned that what was really interesting was the “hard problem.” He turned with a grin to my kids and told them that physicists like their dad thought neuroscientists like him should be attacking the “hard problem” but it was just too difficult at the present time. I smiled and assured him that we physicists accepted his and his colleagues’ judgment with regard to current research possibilities, but we still had hopes for attacking the “hard problem” over the long term. I found it interesting that he viewed my perspective as typical of physicists (makes sense, of course).

      Incidentally, in my opinion the most readable intro to all this is not Chalmers but Colin McGinn’s The Mysterious Flame along with Colin’s essay “Consciousness and Cosmology,” published in Davies’ and Humphreys’ Consciousness: Psychological and Philosophical Essays: the latter essay is a slightly tongue-in-cheek argument as to why dualism might be true. Colin is, incidentally, an atheist and is scientifically literate.

      Dave Miller in Sacramento

      • Chris Hallquist

        Hi Dave, thanks for the recommendations from McGinn. I’ve read some of his work before, but not those pieces. I’ll have to read them, and if I like them too I may start recommending them in place of Chalmers.

        • http://homeschoolingphysicist.blogspot.com PhysicistDave

          Chris,

          I hope it is clear that I am not trying to trash Chalmers (I have a well-deserved reputation for trashing philosophers on more than one occasion, so I think I need to be explicit when I am not trashing some guy!): indeed, I did make it all the way through The Conscious Mind, and I think Chalmers had some excellent points, though I find his pan-psychism rather hard to swallow.

          Colin’s main advantage over Chalmers is his much, much greater readability and brevity, which makes him much more suitable for the casual reader. I also think his lighthearted, whimsical defense of dualism in “Consciousness and Cosmology” deserves more serious consideration: there are days when I think dualism might be true. As a physicist, I am tempted to “solve” the “problem of consciousness” the way we physicists usually go about solving fundamental problems: “Let there be an ectoplasmic quantum field psi such that…” Alas, it’s probably not that simple.

          Dave

          • Chris Hallquist

            Oh, I didn’t think you were trashing Chalmers! No worries!

          • hf

            Alas, it’s probably not that simple.

            Well no, not if this includes the idea that Aristotle (or an even smarter Aristotle with more mental tools) could figure out its existence from pure introspection.

            ‘Functionalism’, to me, means the fact that consciousness is compatible with a wide variety of ontologies and possible laws of physics. I’ll go further and claim that even if you told smarter-Aristotle some general facts about quantum mechanics, he couldn’t figure out that subjective awareness requires some particular object, in a world like ours – because it doesn’t. The nature of consciousness might tell you, at most, that our world needs certain properties (regularity of some kind perhaps) in order for us to perform certain functions.

            Now of course, a fact could be true even if we can never know it. But our clearest examples of this involve self-reference problems. To me that still points to functionalist explanations. It suggests that we could be looking right at a functionalist definition of consciousness, e.g. in orthonormal’s posts, and fail to recognize it with certainty.

            Mitchell Porter (who sometimes posts on LessWrong.com) would seem to disagree with me here. But see my previous comment.

          • PhysicistDave

            hf,

            Well, I guess there are really two kinds of “functionalism,” aren’t there?

            What I’d call “definitional functionalism” is really just old-fashioned behaviorism: by definition if the input-output structure of some system tracks the input-output structure of conscious beings, then that system is conscious. I don’t want to beat a dead horse, since I think behaviorism was pretty definitively refuted decades ago. No one really believes in behaviorism even for material objects (e.g., even if electric cars functionally imitate fossil-fuel cars, that does not prove they are the same inside). And, the external behavior of us humans just is not the same as our internal experience (e.g., as shown by the fact that all of us from time to time choose to hide our inner feelings).

            What I’ll call “ontological functionalism” is different: this is the claim that, not by definition but as a matter of empirical fact, different systems that have similar functional organization are also similar in possessing consciousness.

            In all honesty, this does not “feel right” to me as a physicist: the problem is that to define “functional organization” in physically reductionist terms is difficult, maybe impossible. The main problem, I think, is that “functional” descriptions in practice tend to hinge on the viewpoint of the observer: cf. the old joke that a chicken is an egg’s way of making another egg.

            So, if ontological functionalism is true, it really seems to me that it will end up going beyond physics and will really end up being something like dual-aspect theory, epiphenomenalism, Cartesian dualism, etc.

            But, of course, I do not really know: the truth is that guessing the shape of future scientific discoveries is a very high-risk game. In physics, some people came close to guessing relativity before Einstein, but no one did guess (and I do not think anyone could have guessed) the bizarre structure of quantum mechanics until empirical discoveries forced us step by step into the quantum theory. And we physicists are still arguing passionately about what QM really means.

            I see no reason to think that consciousness should be any easier to understand than quantum mechanics. My guess is that we are all in for some big surprises.

            Personally, I like surprises.

            Dave

            P.S. Curiously, I just started a conversation with Mitchell on another topic on another blog. Small world.

  • http://deusdiapente.wordpress.com J. Quinton

    My problem, admittedly coming from someone who doesn’t know a whole lot about the subject, is that the brain doesn’t exist independently of the body. What is the “me” without, say, my abundance of testosterone? I think they would have to emulate the entire body’s endocrine system and not just “the brain/mind”.

    I’m pretty sure one of the scientists a lot more knowledgeable than me has probably already thought up that problem and maybe even its possible solution, so my objection probably isn’t all that new.

    • Chris Hallquist

      You’re not wrong. The focus is on brain emulation, because the assumption is that that would be the hardest part, the part requiring the greatest detail.

  • AndrewR

    I agree that “I don’t know” is the best answer to the question of whether there is something ineffable about consciousness that must necessarily be lost in a simulation. However, we could not get to the point of being able to build a simulation you could upload your brain state into without gaining a vastly better understanding of the mechanics of our minds than we have at the moment. As a result, I think that spending a lot of effort worrying about the problem _now_ is premature (unless you’re super-optimistic on the timeframe like Kurzweil). As we get closer to the ability, we will understand its problems better.

    It’s a bit like worrying about how we’re going to stop people killing their own grandfathers when we give them a time machine. It makes for fun thought experiments, but these will have a low ROI until the physics of time machines (if there is such a thing) is more fully understood.

    • Chris Hallquist

      I’m somewhat sympathetic to this viewpoint; in the past I’ve hoped that advances in neuroscience would clarify the problem of consciousness. The truth is, though, I have no idea how this could happen. (Which is not the same as saying it won’t happen.)

  • http://theotherweirdo.wordpress.com The Other Weirdo

    And if you upload your simulation and it subsequently attempts to launch a “Destroy All Humans!” attack, who is responsible? The simulation or you? I can see it now: oh, you want to buy an airplane ticket? Well, first you must upload a copy of your brain into this simulation we’ve got running so we can verify it won’t try to crash it into New York City.

