Computer with consciousness?

A British newspaper has published an intriguing bit of speculation. From “Are we on the brink of creating a computer with a human brain?”:

What is it, in that three pounds of grey jelly, that gives rise to the feeling of conscious self-awareness, the thoughts and emotions, the agonies and ecstasies that comprise being a human being?

This is a question that has troubled scientists and philosophers for centuries. The traditional answer was to assume that some sort of ‘soul’ pervades the brain, a mysterious ‘ghost in the machine’ which gives rise to the feeling of self and consciousness.

If this is the case, then computers, being machines not flesh and blood, will never think. We will never be able to build a robot that will feel pain or get angry, and the Blue Brain project will fail.

But very few scientists still subscribe to this traditional ‘dualist’ view – ‘dualist’ because it assumes ‘mind’ and ‘matter’ are two separate things.

Instead, most neuroscientists believe that our feelings of self-awareness, pain, love and so on are simply the result of the countless billions of electrical and chemical impulses that flit between the brain’s equally countless billions of neurons.

So if you build something that works exactly like a brain, consciousness, at least in theory, will follow.

In fact, several teams are working to prove this is the case by attempting to build an electronic brain. They are not attempting to build flesh and blood brains like modern-day Dr Frankensteins.

They are using powerful mainframe computers to ‘model’ a brain. But, they say, the result will be just the same.

Two years ago, a team from IBM’s Almaden research lab and the University of Nevada used a BlueGene/L supercomputer to model half a mouse brain.

Half a mouse brain consists of about eight million neurons, each of which can form around 8,000 links with neighbouring cells.

Creating a virtual version of this pushes a computer to the limit, even machines which, like the BlueGene, can perform 20 trillion calculations a second.
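
The scale those figures imply is worth pausing on. A quick back-of-envelope calculation, using only the numbers quoted above:

```python
# Scale of the half-mouse-brain model, from the article's own figures.
neurons = 8_000_000          # half a mouse brain
links_per_neuron = 8_000     # connections each neuron can form
synapses = neurons * links_per_neuron
print(synapses)              # → 64000000000, i.e. 64 billion connections
```

Sixty-four billion connections, each updated many times a second, is what pushes even a 20-trillion-calculation-per-second machine to its limit.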

The ‘mouse’ simulation was run for about ten seconds at a speed a tenth as fast as an actual rodent brain operates. Nevertheless, the scientists said they detected tell-tale patterns believed to correspond with the ‘thoughts’ seen by scanners in real-life mouse brains.

It is just possible a fleeting, mousey, ‘consciousness’ emerged in the mind of this machine. But building a thinking, remembering human mind is more difficult. Many neuroscientists claim the human brain is too complicated to copy.

Henry Markram’s Blue Brain team is undaunted. They are using one of the most powerful computers in the world to replicate the actions of the 100 billion neurons in the human brain. It is this approach – essentially copying how a brain works without necessarily understanding all of its actions – that will lead to success, the team hopes. And if so, what then?

Well, a mind, however fleeting and however shorn of the inevitable complexities and nuances that come from being embedded in a body, is still a mind, a ‘person’. We would effectively have created a ‘brain in a vat’. Conscious, aware, capable of feeling pain and desire. And probably terrified.

And if it were modelled on a human brain, we would then have real ethical dilemmas. If our ‘brain’ – effectively just a piece of extremely impressive computer software – could be said to know it exists, then do we assign it rights?

Would turning it off constitute murder? Would performing experiments upon it constitute torture?

And there are other questions, too, questions at the centre of the nurture versus nature debate. Would this human mind, for example, automatically feel guilt or would it need to be ‘taught’ a sense of morality first? And how would it respond to religion? Indeed, are these questions that a human mind asks of its own accord, or must it be taught to ask them first?

Thankfully, we are probably a long way from having to confront these issues. It is important to stress that not one scientist has provided anything like a convincing explanation for how the brain works, let alone shown for sure that it would be possible to replicate this in a machine.

So if this can’t be achieved, I suppose that would be proof of the existence of the soul. If it could be achieved, would that undermine the Christian faith? I don’t think it would. The Bible emphasizes the resurrection of the body, so I have no problem with the notion that consciousness inheres in our physical makeup, even if we also have some immaterial spirit that survives in some manner with God. A conscious computer would not be human, of course. It would lack the Divine Image. But it would bear our image.

Say a conscious computer could be built, one that moreover had will, feelings, and a moral sensibility. Suppose it even had a religious impulse. Would it need to be evangelized? Or would it be unfallen?

About Gene Veith

Gene Veith is Professor of Literature at Patrick Henry College, the Director of the Cranach Institute at Concordia Theological Seminary, a columnist for World Magazine and TableTalk, and the author of 18 books on different facets of Christianity & Culture.

  • EconJeff

    Didn’t Battlestar Galactica address these issues?

  • CRB

    I wonder: would it be able to “Make a decision for Christ”?
    If so, would that result in abandoning the fight against synergism?

  • Matt C.

    I don’t see how it could be unfallen given its creators. Even if it rose to the level of being considered our offspring, our offspring are all sinful.

    Could it be evangelized? Did Jesus die to save it?

  • Bryan Lindemood

    Made in man’s image, would it then be compoundedly fallen? A super-fallen-techno-brain-creature. This would make an awesome movie if said brain could be mobilized and make its own weaponry. After extremely destructive flashes of anger at the fallen people who dared to pull it into existence, it would finally compute remorse, repent, and defy all its creators’ best hopes and become a Lutheran! Its baptism after three years of Catechesis would prove to be too much death for the poor techno-brain. Could part II take place in heaven itself?

  • John

    Robert A. Heinlein made this the heart of his novel, THE MOON IS A HARSH MISTRESS, over forty years ago.

  • Joe

    Did these people not watch Terminator? It will not end well.

  • Kirk

    @ Joe

    Seriously. I’m going to need to add an EMP to my post-apocalyptic survival kit.

  • http://www.toddstadler.com/ tODD

    Is it me, or have they missed part of the question? It’s not only “what gives rise to consciousness”, but also: “what is consciousness?”

    I mean, is it merely a computer saying that it knows it exists? I can program a computer to do that.
    print("I know that I exist")
    Of course, someone will say that doesn’t count, since I told it to say that. Fair enough. But does that criticism change merely because I’ve written several billion lines of code, not just one? At what point is the computer not doing what we told it to?

    Talk of “pain” and “anger” as being somehow indicative of consciousness strikes me as odd. It’s not hard to program a computer to feel pain.
    if (measure_something() > $threshold) { pain_routine(); }
    I mean, that’s not entirely dissimilar to how it works in humans, is it? For any computer, do human concepts like that have meaning? Or, if we can get a computer to mimic such concepts, does it only have meaning because we, as humans, impute it to the computer (which we programmed)?

    So clearly some smart people here have read much more from other smart people who no doubt have pondered this, even before the dawn of computers. Can one of you tell us: what, really, is consciousness? Do only humans have it, according to your definition?
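
tODD’s point, that the behaviors people cite as evidence of consciousness are trivial to mimic, can be made concrete with a minimal sketch (every name and threshold here is made up for illustration):

```python
# A machine that "asserts self-awareness" and "feels pain" in exactly
# the shallow sense tODD describes. Nothing here is more than branching
# on a number; the names and threshold are invented.
PAIN_THRESHOLD = 40

def sensor_reading(stimulus):
    # Stand-in for some physical measurement (heat, pressure, etc.).
    return stimulus * 2

def respond(stimulus):
    if sensor_reading(stimulus) > PAIN_THRESHOLD:
        return "ouch"                   # the "pain routine"
    return "I know that I exist"        # the "self-awareness routine"

print(respond(30))   # → ouch
print(respond(10))   # → I know that I exist
```

Whether writing several billion such lines instead of a dozen changes anything is, of course, exactly the question being asked.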

  • http://www.oldsolar.com/currentblog.php Rick Ritchie

    tODD asks the right question: What is consciousness? But then he slips back into talking about whether or not the computer is doing what it was told to do. I think that is a separate question. Let’s say we have a robot with a computer brain. Would we say we told it to pick up the coffee cup only if we gave it the instructions, “Pick up the coffee cup”? Or could it be said to have autonomy if some lower level instructions (e.g. explore your environment, raise objects to your camera level to observe them) gave rise to picking up the coffee cup? And I can imagine the kinds of instructions to be much lower level than this, so that the behavior would be very unpredictable.

    Some of the dark machine future scenarios do bother me. I know from programming that the fact that a computer does only what you tell it to do is cold comfort. You often cannot follow what all the instructions will mean, taken together. (That’s the definition of a bug. It does what you told it to do, and what you told it to do is not what you wanted it to do.)

    But that is a separate question from consciousness. I have no opinion on that one. I have doubts that they will have anything like human consciousness unless they are built with bodies like ours. I think our embodiment has a lot to do with how that works for us.

  • http://www.toddstadler.com/ tODD

    Rick (@9), the points I raised about following instructions, pain, and so on, were just addressing some popular concepts of consciousness, as seen in the article. My point is that it’s trivial to mimic any of those things that people assert are equal to consciousness, so clearly consciousness is something else. But what?

    I think it’s a little funny that you “have doubts” that computers will have consciousness, but don’t offer up (or have) a working definition. I mean, maybe computers already do have consciousness — it all depends on the definition, right?

    I agree with your take on programming, bugs, and so forth. I’m a programmer, too (of a sort), and I know all too well how computers follow instructions perfectly, even if it’s not what I meant. Still, even if the programming gets highly abstracted, such that we’re no longer directing robots to simply “move from X to Y”, but rather, merely, to “explore and observe”, there will still be some functionality, programmed by humans, that tells it why it will do so, or in what way. Perhaps we will program a robot to “like” round objects, or blue ones, but does the robot truly like those things, or is it still doing what we told it to?

  • Cincinnatus

    Next thing you know, the computer will discover lasers.

    Then we’re all doomed.

  • WebMonk

    I think one of the underlying assumptions in the article is that humans are essentially really complex computers that operate according to very broad, base-level “programmed” commands acting on inputs from the environment.

    The “programming” is almost infinitely varied in source, type, and strength – other people, environment, genetics, genetic expressions in response to environment, etc. A near-infinite number and variety of extremely base-level programming inputs act on the near-infinite variety of genetic makeups from the moment of conception forward; what finally comes out is something so insanely complex that it is impossible to determine the specific programming that causes us to do something.

    At a certain level of complexity, “self-awareness” begins, though it is fundamentally a programmed response, sort of like what comments 8-10 describe. “Consciousness” is thus self-awareness and self-determination caused by inputs and programming that are too complex and obscure to understand in anything close to fullness.

    As for computers, can we program something we don’t understand? Sure. Put in more connections than we can feasibly trace. We could randomize them in some way to remove any unintentional biases in setting up the connections (or put them in a strict mathematical pattern, or whatever). Put in only very base-level instructions. Let the base level instructions respond to a wide variety of inputs in a wide variety of ways, and then let it modify itself so that the instructions can modify themselves.

    VERY quickly you have something that is far beyond our ability to comprehend in totality, and it is eminently possible that it could develop self-awareness and self-determination behaviors.

    I think that would probably qualify as “conscious” in this scenario. (remember what I said I think “consciousness” is considered to be)

    Would it have “true” free will or consciousness, or just an indistinguishable simulation? I think that the assumptions of the article would answer that when something gains abilities that are indistinguishable from consciousness, then it has consciousness, because that is essentially what humans are believed to have.

    I think that’s NOT what humans are, but at that point, true consciousness is something that is strictly God-given, and anything else is an imitation. That’s not a very satisfying definition of consciousness, though.
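
WebMonk’s recipe (randomized connections, base-level rules, rules that modify themselves) can be sketched in a few lines. This is purely an illustration of the idea; every number and detail here is invented:

```python
import random

random.seed(0)
N = 8  # units; real proposals involve billions

# Randomized connections, to avoid any deliberate bias in the wiring.
weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]
state = [random.uniform(0.0, 1.0) for _ in range(N)]

def step(state, weights):
    # Base-level rule: each unit responds to its weighted inputs,
    # clipped to the range [0, 1].
    new_state = [min(1.0, max(0.0, sum(w * s for w, s in zip(row, state))))
                 for row in weights]
    # Self-modification: co-active units strengthen their connection,
    # so the "program" rewrites itself as it runs (a crude Hebbian rule).
    for i in range(N):
        for j in range(N):
            weights[i][j] += 0.01 * new_state[i] * state[j]
    return new_state

for _ in range(100):
    state = step(state, weights)
```

After a hundred steps the weights no longer resemble anything a programmer specified, which is the point: the system quickly outruns our ability to trace it in totality. Nothing about that, of course, implies the result is conscious.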

  • http://www.brandywinebooks.net Lars Walker

    Such scenarios trouble me less than they did when I was young.

    All my life, it seems to me, scientists have been announcing that they are very near to “solving” or “explaining” or “replicating” some aspect of humanity. But when they make the discoveries they want, they just find a whole new level of complexity they can’t explain.

    Maybe I’m intellectually lazy, but I think the scientists are underestimating the challenge by an order of magnitude at least.

  • http://www.geneveith.com Veith

    And the thing is, making such a computer would be a huge and technically complex task, demanding the input of multitudes of scientists, engineers, and technicians. And yet, the human mind, so we are told, evolved randomly, with nobody’s input!

  • Darin

    Check out the following article for further thoughts on this topic, especially the discussion of the “Chinese Room”:

    http://www.thenewatlantis.com/publications/why-minds-are-not-like-computers

  • http://www.oldsolar.com/currentblog.php Rick Ritchie

    tODD@10
    “I think it’s a little funny that you “have doubts” that computers will have consciousness, but don’t offer up (or have) a working definition.” I think the problem here is that when we use the term, we are pointing to a kind of experience that each of us has. When people try to offer an abstract definition, they often allow something other than what I know I am pointing to to be called “consciousness.” Daniel Dennett “explains” consciousness by defining it as something different from what most people are talking about, and then explaining that. For me to say someone is conscious means that I believe that they are having a set of experiences like I have when I am in a certain state. But I don’t know exactly where to draw the line on this. For instance, I know what it is to have my five senses. I can on some level conceive of having a sixth. But what if I had that sixth sense alone and not the other five? Should I count that as consciousness? Could a computer develop some kind of sensory awareness that should count?

    “Still, even if the programming gets highly abstracted, such that we’re no longer directing robots to simply “move from X to Y”, but rather, merely, to “explore and observe”, there will still be some functionality, programmed by humans, that tells it why or it will do so or in what way.”

    Some of this was based upon Jeffrey Satinover’s description of neural networks, which work in a very different fashion from traditional programming. At a certain level, they can begin to accomplish tasks in ways that we really cannot follow.

    As a side question: do you think that, because your brain was created, you are programmed just to follow instructions?
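
The neural-network idea mentioned above can be illustrated with the smallest possible example: a single perceptron that learns the AND function from examples rather than being handed the rule. (This is a generic textbook sketch, not Satinover’s own example.)

```python
# A one-neuron "network" that learns AND from examples. No line of this
# code states the AND rule; it emerges from mistake-driven weight updates.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct, ±1 on a mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Scaled up to millions of weights trained on real data, the same dynamic produces behavior nobody can follow line by line, which is the sense in which such systems differ from traditional programming.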

  • Sean

    I have a hypothesis: for it to be consciousness and not just a program, it would need the programmed laws/rules to be broad guidelines (breakable, unlike most programs). For that to happen, the computer would need to possess free will. I have no idea how or if free will can be developed in a computer. To my understanding, by definition, if free will can be programmed into computers by people, it would be much more complex than any program I’ve ever seen or heard of.

    My conclusion is that it may be hypothetically possible, but I doubt people will ever achieve it.

    Since this is just a philosophical hypothesis it could be wrong, but I believe that the logic used is accurate, therefore the conclusion should be accurate.

