Fun and Games with Minds and Brains

There have been a bunch of interesting comments already on my post on Mind/Brain division, so I’m pulling some out here to reply to in depth.

dcb wrote:

I have to agree that this is a hard sell to me. I am not an atheist, so perhaps there is something big that I am missing, but if the physical basis for the you-program is lost, in what sense is the “ported” version still you? Sure, it might think it is you, but wouldn’t that be basically a figment of its reconstructed imagination? 

This is especially so because, if we could in fact completely capture the physical state of your brain and create an effective duplicate, and assuming the result is in fact a working mind that cannot be distinguished from you as you are now (something I do not think is a given), it is conceivable that the original you might still be operational, so to speak, or that multiple copies could be stamped out. Which is actually “you”? None but the original, I think.

Persistence of identity problems are always interesting and lead to fun hypotheticals. If I ported my consciousness to a different body or different hardware, would I still be me? I believe my body is constantly changing as it is. Swapping meatsuits would be a jump discontinuity rather than a gradual change, but so would a medically necessary amputation or other radical physical change. In either of these examples, although some parts of my current identity would change and those changes would influence my future character, the continuity/persistence of my identity through time would not be destroyed. So I don’t think installing my consciousness on a different substrate weakens the persistence of my identity, except insofar as the culture shock of the shift could drive me mad.

As for the question of doubling by running different copies: I think both are equally me, but they diverge at the moment of creation.  Duplicate mes are essentially the same as multiple mes which exist in parallel worlds (you know, the kind that split when you make significant choices).  In dcb’s example, the only difference is that instead of running in parallel worlds, here my doppelgangers would exist in the same universe.  They both would have equal claim to being ‘me.’

Nate asked:

And with regards to the comment on dualism, are we not also proving that consciousness DOES require a physical manifestation, doing away with dualistic soul gobbledygook? Who cares if you’ve created a non-human analog for a human brain – it still only functions by virtue of a set of physical processes.

Playathomedad added:

As for potentially transferring a mind to another medium, replacing neurons with silicon is hardly separating the mind from the physical material from which it arises. One can copy a computer file from one medium to another. Would you say that the file exists independently of the material on which it is stored?

So, the short answer is that I do think that there’s a sense in which the software or file can exist without hardware.  Maybe this is the almost-math major in me talking, but I think of algorithms as entities that exist on their own, whether or not they are written down or run on a particular machine.  I don’t think I go quite so far when I think about humans, but it would be most accurate to say I’m agnostic on that question.  Either way, I don’t think the mind’s reliance on some kind of physical substrate is a disproof of dualism.  There’s still an important distinction between the algorithm and the substrate – one confers the intentionality on the other.  Additionally, the algorithm is not necessarily subject to entropy and death in the same way that the body is.
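To make that algorithm/substrate distinction a little more concrete, here’s a toy sketch (the counter, and the `MemorySubstrate`/`DiskSubstrate` names, are my own inventions for illustration, not anything from the original discussion): one update rule, indifferent to whether its state happens to live in RAM or on disk.

```python
import json
import tempfile


class MemorySubstrate:
    """State held in RAM."""
    def __init__(self):
        self._state = {"count": 0}
    def load(self):
        return dict(self._state)
    def store(self, state):
        self._state = dict(state)


class DiskSubstrate:
    """The same kind of state, held as a JSON file on disk."""
    def __init__(self, path):
        self._path = path
        with open(path, "w") as f:
            json.dump({"count": 0}, f)
    def load(self):
        with open(self._path) as f:
            return json.load(f)
    def store(self, state):
        with open(self._path, "w") as f:
            json.dump(state, f)


def step(substrate):
    """The 'algorithm': one update rule, indifferent to where its state lives."""
    state = substrate.load()
    state["count"] += 1
    substrate.store(state)
    return state["count"]


if __name__ == "__main__":
    ram = MemorySubstrate()
    disk = DiskSubstrate(tempfile.mkstemp(suffix=".json")[1])
    for _ in range(3):
        step(ram)
        step(disk)
    print(ram.load(), disk.load())  # identical trajectories on different media
```

The point of the sketch is not that minds work this way; it’s only that the rule and the stuff it runs on are separable in principle.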

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • http://www.blogger.com/profile/13116034158087704885 March Hare

    The most interesting thing about viewing a human as a bunch of (potentially knowable) algorithms is that not only could we leave the substrate, but we could be a bunch of equations written on (lots of) pieces of paper. The inputs into those equations would lead to outputs that would be exactly the same as if they were actual inputs to us. Which leads to the strange question: is it immoral to put some inputs into the equation because you cause what, in a living person, would be considered pain? Ultimately, and I can't see any way round it, the answer has to be: yes, some inputs generate outputs that are analogous to pain and so are immoral.

  • orgostrich

    Can I half-recommend a book, "Sense and Goodness Without God" by Richard Carrier? He spends a lot of time talking about minds as something which exists solely as patterns/algorithms, and AI passing the Turing test. I wasn't impressed with the book as a whole, which is why it's only a half-recommendation, but that section was interesting.

  • playathomedad

    Leah said – "So, the short answer is that I do think that there's a sense in which the software or file can exist without hardware. Maybe this is the almost-math major in me talking, but I think of algorithms as entities that exist on their own, whether or not they are written down or run on a particular machine."

    I think that our thoughts *feel* real. That is, they feel like objects that exist independently of our brains, because we are aware of the workings of our brains not in terms of hemispheres or neurons or chemical cascades, but in thoughts and sensations. We project that sensation of independence onto other information. A person's memories die with him. He can write them down, paint them, record them, but the full depth – the color, the scents, the tastes, the subjective value – of those memories is gone, and other people cannot ever know them. The same is true of a computer file erased from a hard drive, a burned book, a faded photograph. Mathematical concepts do not exist independently of the minds that grasp them. I think the feeling that they do is nothing more than a trick of memory.

  • http://www.blogger.com/profile/14053209866506474919 Grant Atkinson

    The best argument I've ever heard against dualism is that, since it doesn't lead to new insight into interacting with minds, it doesn't explain anything new from a pragmatic standpoint. If you assume the brain is a deterministic system operating only as the sum of intermolecular interactions, you can enjoy all the insight into how minds work that neuroscience has to offer. Saying that, in addition to the biochemical substrate, there's a metaphysically different conscious thing interacting with the brain doesn't seem to explain anything new.

    This argument fails, in my opinion, because it doesn't take into account the subjective experience of consciousness. As David Chalmers pointed out, we can easily imagine beings that speak and move and perform actions as an ordinary human does, but have no inner experience of consciousness. Philosophers like Susan Blackmore argue against the possibility of these zombies, but my understanding of her arguments amounts to "I don't like this idea, therefore I will ignore it, so there." Maybe someone from the peanut gallery can enlighten me.

    The subjective experience of consciousness is the most fundamental reality we're faced with. If we must reject dualism (which I don't), I would be more inclined to accept idealist monism than materialist monism, since a purely material world and the world of subjective experience seem incompatible to me. A universe where there were only experiences and ideas, but no matter, would be very odd given the success of materialistic thinking in science, but it doesn't have such fundamental incompatibilities.

    I guess what I'm getting at is that a world where material brains and idealist minds coexist and mutually interact seems to make the most sense. I don't have the slightest clue how to explain the mechanism of that interaction, but from a pragmatic standpoint it seems to be the best explanation of the world we experience. Perhaps the solution to the problem dcb submits is that only the original is you, and the facsimile has a new soul or else is a zombie? I don't know, and I can't think of a way to test this idea, but for now it's a hypothetical anyway.

  • http://kpharri.wordpress.com Keith

    It is tempting to think of a computer file transferred from computer to computer as having some sort of existence independent of its temporary hosts.

    However, computer files are no more real in this regard than waves in the ocean. Although waves move over great distances, no single molecule of water moves particularly far. It therefore makes little sense to think of water waves as having some sort of independent existence. They are merely the result of organized local movements in a physical medium. Similarly, computer files are manifestations of the local organization of bits in a computer's memory. "Moving" a file from computer to computer doesn't really move anything: it simply changes the value of bits in two machines' memory, in a coordinated fashion. Transferring a mind from one brain to another is closely analogous.

    Crucially, the only "existence" in these examples is the state of the local medium, be it water molecules, bits, or neurons.
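A toy sketch of the point Keith is making here (the variable names and the eight-byte "file" are invented for illustration): a "move" is just two coordinated changes of local state, with nothing actually travelling between them.

```python
# "Moving" a file, spelled out: set the destination's bits to match the
# source's, then clear the source. No object travels from A to B; only the
# state of each medium changes.

source = bytearray(b"the file")        # bits on machine A
destination = bytearray(len(source))   # blank region on machine B

for i, b in enumerate(source):         # "transfer": make B's bits match A's...
    destination[i] = b
for i in range(len(source)):           # ...then clear A's bits
    source[i] = 0

print(destination.decode())            # "the file" now exists only as B's state
```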

  • http://www.blogger.com/profile/11174257204278139704 Charles

    I would once again reference my comment to the previous post (which, from my point of view, I just made, before reading this post). I would reiterate that several of these posts seem to reference the mind being explained via an 'equation'; however, I see no evidence that it could be, using anything anyone currently involved in math would understand as an 'equation'.

    @Leah: Your last paragraph sounds dangerously close to what seems to me to be a very Thomistic (or at least how Aquinas might have put it) view of the mind-body problem. He believed in a soul, obviously, but didn't see it as a thing that was separate from the physical body. Naturally Aquinas was not a determinist, and so had no problem accepting 'things' as existing separate from physical objects or parts.

    @orgostrich: The Turing test is a joke of a test, and proves nothing. I think the weight put on a piece of software passing the test is actually distracting from any 'real' work in the field of AI. So what if a program can trick you into thinking it's human? What does that prove about anything?

    @Keith: Are you suggesting that a wave is not real? Can't things that aren't physical be real? Isn't mathematics, or the calculus, real even if it is not a physical thing? Your analogy of transferring a mind depends on minds being software on a brain that can have some kind of state change, an analogy that I find very wanting. It seems you are being dismissively reductionist: "It therefore makes little sense to think of water waves as having some sort of independent existence. They are merely the result of organized local movements in a physical medium." Why, if they are 'merely' this, does it make little sense to think of them as having existence? Under this argument I can reduce anything to nonsense other than the entire universe as a whole, single, indivisible object: 'It makes little sense to think of a star as having independent existence, since it is merely a collection of hydrogen reacting due to gravitation.' 'It makes little sense to think of a dog as having any independent existence, since the dog is merely an evolved part of a hugely complex ecosystem; a dog cannot exist without stars, planets, photosynthesis, humans to domesticate it, etc., etc.'

  • http://www.blogger.com/profile/13116034158087704885 March Hare

    Charles, assuming a purely natural universe, the brain is simply a bunch of biological activity responding to chemical and physical laws. Every single one of these reactions (and the prior state) can potentially be recorded and measured.

    The equations of how neurons react are known (or will be long before the point we can measure the interconnectedness of the brain with any degree of accuracy), and we simply use these equations along with the state of the brain at any given time, cascade the electrochemical signals (patterns) through the brain, and receive an output – the same output the person would give.

    If the computer is quick enough it could potentially predict your responses 100% accurately, or it could be done in about a billion years using pencil and paper. The point is that all three of these (the biological, the paper and the computer) are all you.

    Unless you have some crazy hang-up about your wet-ware (which we all do), but that is replaced at a cellular level every 7 years, so get over it. (i.e. I've had the same brush for 12 years; I've replaced the head 14 times and the handle 5 times. Is that really the same brush?)
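A minimal sketch of the kind of cascade March Hare describes, under the (very large) assumption that the relevant update rules and connection weights are known; the three-neuron network and threshold rule below are invented purely for illustration.

```python
# Tiny deterministic "cascade": given a recorded state (the stimulus) and known
# update equations (weights + threshold), propagate the signal and read off the
# output. Nothing about real neurons is claimed; this is only the shape of the idea.

def fires(inputs, weights, threshold=1.0):
    """A unit 'fires' (1) if its weighted input reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def cascade(stimulus):
    # layer 1: two units reading the raw stimulus
    a = fires(stimulus, weights=[0.6, 0.6])
    b = fires(stimulus, weights=[1.2, -0.4])
    # layer 2: one output unit reading layer 1
    return fires([a, b], weights=[1.0, 0.5])

# Same arithmetic, same answer, whether run here, on faster hardware,
# or worked out very slowly with pencil and paper.
print(cascade([1, 1]))  # -> 1
print(cascade([0, 1]))  # -> 0
```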

  • http://www.blogger.com/profile/13116034158087704885 March Hare

    Incidentally, the 'zombie problem' is far from a problem.

    Imagine, if you will, that a super-intelligence has studied the earth in minute detail and has decided to run a simulation of it. The 'actors' within it are the so-called zombies, since their reactions are fixed according to the program that has been written, but they have been programmed to act and feel as if they're not. They have the illusion of choice in each scenario, but the program is fixed. The super-intelligence can rewind and play it back to its heart's content, each time being identical.

    How do we know that is not what we are in now? Or how can we possibly know that this isn't some elaborate Truman Show and I (or you) am the only person with 'subjective consciousness'?

    The fact is we can't, but we have to assume that everyone is much like us and that we're what we feel we are, otherwise you fall down a solipsistic black hole (a.k.a. up your own ass). For pragmatic reasons, if no other, we assume that it's all real, but we should never forget that there is no way of knowing…

  • http://www.blogger.com/profile/11174257204278139704 Charles

    March Hare, so you are claiming that in the Chinese Room problem the 'system' understands Chinese? I have to side with Searle on that one. A mind is NOT a program.

  • http://www.blogger.com/profile/13116034158087704885 March Hare

    The Chinese Room is an abortion of an example.

    There is potentially a computer with enough storage to have a memory of (virtually) all previous human conversations in Chinese. It can pick and choose responses to each input based on that record of prior responses and on what has gone before in the current conversation. This is NOT understanding, but it may well converse better than the vast majority of Chinese speakers, native or not.

    The difference is that we are not talking about something simulating the external responses; we are talking about simulating the internal (biochemical) reactions, which is what makes you you.
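A toy sketch of the lookup-table responder March Hare describes (the tiny "corpus" of prior exchanges below is invented): it picks replies purely by matching the conversation so far against a record of past conversations, with nothing that looks like understanding.

```python
# A lookup-table "conversationalist": replies are keyed on what has been said
# so far, drawn from a record of prior exchanges. The corpus is illustrative only.

PRIOR_EXCHANGES = {
    ("你好",): "你好！最近怎么样？",
    ("你好", "最近怎么样？"): "还不错，谢谢。你呢？",
}

def reply(conversation_so_far):
    """Return a canned response keyed on the conversation history."""
    return PRIOR_EXCHANGES.get(tuple(conversation_so_far), "嗯。")  # shrug if unseen

history = ["你好"]
history.append(reply(history))
print(history)  # the "conversation" so far, produced without any comprehension
```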

