Picking an Atheist’s Brain… er… Mind

Over at The Warfare is Mental, blogger cl was perplexed by a recent comment of Luke’s at Common Sense Atheism.  Luke wrote:

What tools do you use when you think philosophically about God or morality or other subjects? Among other tools, you use your mind. Knowing how the mind works can help you do philosophy better, just as knowing how a camera works can make you a better photographer.

Prompting cl to ask:

If you are an atheist, do you think it’s accurate to use the term mind? I understand that it sufficiently conveys the point in everyday conversation, but, epistemically–shouldn’t an atheist limit themselves to belief in brains only?

I often attempt to envision the positions I’d hold if I were an atheist. As regards mind, if I were an atheist, I would probably categorize it with soul as an equally non-existent entity. In my experience, many an atheist has asked, “What does it mean to say one has a soul? Where is the evidence for the soul? What type of entity is the soul?” Similarly, what does it mean to say that one has a mind? Where is the evidence for the mind? What type of entity is the mind?

I’ll take a quick crack at this question, since I’m one of those atheists who don’t use the terms brain and mind interchangeably.  When I talk about the brain, I’m talking about that three-pound lump of flesh which runs all autonomic bodily functions (breathing, secreting hormones, etc.), which integrates various sense perceptions into a model of the physical world, and which runs the ‘software’ that we think of when we think of thinking.

I think of my ‘I’ as being in some way analogous to an algorithm.  Given data (from the outside world, from imagining alternatives, from abstract reasoning) it comes to conclusions which are put into action.  Sometimes action is simple physical action (I want a book, it’s out of reach, so I get up and get it).  Sometimes action is more diffuse (I know a friend is going through a tough time, so I try to find more opportunities to be helpful to them).  And sometimes action is choosing to reprogram my thinking algorithm (I’ve recognized that a particular way I think is unhealthy, so I practice squashing it and work to eliminate it from my habits).

All these processes are ‘run’ on the ‘hardware’ of my brain, and, currently, there’s no way I could transfer the me-program to any other piece of hardware.  But I don’t think of my me-program or any similar program as being intrinsically limited to running on a human brain.  I think silicon based AI is a possibility, and it may even be possible to port existing human consciousnesses onto other substrates.

In that case, my existence as an embodied entity, running my software on a physical brain would be an important part of my history and identity, but not necessarily a constraint on who I might be in the future.  Therefore, I use the word ‘brain’ to talk about the current home and tools of my mind as well as the bits of program that perform maintenance on my body.  I use the word ‘mind’ to talk about everything I’d want to port over to a new piece of hardware, if I ever had the opportunity.
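The brain-as-hardware, mind-as-portable-program analogy above can be put into a toy code sketch. Everything here is invented for illustration (the `MindState` class, its fields, and the serialization format are hypothetical, not anything from the post); the point is just that program state, unlike the machine it runs on, can in principle be copied to a new substrate:

```python
# A toy model of the post's analogy: the 'mind' as program state that
# could, in principle, be 'ported' to different hardware. All names
# here are hypothetical, invented purely for illustration.

import json


class MindState:
    """Everything you'd want to port: memories, habits, and so on."""

    def __init__(self, memories, habits):
        self.memories = memories  # data accumulated from experience
        self.habits = habits      # learned mappings from situation to action

    def decide(self, observation):
        # A crude 'algorithm' mapping inputs to actions, as in the post.
        if observation in self.habits:
            return self.habits[observation]
        return "gather more information"

    def serialize(self):
        # Capture the full state as a substrate-independent blob.
        return json.dumps({"memories": self.memories, "habits": self.habits})

    @classmethod
    def load(cls, blob):
        # 'Port' the state onto a new piece of hardware.
        data = json.loads(blob)
        return cls(data["memories"], data["habits"])


# The same 'mind' running on one substrate, then ported to another.
original = MindState(
    memories=["read a book"],
    habits={"book out of reach": "get up and fetch it"},
)
ported = MindState.load(original.serialize())

assert ported.decide("book out of reach") == "get up and fetch it"
```

The sketch glosses over the hard part, of course: for a real mind there is currently no known way to read the state out, which is exactly the post's caveat.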

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • http://www.blogger.com/profile/04746612189094458441 Lukas

    David Chalmers, who is an atheist and a determinist, would agree with you that it "may even be possible to port existing human consciousnesses." However, he has a different (and, I think, more correct) understanding of the mind. For a non-technical intro, see his article in Scientific American: http://consc.net/papers/puzzle.pdf

    And here's more of his stuff: http://consc.net/consc-papers.html

  • http://thewarfareismental.wordpress.com/ cl

    Leah,

    I'm not sure whether "the etiquette" would call for me to respond here or there. I guess I'll respond at both. When you say,

    "…I don't think of my me-program or any similar program as being intrinsically limited to running on a human brain. … my existence as an embodied entity, running my software on a physical brain would be an important part of my history and identity, but not necessarily a constraint on who I might be in the future…"

    I have to say, it sounds like you're open to the ideas of an afterlife and consciousness existing outside a brain. At least, more open than most atheists I've encountered. However, when you claim,

    "…currently, there's no way I could transfer the me-program to any other piece of hardware…"

    I'm not so sure. Could you explain precisely what you mean by "transfer the me-program to any other piece of hardware"? For example, in your view, would computer programming count as an example of such? Typing one's thoughts into a computer? Or, do you mean – as I suspect – something more thorough? If so, can you provide a hypothetical example?

    Further, in your view, would the potential success of "mind uploading" falsify the claim that consciousness requires a human brain? I think it would. That said, if we falsify the claim that consciousness requires a human brain, haven't we effectively proven the plausibility of dualism?

  • dbp

    I have to agree that this is a hard sell to me. I am not an atheist, so perhaps there is something big that I am missing, but if the physical basis for the you-program is lost, in what sense is the "ported" version still you? Sure, it might think it is you, but wouldn't that be basically a figment of its reconstructed imagination?

    This is especially so because, if we could in fact completely capture the physical state of your brain and create an effective duplicate, and assuming the result is in fact a working mind that cannot be distinguished from you as you are now (something I do not think is a given), it is conceivable that the original you might still be operational, so to speak, or that multiple copies could be stamped out. Which is actually "you"? None but the original, I think.

  • http://www.blogger.com/profile/05635417731229647465 Nate

    Depending on how the brain physically processes and stores things like memories and our personalities, I see no reason something like "porting consciousness" would be impossible. I'm not a neuroscientist (yet), but if it's a physical mechanism that can be replicated, then theoretically creating a brain that is chemically identical to yours should create an identical consciousness, no? I do know enough biology to know it could never be that simple, though.

    And with regards to the comment on dualism, are we not also proving that consciousness DOES require a physical manifestation, doing away with dualistic soul gobbledygook? Who cares if you've created a non-human analog for a human brain – it still only functions by virtue of a set of physical processes. It seems to me that the original distinction is more between the brain as an organ that all human beings have and the mind being the set of chemical changes and data storage accumulated in an individual's brain.

  • playathomedad

    cl – Your comparison of mind and soul is a false synonymy. There is a huge difference in how those words are used and defined, though the religious are wont to conflate the two. A soul is defined as being independent of the body. It is expressly non-material. Atheists accept that mind arises from the material; it is one and the same and not independent at all. While consciousness and mind feel immaterial, they are no more so than, say, software on a computer disk. As for potentially transferring a mind to another medium, replacing neurons with silicon is hardly separating the mind from the physical material from which it arises. One can copy a computer file from one medium to another. Would you say that the file exists independently of the material on which it is stored?

  • http://www.blogger.com/profile/13116034158087704885 March Hare

    Ultimately I will do a post on this, but I think it needs to be said here: there is no 'you'.

    What you think of as you is a constantly fluctuating arrangement of neurons and (chemical and electrical) potential. As soon as one thing changes, the 'you' that you thought you were has gone, never to be seen again, replaced by a very similar but ultimately different person. However, the changes between the two 'you's are so slight as to give the illusion of permanence. It is only when we do not see someone for a while, or there is some catastrophic physical change or chemical imbalance, that we notice the difference in others (or even ourselves).

    Why point this out? Well, the 'you' that was uploaded would be an identical copy of the you being scanned, but as soon as one input was slightly different the two 'you's would diverge rapidly. E.g., imagine you duplicated your physical self: would you die to save your copy? Immediately after the copy it makes no difference which one survives (e.g. Star Trek teleporters!), but as soon as there is any divergence you have two separate people and two wills to live.

    So if the Star Trek teleporter malfunctioned and beamed you up to the ship, but also left the original on the planet, would the one on the planet consent to be destroyed? I think not. Yet you consent to be destroyed in order to be beamed aboard.
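March Hare's divergence point can be sketched as a toy program (purely illustrative; the `Person` class and event strings are invented for this example): two copies are indistinguishable at the moment of duplication, and become distinct people the moment their inputs differ.

```python
# Toy sketch of the divergence argument: two identical copies remain
# 'the same person' only until their experiences differ. All names
# here are hypothetical, invented for illustration.

import copy


class Person:
    def __init__(self, memories):
        self.memories = memories

    def experience(self, event):
        # Each new input changes the state, so the 'you' of a moment
        # ago is gone, replaced by a very similar but different person.
        self.memories.append(event)


original = Person(memories=["beamed aboard"])
duplicate = copy.deepcopy(original)  # identical at the moment of copying

# Immediately after the copy, there is no fact distinguishing the two.
assert original.memories == duplicate.memories

# One different input each, and they are now two separate people with
# separate histories (and, per the comment, separate wills to live).
original.experience("stayed on the planet")
duplicate.experience("arrived on the ship")

assert original.memories != duplicate.memories
```

Note that `deepcopy` matters here: a shallow copy would share the `memories` list, so the two 'people' would never diverge at all.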

  • http://thewarfareismental.wordpress.com/ thewarfareismental

    Leah, Nate, playathomedad, and dbp: I tried to respond to y'all here yesterday, but, for some reason, couldn't. So, instead of continuing to hassle with it, I posted my thoughts to my own thread, here.

  • http://www.blogger.com/profile/11174257204278139704 Charles

    I find the idea that the mind is 'software' running on the brain, and not just 'the brain', to be uncompelling and fostered by a poor choice of analogy. However, I find the idea that the 'mind' is a product of some non-physical soul to be unscientific and therefore meaningless in a scientific discussion. I think the drive to 'transplant' a human 'mind' to an artificial construct is hampered by these two things. The poor analogy of software to hardware has caused a great number of people, who otherwise don't believe in a non-physical soul, to envision 'downloading' their consciousness out of their brain and into a newly grown biological brain or some kind of 'ROM construct', to use the cyberpunk term. These ideas (no soul/downloadable software) seem to be mutually exclusive.

    Now I also have major problems with the idea that we could construct so-called hard AI, or otherwise build a substrate capable of containing a complete human consciousness by artificial means (maybe growing a brain is possible; I won't get into that at this point) – primarily because everyone I have ever read working in the field of hard AI seems to have a total misunderstanding of the way (at least my) brain works. They seem to think that the discovery of the mathematics of computation would somehow make this possible, using the (again poor) analogy that since a Turing-complete machine can do any and all computable tasks, all we need is to build a sufficiently complex Turing machine to allow hard AI to work.

    I don't know about you, but it is self-evident to me that my 'mind' is capable of doing things that Turing machines cannot; therefore there must be some other process outside of 'computation' to explain what my mind does (brain or soul or otherwise). But I see no one discovering anything like this, or even investigating anything like this (with published papers etc.); all I see are folks building complicated (usually brute-force, sometimes algorithmic) Turing-style machines to do things like look up chess moves in a table or look up speech patterns in a table. These things – winning chess matches against Kasparov, or discerning spoken grammar – are admittedly very impressive, but thinking they lead down a path towards human consciousness is like thinking stacking bricks in a staircase will eventually get you a path to send humans to Mars. They are similar analogically but not practically. In other words: "Ray Kurzweil is wrong!"

    I apologize for being a little rambly – I only respond while I am at work, and you've finally touched on a topic that I have extensively thought about!

  • http://www.blogger.com/profile/01202543574090953195 Tony

    I can't claim any special knowledge, but I'd refer to Ruth Garrett Millikan's book "Language, Thought, and Other Biological Categories" from 1984. Millikan carefully deals with questions of the essences of minds and clones and robots. She also covers topics of "purpose", or what she calls "function". Her trick or device is to say that function or essence refers to mechanisms which ensure organic fitness in an ecology or a market, i.e., why an organism (or a gene) is reproduced or not, or why a product is manufactured and sold, or not. I think analytically her approach is sound. Dennett is a fan.

    On larger issues than just biology, I think Brouwer got it with his ideas of construction and intuition. He would say (and I too) that in general, when we tread in areas we can't fully grok, we have two choices: either assume an infinite, or set up and arrange building blocks to yield infinity in praxis. For instance, to say infinity is the absence of a limit, and to describe how to control reasoning if no limit is available, is a way to harness infinity without conceding the presence of an infinite being. Reproductive fitness is Millikan's biological equivalent of infinity – a statement that under certain environmental conditions, no limit to life is calculable.

    However, Brouwer is more than that. I believe he also asserts there are still two issues out there. One is the existence or non-existence of constructions. Brouwer's line is: any construction is the product of human (or sentient) thought, and such a thought is fundamentally solipsistic: it creates a world in which it has validity. Each thinker of constructions is a solipsistic deity. The second issue pressed by Brouwer is how to deal with the presence of other beings who will claim other constructions or the existence of infinites. In a sense, this is the problem of life. Brouwer, in "Life, Art, and Mysticism", has a dark view here.

    For him, if a construction is good, then a solipsist can reproduce it. In a sense, the solipsist is the sole verifier of reality, because reality must be consistent, and it can only be consistent in a solipsistic view. Those beings who assert inconsistency without allowing it to be subjected to a test of construction by oneself (i.e., submission to solipsism) are unfaithful, and maybe evil. So there is much more evil than good in the world of life, and this evil opposes art and intuition. Entropy is evil.

