- Part 1: Introduction / Where is the Soul Hiding?
- Part 2: The Argument from Mind-Brain Unity
- Part 3: The God Part of the Brain
- Part 4: Philosophical Problems with the Soul
- Part 5: The Mysteries of Consciousness and the God of the Gaps
Central to many religions, both Eastern and Western, is the doctrine of dualism: that there is a non-material essence called the soul that inhabits and animates our bodies and is the cause and the source of consciousness, personality, free will, thoughts, ideas, feelings, emotions, memories, the sense of self – in short, everything a person thinks of as “I”. Theists typically believe that the soul survives the physical death of the body and goes on to whatever comes after death, be it an afterlife in Heaven or Hell or reincarnation in a new body.
I am an atheist because I have found no evidence that leads me to believe that the supernatural claims of any religion are true, and the notion of the soul is no exception. In fact, as this essay will demonstrate, there is strong evidence against the existence of a soul in humans, pointing instead to the alternative of materialism – that the mind is not separate from the brain, but that it arises from and is produced by neural activity within the brain. Simply stated, the mind is what the brain does.
As a practical matter, it should be easy to judge between dualism and materialism, because unlike most religious doctrines, the notion of the soul is an idea that would seem to have testable consequences. Specifically, if the human mind is the product of a “ghost in the machine” and not the result of electrochemical interactions among neurons, then the mind should not be dependent on the configuration of the brain that houses it. In short, there should be aspects of the mind that owe nothing to the physical functioning of the brain.
Until recently, this prediction was difficult to test, but modern scientific innovations have thrown light on the subject. Medical techniques such as CAT scans (short for computed axial tomography), PET (positron emission tomography), and MRI (magnetic resonance imaging) allow the structure and function of the living brain to be studied. Scientists can see which areas of the brain “light up” with activity when a healthy person performs a mental task, or they can examine patients who have suffered injury or disease to see which parts of the brain, when damaged, correspond to which deficits of neural function.
And already, a disappointing result for theists has emerged. Some mental functions are localized, while others are more diffuse, but no aspect of the mind has yet been found that does not correspond to some area of the brain. In fact, we know precisely which brain regions control many fundamental aspects of human consciousness.
The image of the brain that is familiar to most people – the organ of convoluted gray matter, about the size of two fists held together – is actually an image of just the cerebrum, the outermost and topmost area of the brain. In humans, the cerebrum takes up about 80% of total brain volume, and is responsible for most higher-order cognitive functions. The thin outer layer of the cerebrum, only a few millimeters thick, is called the cerebral cortex or simply cortex for short, and it is this which has the distinctive wrinkled, folded and convoluted gray appearance (“cortex” is Latin for “bark”).
The cerebrum is divided into two hemispheres, the left and the right, which are basically symmetrical in structure, mirror images of each other. There is some specialization of function between the two; for example, in most people language is controlled entirely by the left hemisphere. However, to a large extent the two hemispheres do similar jobs. For example, each one receives sensory input from, and sends motor commands to, one side of the body.
Each hemisphere is divided into four main regions, called lobes: the frontal lobe, the temporal lobe, the occipital lobe, and the parietal lobe. Roughly speaking, the occipital lobes are located in the rear of the brain, the temporal lobes on the bottom, the parietal lobes on top of the brain, and the frontal lobes, as their name implies, toward the front, behind the forehead (Austin 1998, p. 150).
Each lobe performs a variety of functions. The temporal lobes, for example, play an important role in emotional response, memory, and hearing (and therefore language). A set of structures in the brain collectively called the limbic system, which is responsible for the former two functions, lies chiefly within the temporal lobes. The occipital lobes are concerned primarily with vision, though the parietal lobes also play a part in this; the parietal lobes also process information from other senses, especially touch. Finally, the frontal lobes seem to be responsible for many of the qualities we think of as distinctively human, including personality and what is called “executive” behavior: judgment, motivation, planning and goals, regulating and inhibiting actions, making decisions, controlling attention, and responding appropriately to external events and stimuli, among other things. The functions of the frontal lobes, and the changes that can be produced by damaging them, will be discussed in much more detail later in this essay.
With this basic framework in mind, we can examine more specific aspects of brain function. For example, a section of the brain called Broca’s area, usually within the left frontal lobe (in left-handed people it may sometimes be on the right side instead), controls the ability to produce speech. When this area is damaged by a stroke or other injury, the result is a condition called Broca’s aphasia, which leaves the victim able to understand speech but largely or entirely unable to produce it himself. A nearby section called Wernicke’s area, in the left temporal lobe, performs the opposite role: it gives us the ability to understand speech by storing the memories of how words sound. Damage to this area produces Wernicke’s aphasia, in which the sufferer cannot understand speech, either his own or others’, and speaks only in meaningless babble. (Since the victim has lost all memory of what words are supposed to sound like, he is usually unaware that there is anything wrong with him, and does not understand why he cannot be comprehended by others.) While this jargon often sounds very much like a language, and indeed is often mistaken for a foreign language by someone unfamiliar with the condition, it conveys no meaning (Heilman 2002, p. 4). There are evident implications for the sects that believe in glossolalia (“speaking in tongues”).
Other sections of the brain’s left hemisphere are essential to other aspects of communication. The structure known as the left angular gyrus contains the memories of how words are spelled, while the supramarginal gyrus converts speech sounds into letters (Heilman 2002, p. 49). Damage to these systems, both of which are within the parietal lobe, can result in the inability to read (alexia) or to write (agraphia). (Bizarrely, some people with specific types of damage to these regions can write but not read.) The left angular gyrus also seems to play a role in mathematical ability, since people who suffer damage to it sometimes become unable to do even the simplest calculations (Ramachandran 1998, p. 19). Damage to the entire language area of the left hemisphere produces a condition called global aphasia, in which the sufferer is completely unable to communicate; this syndrome will be discussed in more detail later on.
The ability to synthesize sensory input into a coherent picture of the world is associated with physical regions of the brain. Most critical of all our senses is vision, and the brain devotes more resources to visual perception than to any other sense. The occipital lobes receive input from the eyes; cell groups within them are specialized to process specific aspects of vision such as color, edge, shape and motion (Heilman 2002, p. 183, 184). Information from the visual system then splits into two streams: the superior parietal lobe’s “where” system, which helps us form spatial coordinates of objects and navigate in our environment, and the inferior occipital and ventral temporal lobes’ “what” system, which tells us what it is we are looking at (p. 100). Other regions, such as the right parietal lobe, control sensory perceptions of our own body; damage to this region produces a truly bizarre disorder called asomatognosia, in which the sufferer is unable to recognize his own body as belonging to him (p. 119). Electrical stimulation of the right angular gyrus, a substructure of the right parietal lobe, can cause out-of-body experiences (Blanke et al. 2002).
Still other brain regions control aspects of consciousness more fundamental than the ability to communicate or navigate. While the left hemisphere is typically responsible for understanding language per se, the right hemisphere mediates emotional aspects of communication, such as the tone of someone’s voice or the expression on their face. Damage to the parietal and temporal lobes of the right hemisphere can leave a person unable to comprehend emotional displays in others (Heilman 2002, p. 56).
Our emotions also arise from the functions of the brain. The left hemisphere seems to govern the expression of positive emotions such as happiness and joy, while the right hemisphere primarily governs negative ones such as anger and sadness. Those who suffer damage to their left hemisphere (leaving the more volatile right hemisphere “in charge”) often become severely depressed, but right hemisphere damage can leave a person emotionally indifferent, even constantly euphoric (Heilman 2002, p. 75-76). Electrical stimulation of one part of the brain, a part of the limbic system called the amygdala, can produce intense fear (Heilman 2002, p. 74), while stimulation of other regions can cause uncontrollable laughter and feelings of mirth (Ramachandran 1998, p. 201), and stimulation of yet a third region, the insula, can produce feelings of nausea and disgust (Glausiusz 2002, p. 33). Electrical stimulation of a fourth region, the septum, produces consistent sensations of pleasure, and frequently causes a sudden shift in mood from depression to optimism (Austin 1998, p. 170).
Memory, a fundamental aspect of consciousness, is strongly tied to brain function as well. A small brain region called the hippocampus, among other structures in the limbic system, is critical for forming new factual memories (Heilman 2002, p. 150); the effects its destruction has on a person are nothing short of profound.
The question now arises, where in all of this is the soul? Which brain lobe does it inhabit? Where is it hiding in this tangle of neurons and synapses?
In the seventeenth century, the philosopher René Descartes proposed that the soul interacted with the brain through the pineal gland, based on his observations that it is located near the center of the brain and is the only brain structure that is single, not paired. Unfortunately for Descartes, today we know the pineal gland is merely part of the endocrine system; its main function is to produce melatonin, a hormone that regulates sleep-wake cycles and influences the immune system, among other things (Heilman 2002, p. 3).
So where is the soul hiding? Area after area of the brain has yielded up its secrets to the probing of neuroscience, and not a trace of it has been found. The more our knowledge advances, the less reason we have to suppose that it exists, and the less sustainable the dualist position becomes. All the evidence we currently possess suggests that there is nothing inside our skulls that does not obey the ordinary laws of physics.
This is not to imply that there is nothing wondrous or amazing about the brain. On the contrary, it has been called, with some justification, the most complexly organized form of matter in the universe. The average human brain has over one hundred billion neurons, connected by hundreds of trillions of synapses. So immense is the complexity of this system, it has been calculated that the number of theoretically possible brain states exceeds the number of elementary particles in the known universe (Ramachandran 1998, p. 8). The brain’s raw computational power has been estimated to be between 10 trillion and 10 quadrillion operations per second (Merkle 1989). (By way of comparison, one of the fastest supercomputers in the world, the Earth Simulator in Yokohama, Japan, can perform 36 trillion calculations per second.)
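The scale of these figures is easier to appreciate side by side. The following is a minimal order-of-magnitude sketch: the operations-per-second estimates and the Earth Simulator figure are the ones quoted above, while the 500-trillion synapse count is my own illustrative midpoint for “hundreds of trillions”.

```python
# Order-of-magnitude comparison of the figures quoted above.
# The ops/second estimates and the Earth Simulator figure come from the
# essay's cited sources; the 500-trillion synapse count is an assumed
# midpoint for "hundreds of trillions", chosen only for illustration.

neurons = 100e9            # roughly one hundred billion neurons
synapses = 500e12          # assumed midpoint of "hundreds of trillions"
print(f"Average synapses per neuron: {synapses / neurons:,.0f}")  # 5,000

brain_ops_high = 10e15     # high estimate: 10 quadrillion ops/second
earth_simulator = 36e12    # Earth Simulator: 36 trillion calc/second
ratio = brain_ops_high / earth_simulator
print(f"Brain (high estimate) vs. Earth Simulator: ~{ratio:.0f}x")  # ~278x
```

Even on the low estimate of 10 trillion operations per second, a single brain was, at the time this essay was written, in the same league as the fastest machine humans had ever built.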
That our minds arise from the workings of our brains is nothing to be dismayed about. On the contrary, the fires of evolution have spent over four billion years forging the brain into an engine of staggering complexity and computational power, and have bequeathed it to us. We have been given a unique and priceless privilege, a gift unlike anything else in the known universe. To understand this heritage can only uplift us; those who assert that this organic marvel can accomplish nothing on its own, without the help of fading shadows of superstition, are only cheating themselves by replacing a greater wonder with a far lesser one.
But in the end, it is the evidence that must decide the question, and so it is to the evidence I shall turn. Part Two of this essay will set forth and defend the position that not only is there no evidence for the existence of the soul, but there is strong positive evidence against it, deploying an argument I have styled the argument from mind-brain unity. Part Three will discuss the neurological causes of religion, arguing that all religious experience can be fully and parsimoniously explained as the result of electrochemical activity within the brain. Part Four will lay out some additional arguments against the existence of the soul, and finally, Part Five will consider the greatest and most enduring mysteries of neuroscience – the source of sensory perceptions, of free will, of consciousness – and will show that these questions in no way offer any support to theism.
What does the soul do?
Remarkably, I have yet to find any theist source which explains this. Those which address the topic of the soul seem content to assume that everyone knows what it is and what functions it is responsible for.
However, although no theist source I have come across clearly explains the nature and function of the soul, it is fairly simple to deduce these properties based on what theists claim happens to one’s soul after death. In the belief structure of many religions, when a person’s body dies, their soul departs and goes to face God, where it is judged based on that person’s actions in life. If the person has been virtuous, the soul is admitted to Heaven for an eternity of reward; if the person has been wicked or sinful, their soul descends to Hell for an eternity of punishment. It is further asserted by these religions that throughout this process there is a continuity of consciousness; i.e., the soul will be self-aware, will feel that it is the same person it was when it was embodied, and will in a sense be the same person, so that the reward or punishment will be justified.
Given this, it is relatively easy to work out what the soul must do. If my soul is the part of me that thinks of itself as “I”, that makes me the person I am, and that bears responsibility for my actions during life, then it must be responsible for three things: identity, personality, and behavior. Identity is consciousness, self-awareness, my recognition of myself as a distinct and autonomous being that is continuous through time. Personality consists of the character traits that in combination make me a unique agent. Behavior is the sum total of the acts I perform, whether for good or for bad, during my life. To an extent, these three categories blend into each other; identity encompasses personality, and personality determines behavior – but if dualist theology is correct, it must ultimately be the soul, and not the brain, that is the source of all three of them.
However, as an atheist, I argue that dualist theology is not correct – that these three things are not separate from the brain, are not even linked to the brain, but are unified with the brain. The evidence shows that they are completely determined by the physical configuration of the brain, and that a change to this configuration can alter or eliminate any of them. In short, I will show that, as the materialist position predicts, every part of the mind is entirely dependent on and controlled by the brain. This is what I call the argument from mind-brain unity, and I feel it is one of the strongest arguments against many varieties of theism.
After all, if there is an immortal soul, why would it be subordinate to flawed biology? If there is a god who is fair and just, and who punishes or rewards us for our actions, he would not set things up so that these actions can be dictated or altered by brain chemistry, genes, or other factors over which we have no control. Unless he is an unjust tyrant, he would make our actions the result of the individual’s free choice. This is consistent with the idea of consciousness arising from a spiritual soul not subject to the weaknesses of the physical body. Unfortunately, both of these ideas are contradicted by the facts. The evidence is undeniable that our identity, our personality, and our behavior are unified with the brain, and can be dramatically influenced by causes beyond our control which affect the brain. Following are case studies that demonstrate this principle in all three areas.
(Note: Except in the case of Phineas Gage, discussed below, the sources which provided these cases have fictionalized the names and circumstances of the sufferers, as is standard practice to protect their anonymity. In all cases, I have followed the lead of these original sources in using these details. The clinical aspects of these cases, however, are all true.)
- Unity of Identity
- Unity of Personality
- Unity of Behavior
Though neuroscience offers far stranger disorders that support the argument from mind-brain unity, the first one this essay will examine is relatively well-known and straightforward: amnesia, the loss or disturbance of memory. The best-known type of amnesia is the inability to remember past events due to a blow on the head or some other brain trauma. This condition is known to neurologists as retrograde amnesia, and in most cases is transient, encompassing only the most recent memories and lasting only a brief period of time. However, the type this essay will deal with is less well-known and more severe in its repercussions: anterograde amnesia, the inability to form new memories.
Permanent anterograde amnesia – a total inability to form new memories, without impairing intellectual capacity in any other way – is also known as pure amnesia. (This condition was dramatized in the 2000 film Memento.) It is often the result of alcohol abuse (a condition called Korsakoff’s syndrome), but can have other causes as well. The most famous case on record is that of a man identified only by his initials, H.M., who lived in the mid-twentieth century. After a childhood head injury, H.M. began to suffer from severe epilepsy, with frequent seizures originating in the temporal lobes of his brain. The seizures did not respond to medication, and to cure him, surgeons resorted to removing the anterior portions of both his temporal lobes. This procedure did give him relief from the seizures, but it had an unintended side effect. The portions of his brain that were removed contained the two structures known as the hippocampi (singular, hippocampus – named after the seahorses they resemble in shape) that are now known to be critical for the formation of new memories.
As a result of the surgery, H.M. acquired a severe case of anterograde amnesia. Though his intelligence was unaffected and he retained most of the memories he had had before the surgery, he completely lost the ability to form new ones. As soon as he ceased to pay attention to something, he completely forgot that he had ever experienced it. He did not know what date it was, knew nothing of current events, and was unable to remember conversations a few minutes after they were over. Drs. Brenda Milner and Suzanne Corkin studied him for years, but he never recognized them or came to know them; they had to reintroduce themselves each time they met him (Heilman 2002, p. 149-150). (For a moving account of H.M.’s loss, see “The Day His World Stood Still“.)
Memory is such a natural and integral part of everyday functioning that it is difficult to imagine what an existence without it must be like. Likewise, it is almost impossible to overstate the terrible and tragic nature of this condition and the importance of what H.M. and people like him have lost. Another case study, however, may make the point clearer.
In 1985, a professional musician named Clive Wearing fell ill with a severe case of encephalitis – a viral infection that attacked his brain, producing inflammation and substantial brain damage. With the aid of modern medicine, he survived and made a recovery, but it soon became apparent that he had not survived unscathed. As in H.M.’s case, his hippocampi had been destroyed, leaving him with a permanent case of total anterograde amnesia.
Superficially, Wearing seems unchanged. His emotions are intact, as are his intellectual and rational faculties, and his musical abilities are unaffected. He still recognizes his wife and greets her with happiness and affection when he sees her, and he can still play the piano or harpsichord with all the skill he had before his illness. But something about him is deeply and fundamentally wrong. So dense is his amnesia that he can literally remember nothing from more than a few minutes before, and as a result, he continually believes that he has only just recovered consciousness. He fills his journal with pages and pages of the same entry, repeated endlessly: “Now I am completely awake, for the first time in years” (Time-Life 1991, p. 85). He does not recognize or remember making any earlier entries, denies being the author if asked, and rapidly becomes angry if it is pointed out that they are in his handwriting. Likewise, every time Wearing’s wife Deborah visits him, he immediately forgets the visit as soon as she leaves the room. When she returns, even if she has been gone for only a few minutes, he greets her with joy and affection, declaring that he has not seen her for months and asking how long he was unconscious (Baddeley 1990, p. 5).
Wearing’s brain damage has left him completely incapable of learning any new facts. When attempts are made to teach him anything, he easily becomes frustrated and angry, and of course within minutes has utterly forgotten the experience. He could (and does) read the same book or watch the same TV show over and over, and every time is equally surprised and delighted at the outcome. His bout with illness destroyed some of his past memories as well: he remembers the events of his life only in sketchy outline, and he no longer knows who the Queen of England is, or who wrote the play Romeo and Juliet (ibid.).
Wearing, of course, is completely helpless in everyday life and requires constant care. But there is another implication of his condition that is, depending on how one views it, either a small mercy or the cruelest irony of all. That implication is this: Clive Wearing is not aware, and cannot be made aware, that there is anything wrong with him. He cannot learn the nature of his condition any more than he can learn any other new information. If he were to be told what had happened to him, he would undoubtedly experience all the sensations of shock and dismay any average person would, and then would completely forget them moments later. Without memory, he is trapped in an endless, timeless present, with no past and no future. Barring some radical advance that would make it possible to repair his damaged brain, he will be this way until he dies.
Though Clive Wearing’s ability to form new memories is irreparably destroyed, he does remember, at least in broad outline, the events of his life. Not so for another patient with an even more severe amnesia, a patient studied by Antonio Damasio and colleagues. This patient, who suffered damage to both his hippocampus and his temporal lobes (thought to be important for storing memories), has total anterograde and near-total retrograde amnesia: he cannot form new memories or recall old ones. He is trapped in a permanent present, a void of consciousness without memory.
“Indeed, he has no sense of time at all. He cannot tell us the date, and when asked to guess, his responses are wild – as disparate [as] 1942 and 2013…. This patient cannot state his age, either. He can guess, but the guess tends to be wrong. Two of the few specific things he knows for certain are that he was married and that he is the father of two children. But when did he get married? He cannot say. When were the children born? He does not know. He cannot place himself in the time line of his family life.” (Damasio 2002, p. 69-71)
As Dr. Damasio tells us, the patient’s wife divorced him over 20 years ago, and his children are long since grown up and married. Does this man still have a soul? In what sense is he conscious? He is adrift in a world of darkness, a blank void with neither past nor future, merely an ever-moving present that continually fades from sight.
One more case study will drive home the point of how devastating this condition is, how utterly it deprives a person of some fundamental aspect of their humanity.
The hippocampus is not the only brain structure that seems to be vital for laying down new memories. It is part of a circuit in the brain involving several distinct regions, all of which seem to be equally important for that task. One such region, which connects directly to the hippocampus, is called the fornix, and Dr. Kenneth Heilman tells us of a patient named Flora Pape whose left and right fornices both had to be excised to save her from a life-threatening brain tumor. Mrs. Pape had lived in eastern Kentucky all of her life, until she and her husband moved to Jacksonville, Florida, two years before her surgery. At the time of her surgery, she had two sons in their 20s, both of whom still lived in Kentucky.
When she was discharged from the hospital, her husband drove her from Gainesville to their home in Jacksonville. After leaving Gainesville, her husband noticed that she was looking out the window and saying, “Oh, my!” He asked what was troubling her and she said, “What happened to the mountains?”
He asked, “What mountains?”
She replied, “You know, the mountains.”
He said, “There are no mountains here.”
She replied, “No mountains in Kentucky. We must be in the western part of the state. What are we doing here?”
Mr. Pape had been told by [the doctor] that the surgery might make her memory worse, but he was still surprised. “Dear, we are not in Kentucky. We are in Florida.”
She asked, “Why are we in Florida?”
He told her that they had moved to Jacksonville about 2 years earlier. She said, “Moved to Jacksonville? Why?” He told her that the company had asked him to transfer. She asked, “Where are we going now?”
“Back to Jacksonville from Gainesville. You had some surgery on your brain. It was a tumor. The doctors think they got it all out. You are having some memory problems, but the surgeons hope it will improve with time.”
Then she asked, “Who is watching the boys?”
“No one,” he replied. “They are grown and live in Kentucky.”
“What do you mean, grown? They are still teenagers.”
“No, they are not. They are in their twenties. They are coming down this weekend to see you.”
She stopped asking questions for a few minutes and looked out of the car window. Then she turned to her husband and asked, “Where are all the mountains?” (Heilman 2002, p. 151-152)
Like H.M., Clive Wearing and Dr. Damasio’s patient, Mrs. Pape’s memory disorder seems to be permanent, and no treatment known to medical science can cure it. The question must now be asked: According to dualist beliefs, what has happened to these people? Where are their souls?
If any of them were not religious before the onset of their conditions (I was unable to find information on whether they were), they never will be now. Any proselytizer who tries to convert them has, at most, a few minutes to introduce himself, make the person’s acquaintance, earn their trust, explain the tenets of the religion he is offering, and convince them to accept it. After that, they will forget and he will have to start all over. And if the religion requires any type of repeated behavior or ritual, that is out of the question – a few minutes after their conversion, they will have completely forgotten that it ever happened. Will God condemn them for this? Assuming these people were not religious, are they now doomed to Hell because their souls are trapped in an endless loop of brain chemistry?
More to the point, how is this condition compatible with a thing such as the soul in the first place? As one researcher has put it, “the memories of a person define the self” (Persinger 1987, p. 53). Without memory, a person’s identity is irrevocably altered. The effects of this condition are consistent with the materialist prediction that the mind is unified with the brain, but seem considerably more difficult to reconcile with dualism.
Our consciousness is normally continuous in two respects: it is continuous in space (there exists exactly one consciousness in each body) and in time (each body has exactly one consciousness per lifetime). This is as we would expect if the soul existed. After all, we could not fairly be judged for the actions of our body if we were only one of many presences inhabiting it and struggling to control it, nor could an elderly person be fairly held responsible for the sins of their youth, or vice versa, if the consciousness we possess throughout our lifetime is not the “same” consciousness at each point within our lifetime.
However, the condition called pure amnesia proves that it is possible for brain damage to create a consciousness that is not continuous in time. What about a consciousness that is not continuous in space? Can a brain disorder produce multiple consciousnesses within a single body?
For the purposes of this essay, multiple personality disorder (or dissociative identity disorder, as the American Psychiatric Association calls it) will not be considered. It is still a matter of considerable controversy whether this disorder actually even exists (see Piper 1998) and even if it does, it may be purely psychological in nature (Carroll 2002). For the argument from mind-brain unity, only definite conditions caused by physical neurological damage will be reviewed.
As it happens, there is such a condition – one that is not quite as well-known as multiple personality disorder, but that is even more revealing about the way our brain, and therefore our consciousness, is organized. This syndrome is generally known as callosal disconnection.
The human brain is divided into two hemispheres, the left and the right. These hemispheres are mirror images of each other, and perform many of the same functions. For example, each hemisphere receives sensory input from, and controls movements of, one side of the body. However, there is also some specialization. For example, in most people, language is controlled entirely by the left hemisphere. So that they can exchange information with each other, the two hemispheres are connected by a bundle of nerve fibers called the corpus callosum.
However, in some people, the corpus callosum is damaged or severed. Sometimes this occurs accidentally, as the result of brain injuries such as stroke; sometimes it is done deliberately, as the result of a surgical procedure. The most common reason for such a procedure is to treat severe epilepsy: cutting the corpus callosum prevents seizures – storms of uncoordinated neural activity – that begin on one side of the brain from spreading to the other, and so affords sufferers some relief.
However, doing so has a strange side effect that provides insight into the nature of consciousness. As previously stated, each hemisphere receives sensory input from one side of the body only. (Due to a quirk of evolution, our brains have their wires crossed – the left hemisphere controls the right side of the body, and vice versa.) However, also as previously stated, only the left hemisphere controls language. Therefore, when we perceive something on the left side of the body, that sensory information normally travels to the right hemisphere and then through the corpus callosum to the left, which can verbalize and describe what was perceived. But what happens if that connection is severed?
Studies have repeatedly found that, if a patient with callosal disconnection is blindfolded and has an object put into their left hand, they will not be able to name or describe it (Heilman 2002, p. 128). The sensory information received by the right hemisphere cannot be transferred to the language systems of the left. However, since the right hemisphere controls movements of the left side of the body, including the left hand, the person will be able to use that hand to draw the object, or select it from among a group of similar objects, if asked to do so (Newberg and D’Aquili 2001, p. 23; Feinberg 2001, p. 92) – even while remaining unable to explain what they are doing or why. But there are even more important symptoms of callosal disconnection.
One thing the right hemisphere controls is certain types of emotion. If an image with strong emotional associations is projected only to the left visual field, so the visual signal can only travel to the right hemisphere, a person with callosal disconnection will experience the appropriate emotional response. But if asked to explain why they are feeling that emotion, the person will not stand mute. Instead, surprisingly, they will give a reason that is logical but completely unrelated to the true cause. As Andrew Newberg and Eugene D’Aquili write,
“A split-brain patient shown a photograph of Hitler only in the right hemisphere, for example, might exhibit facial expressions indicating anger or disgust. But when asked to explain those emotions, the patient will often invent an answer, such as ‘I was thinking about a time when someone made me angry.'” (Newberg and D’Aquili 2001, p. 23)
Kenneth Heilman offers another, more concrete example, writing about the research of Dr. Michael Gazzaniga and his colleagues. In one experiment, they showed sexually suggestive pictures to a woman with callosal disconnection, flashing them only on the left half of a screen so only her right hemisphere could perceive them. The woman giggled and blushed, but when asked why she was doing so, she replied that she was thinking of something embarrassing (Heilman 2002, p. 129).
Are these people lying? In one sense of the word, perhaps; but it seems clear that there is no conscious intent to deceive. Rather, researchers have concluded, what is happening is that the right hemisphere, upon seeing an image with strong emotional connotations, generates the appropriate response. However, due to the callosal disconnection, it cannot transmit the associated sensory data to the left hemisphere and its language centers. The left hemisphere perceives a change in the body’s state, but does not know why – and so it “fills in” the missing details, fabricating a logical reason for the emotional reaction. This happens at a subconscious level, so that the person genuinely believes the verbal explanation they provide. In the language of psychology, this filling-in process of unconscious invention is called confabulation.
But there is a significant implication to be drawn from these experiments. Clearly, these people’s right hemispheres are aware of their environments, since they can generate the appropriate emotional response to a stimulus. But just as clearly, their verbal left hemispheres do not know some things that their right hemispheres do know. In short, these people’s callosal disconnections have produced two separate consciousnesses – two distinct spheres of awareness – within their minds.
There is an even more startling manifestation of callosal disconnection that supports this conclusion. Although the right hemisphere has no access to the language centers and therefore cannot speak, it can spell words by arranging block letters by touch. In one study of split-brain patients, a subject was asked what his ideal profession was. Verbally (i.e., using the left hemisphere), the patient responded that he would like to be a draftsman. However, with his left hand (i.e., using the right hemisphere), he spelled the words “automobile race” (Hock 2002, p. 8).
As Andrew Newberg and Eugene D’Aquili say of results such as this:
“Research shows that in such split-brain cases, the brain generates what seems to be two separate consciousnesses. Research on split-brain patients led brain scientist and Nobel laureate Roger Sperry to conclude, ‘Everything we have seen indicates that the surgery has left these people with two separate minds, that is, two separate spheres of consciousness. What is experienced in the right hemisphere seems to lie entirely outside the realm of the left hemisphere.'” (Newberg and D’Aquili 2001, p. 22-23)
An atheist is entitled to ask how this evidence can be reconciled with dualism. Under materialism, the result is exactly what one would expect: sever the physical channel between the hemispheres, and information stops flowing between them. But under the soul hypothesis, the fact that the two halves of the brain have access to different information, and may even hold differing opinions, is much more difficult to explain. Presumably, the soul does not have its own internal information-sharing pathways that can be damaged or disconnected. If one part of the soul knows what is happening, all of it should know. If one part of the soul believes a certain thing, all of the soul should believe it. Descartes himself wrote that the soul was by nature indivisible, that it would not make sense to speak of “half a soul” (Feinberg 2001, p. 107). But the facts show that this is not the case.
The preceding two conditions both strike at the common-sense notion that each human being possesses a single, unified identity. Pure amnesia obliterates our sense of ourselves as continuous through time, chopping a person up into numerous evanescent selves, and the effects of callosal disconnection hint that there are multiple spheres of awareness lurking within our minds, which we usually do not notice because they normally communicate seamlessly with each other. Some experiments even seem to show that these spheres of awareness can have different desires from each other. However, there is another syndrome which demonstrates in a truly bizarre fashion that not only do these separate spheres of awareness exist, but that the divide runs far deeper than mere sensory perception. These discrete spheres within our brain can have different emotions and different thoughts – as is proven by the extraordinary condition called alien hand syndrome.
In Stanley Kubrick’s classic black comedy Dr. Strangelove, the title character is afflicted with a bizarre disorder – one of his hands will not obey him. It attempts to make Nazi salutes at inappropriate times, even tries to strangle him on occasion, and he is often forced to use his other hand to restrain it. It has been said that truth is sometimes stranger than fiction, but in this case, truth is every bit as strange as fiction, because Dr. Strangelove’s malady really does exist.
“More than fifty years ago a middle-aged woman walked into the clinic of Kurt Goldstein, a world-renowned neurologist with keen diagnostic skills. The woman appeared normal and conversed fluently; indeed, nothing was obviously wrong with her. But she had one extraordinary complaint – every now and then her left hand would fly up to her throat and try to strangle her. She often had to use her right hand to wrestle the left hand under control…. She sometimes even had to sit on the murderous hand, so intent was it on trying to end her life.” (Ramachandran 1998, p. 12)
The obvious explanation was that she was mentally disturbed and doing this to herself, and indeed that was the diagnosis of several physicians who had previously examined her. But Dr. Goldstein found no signs of hysteria or other mental disorders – it genuinely seemed as if her left hand had a will of its own – and so he proposed a radically different explanation. He theorized that the woman’s right hemisphere (which controls the left side of the body, including the left hand) had “latent suicidal tendencies” (ibid.). In a normal person, the more rational left hemisphere would inhibit these and prevent them from being translated into action; but if this woman had suffered damage to her corpus callosum, these inhibitory messages could no longer be transmitted to the other half of her brain, and the right hemisphere would attempt to act on its irrational self-destructive urges.
Shortly after visiting Dr. Goldstein, the woman died (no, not from strangling herself). An autopsy confirmed the doctor’s suspicions: she had suffered a stroke that had damaged her corpus callosum and severed the connection between the hemispheres, removing the brake her left hemisphere had put on the actions of her right.
Today, additional data has backed up Dr. Goldstein’s explanation. It is now known that the right hemisphere is mainly responsible for producing and mediating negative emotions, such as anger and sadness; patients with right hemisphere damage often lose the ability to feel these emotions and become inappropriately cheerful and euphoric (this will be discussed in more detail below). Startling as it may seem, the woman’s callosal disconnection had revealed that there were two separate spheres of consciousness within her mind that felt and desired completely different things.
One more question arises: If half this woman’s brain had become suicidal, who or what was left over? What was the part of her that did not want to commit suicide and fought off the impulses of her “possessed” hand?
The answer, of course, was her rational left hemisphere, disconnected from the right and so unaffected by the negative emotions it was churning out. Since the left hemisphere controls language, it – and she – was able to express shock and dismay over the irrational behavior of the other side of her body. But what this implies for normal people is that the part we think of as “ourselves” is only the left hemisphere: the part that creates a narrative to explain our actions and communicates with the rest of the world. All the while, there is another, separate consciousness dwelling within our heads – the silent right hemisphere. Unable to control language, it cannot make its presence known directly, and in any case it usually communicates with the left so seamlessly that we do not perceive it as a separate entity. But when callosal damage brings this mute, watching presence to the surface, the results can be astonishing.
Other cases of alien hand syndrome support this explanation. While this syndrome can happen in either hand, damage to the corpus callosum produces almost exclusively left alien hands. (Damage to the frontal lobes of the brain, which will be discussed in more detail later, can produce either a left or a right alien hand.) Furthermore, exactly as one would expect if AHS results from a disinhibition of the more emotionally volatile right hemisphere, alien hands are rarely helpful or pleasant. Instead, most of them perform actions ranging from the merely mischievous to the outright aggressive to the downright frightening. Frequently they do the opposite of what the consciously controlled hand intends. There are cases on record of alien hands that answer the phone and then refuse to surrender the receiver, that spill out drinks, that violently hurl objects at random. Sometimes a patient may open a drawer with his good hand only to have his alien hand close it; sometimes he may try to button up a shirt with one hand while the alien hand follows behind undoing the buttons. In one case on record, the alien hand attempted to tear up money (Feinberg 2001, p. 94-97).
Most disturbing of all, some alien hands are genuinely violent. Strangling actions, as described above, do occur. In another case on record, Dr. Michael Gazzaniga describes a patient whose left alien hand grabbed his wife and shook her violently, while his right hand tried to assist her in bringing the left under control. On another occasion, the doctor was visiting the same patient, playing horseshoes with him in his backyard, when the patient’s left hand reached out and picked up an ax leaning against the side of the house.
“Because it was entirely likely that the more aggressive right hemisphere might be in control, I discreetly left the scene – not wanting to be the victim for the test case of which half-brain does society punish or execute” (quoted in Feinberg 2001, p. 98).
As the good doctor astutely noted, there would be a real problem of who was responsible if his patient’s alien hand had followed through on its seemingly ominous intentions. But that problem would not be limited to merely mortal agents of justice. How would God judge such a case?
The dualist must answer the question of how all of this is compatible with the existence of the soul. Do our souls reside only in our left hemispheres? Then who or what lives in the right? Or would dualists claim that the single, unified soul can somehow become fractured, split into two distinct consciousnesses, by damage to the physical brain?
Imagine that you are a doctor, making your rounds in the neurology wing of a hospital. You enter one of the rooms to check up on an elderly patient who recently suffered a severe ischemic stroke in the right hemisphere of his brain. Though he survived with the help of clot-busting drugs, it was too late to prevent damage from being done. The motor centers of his right hemisphere have been destroyed, and the patient is entirely paralyzed on the left half of his body. He will never stand or walk again, and will be confined to a bed or a wheelchair for the rest of his life. But thankfully, he seems to be in good spirits. He is taking the news extraordinarily well – perhaps almost too well – reacting to it with an incongruous lightness.
You greet the patient and wish him a good morning. “How are you feeling?” you ask.
“Fine,” he says cheerfully.
“Do you know why you’re in the hospital?” you ask.
The patient admits he had a stroke. “That’s what the doctors told me, anyway. They did all their scans and X-rays. I guess I don’t have any reason to doubt them.”
“But you’re feeling fine now?”
“Yes, fine,” he agrees.
Something is not quite right here; a suspicion is beginning to coalesce in your mind. It may upset this man, but you have to know, so you ask. “Can you walk?”
“Of course I can,” the patient says with a tone of mild petulance, as if he isn’t sure why you’re asking such a silly question.
“And your hands? Can you use them? Are they both equally strong?”
“Yes, of course they are,” he says nonchalantly. This man has not moved his left hand or stood up since his stroke; he has been in a bed or wheelchair since he arrived at the hospital.
Though that terrible suspicion is essentially confirmed, you decide to push things just a bit further. “Can you touch your nose with your right hand for me?” you ask.
He agrees and does so with no trouble.
“What about your left hand?” you then ask. “Can you touch your nose with your left hand?”
“Sure I can.” The patient’s paralyzed left hand does not move.
“Are you touching it now?”
“Yes, of course I am.” His hand still has not moved.
“Can you actually see yourself touching your nose with your left hand?” you ask.
“Of course I can,” he says in irritation. “It’s right in front of my face.”
You decide to ask just one more question. “Can you clap your hands for me, please?”
The patient looks at you in some puzzlement, but resignedly lifts his right hand and waves it in front of him, as if clapping it against an imaginary left hand. His real left hand lies where it is, completely paralyzed. (adapted from Ramachandran 1998, p. 128-129, where a virtually identical conversation occurs)*
Few neurological disorders force us to confront the fragility of the sense of self more thoroughly than the condition called anosognosia. The Greek word basically means “unawareness of illness”, and that is exactly what this syndrome is: a person who has suffered some severe, disabling injury yet remains steadfastly unaware – and vehemently denies if asked – that there is anything wrong with them. True anosognosia is not simple confusion; the patient literally cannot be convinced of the reality of their condition (Feinberg 2001, p. 21). Though anosognosia is most commonly associated with partial paralysis after a stroke, such as in the example above, it occurs in other conditions as well. Some sufferers of malignant brain tumors and other fatal conditions will steadfastly handwave away their doctor’s diagnosis, insisting that they feel fine (Ramachandran 1998, p. 143). There is even an example on record of a patient who was unaware that he was blind (Heilman 2002, p. 133).
What causes anosognosia? Advocates of dualism and others may contend that it is purely psychological, a Freudian defense mechanism employed by people facing a truth too terrible to accept. But other facts weigh against this explanation.
First, stroke victims with anosognosia may deny their paralysis, but they usually freely admit to other things wrong with them. As in the example above, they almost never deny that they did in fact have a stroke. Dr. Vilayanur Ramachandran tells of one paralyzed patient in denial to whom he offered candy if she could tie her shoelaces, only to have her chastise him: “You know I’m diabetic, doctor. I can’t eat candy!” (Ramachandran 1998, p. 142)
But much more important, and much more destructive to the Freudian theory, is that denial is almost exclusively associated with paralysis of the left side – in other words, with damage to the right hemisphere (Feinberg 2001, p. 51). Patients with left-hemisphere damage and right-side paralysis almost never experience denial, despite the fact that they would presumably have just as much psychological need for it. This strong association between denial and damage to a specific region of the brain suggests that this region performs some function critical to maintaining and updating the mental image of one’s own body.
However, there is a third, very powerful piece of evidence – one that decisively rules out both the Freudian theory and dualism as well, and shows clearly how this bizarre disorder, as well as the sense of self in general, arises from and is inextricably unified with the functioning of our physical brains.
In 1987, the Italian neurologist Edoardo Bisiach, while performing tests on a patient with denial, squirted cold water into her left ear – a standard test of the function of the nerves that control balance. But the experiment had an astonishing side effect: shortly after the test, when asked if she was paralyzed, the patient calmly replied that she had no use of her left arm! The mere injection of cold water into her ear had effected a complete, though temporary, cure of her denial.
Dr. Ramachandran performed this experiment on another patient with the same condition and got the same astonishing result. He injected cold water into the ears of a patient with left-side paralysis who steadfastly denied her paralysis and insisted that both her arms were equally strong. Irrigating the right ear canal had no effect whatsoever; she continued to insist that she was fine. But when he tried irrigating her left ear canal instead:
After [injecting the water], I asked again, “How are you feeling?”
“My ear’s cold.”
“What about your arms? Can you use your arms?”
“No,” she replied, “my left arm is paralyzed.”
That was the first time she had used that word in the three weeks since her stroke.
“Mrs. Macken, how long have you been paralyzed?”
She said, “Oh, continuously, all these days.”
(Ramachandran 1998, p. 145-146)
Twelve hours later, one of his students repeated the questioning:
“Do you remember Dr. Ramachandran?”
“Oh, yes, he was that Indian doctor.”
“And what did he do?”
“He took some ice-cold water and he put it into my left ear and it hurt.”
…”What did he ask you?”
“He asked me if I could use both my arms.”
“And what did you tell him?”
“I told him I was fine.”
But this puzzle becomes even more complex. Sometimes denial is not permanent, but resolves itself spontaneously as the patient recovers. But what happens when such a patient is asked about their earlier denials? Dr. Ramachandran quizzed one:
“Do you remember I asked you about your arms? What did you say?”
“I told you my left arm was paralyzed.”
“Do you remember I saw you several times? What did you say each time?”
“Several times, several times – yes, I said the same thing, that I was paralyzed.”
(Actually she had told me each time that her arm was fine.)
“…Think clearly. Do you remember telling me that your left arm was fine, that it wasn’t paralyzed?”
“Well, doctor, if I said that, then it implies that I was lying. And I am not a liar.”
At this point it should be plain that this disorder goes far beyond any psychological defense mechanism; it even goes beyond a simple failure of the brain to update one’s own body image. When a denial patient goes into remission, it seems as if they completely “rewrite their script” to make their past behavior compatible with their current knowledge of their paralysis. When the denial returns, their internal script is again rewritten in light of their renewed belief. It is as if, Dr. Ramachandran writes, “we had created two separate conscious human beings who were mutually amnesic”, a “partial insulation of one personality from the other… even though they occupy a single body” (p. 146-147, emphasis added).
So how can this strange condition – the division of a person into two selves, one aware of what has happened to them and one deluded, and each unaware of the other – be explained? Freudian theories, as already shown, will not suffice. Nor can theistic dualism hope to adequately explain this. If the soul stores our memories (a crucial component of identity and a requirement for continuity of consciousness – after all, aren’t we supposed to remember our earthly lives when we get to the afterlife?), how can something so mundane as a squirt of cold water in the ear radically alter those memories? Even if some body defect prevents the information about the paralysis from reaching the soul – which is very unlikely, since these people can plainly see their limbs not moving, if nothing else – once it has reached the soul, how can it be so quickly lost again, as soon as the cold-water treatment wears off? On the contrary, only through materialism and the recognition that our mind is unified with, and at the mercy of flaws in, the physical brain can this phenomenon be satisfactorily explained.
And a non-dualistic theory of the workings of the human mind can offer just such an explanation. Observing that anosognosia almost exclusively results from right-hemisphere damage, Dr. Ramachandran has proposed that the two halves of our brain serve two different roles in regard to our worldviews. His hypothesis is that the left hemisphere’s job is to create a coherent perspective: to sort through the data it constantly receives from the senses and integrate it all into a consistent worldview. But when inconsistent data arrives – information that conflicts with what we already know or believe – it must be handled. One option, of course, is to completely tear down the existing belief structure and start over again; but if we did this for every minor discrepancy we encountered, it would be impossible to function in the world. Therefore, the left hemisphere functions as a preserver of the status quo, defending a person’s belief system by discounting contradictory evidence or force-fitting it into the existing framework. The right hemisphere, by contrast, is hypothesized to be a “devil’s advocate,” searching for major inconsistencies and problems with the status quo and forcing a reevaluation of preexisting beliefs if enough inconsistencies turn up. (One possible explanation for why the cold-water treatment works is that it may stimulate nerves leading into the right hemisphere.)
While it may be an oversimplification, this hypothesis is not without support. In a simple experiment using mirrors that created a discrepancy between what a test subject felt his arm doing and what he saw it doing, a region of the right hemisphere near the right parietal lobe lit up in a brain scan – regardless of whether the discrepancy occurred with the left or the right hand (p. 142).
* This is a more severe form of denial. Not all people with the syndrome create such blatant confabulations; more commonly, they will give excuses or rationalizations for why they’re not walking or why their paralyzed arm isn’t moving – they may say the limb is “lazy” or “a little tired” (Feinberg 2001, p. 21), or “It hurts to move that arm,” or “The doctors have been testing me all day and I’m tired of it, so I don’t want to move it” (Ramachandran 1998, p. 129-130). At the opposite, most extreme end of the spectrum, some people with denial actually claim that the paralyzed limb connected to their body belongs to someone else, a condition called somatoparaphrenia (p. 131). (Such people might say something like, “It’s my brother’s arm.”) In all cases, however, these people will go to any lengths, rationalize away any contrary evidence or clutch at any explanation, no matter how strained, rather than admit the obvious. The potential relevance to theistic belief is interesting. Could it be that some fundamentalists and cult members have underdeveloped right hemispheres?
The disorder called Capgras’ syndrome is “one of the rarest and most colorful syndromes in neurology” (Ramachandran 1998, p. 161). The sufferer, who is in every other respect perfectly rational and lucid, suddenly begins to insist that a close friend or loved one – a parent, a spouse, even a pet in some cases – has been replaced by an impostor who looks exactly like the missing individual. While this condition can occur spontaneously in those with schizophrenia or dementing illnesses such as Alzheimer’s (Feinberg 2001, p. 33), it is often (about one-third of documented cases) found in people who have survived some sort of traumatic head injury.
How can this condition be explained? The answer lies in the limbic system, a collection of structures deep within the brain that is responsible for emotional activation. When information from the eyes arrives in the brain, it is transmitted to the object recognition pathway of the temporal lobes to determine what a person is looking at – a face, a house, an animal – and that information is in turn relayed to the amygdala, the gateway to the limbic system, to determine the emotional significance of the object. If the object is the face of a loved one, the limbic system generates the appropriate emotional “glow” to let us know that it is indeed that person.
But what if brain damage disconnects the pathway between the visual system and the amygdala? In that case, a person would still be able to recognize faces, but would not experience the emotions usually associated with them. In essence, the brain says to itself something like, “If this is my mother, why doesn’t her presence make me feel like I’m with my mother?” (Ramachandran 1998, p. 162) – and the only way it can make sense of this discrepancy is to produce the Capgras delusion, assuming that the person merely resembles someone important to the viewer.
It can be disconcerting to realize that emotions play such an important role in the process of judgment. One might well ask, why does this disconnection produce such a severe delusion? Even if the sufferer can’t emotionally feel close to people, can’t they still recognize them, at least intellectually? And under the doctrine of the soul, an immaterial rational consciousness not subject to the flaws of the brain, one might well expect this to be the case. But materialism lays to rest the common-sense notion of a “homunculus” – a little person living inside the brain, receiving its inputs and directing its actions like an air-traffic controller. We are our brains, and their defects are defects in our minds.
But what does Capgras’ syndrome have to do with a flaw in the sense of identity? As it turns out, this condition has some other, far more surprising side effects.
The emotional “glow” produced by the limbic system does more than provide moment-by-moment recognition of significant faces. It turns out that this glow is a crucial component of forming and associating long-term memories.
For example, suppose you go to the grocery store one day and a friend introduces you to a new person – Joe. You form a memory of that episode and tuck it away in your brain. Two weeks go by and you run into Joe in the library. He tells you a story about your mutual friend, you share a laugh and your brain files a memory about this second episode. Another few weeks pass and you meet Joe again in his office – he’s a medical researcher and he’s wearing a white lab coat – but you recognize him instantly from earlier encounters. More memories of Joe are created during this time so that you now have in your mind a “category” called Joe. This mental picture becomes progressively refined and enriched each time you meet Joe, aided by an increasing sense of familiarity that creates an incentive to link the images and the episodes. Eventually you develop a robust concept of Joe – he tells great stories, works in a lab, makes you laugh, knows a lot about gardening, and so forth. (Ramachandran 1998, p.169)
But how does the brain link these disparate episodes together, recognizing that Joe is the same person each time? As an analogy, suppose that the brain’s memory functions like a computer’s file management system. Every time you encounter someone, a new “file” of memory is created. How does the brain know that these new files belong in the same previously created “folder”, together with earlier memories of the same person?
The answer lies in the limbic system. The emotional glow it generates apparently functions as a sort of thread running through these disparate memories and tying them together, letting the brain know that they belong together in the same “folder”. But what if this glow is absent? The brain would have no way of associating new memories with older ones, and upon meeting a person, rather than adding your memories of the encounter to the already existing “folder”, an entirely new one would be created. This would lead to the Capgras patient’s insistence that the people they have met are not their loved ones, but different people who merely resemble them.
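The “file and folder” analogy can be made concrete with a toy sketch (purely illustrative – the class, method, and flag names here are my own inventions, not anything from the neurological literature). It models the limbic “glow” as a simple flag: when the flag is present, a new episode is filed into the existing category for that face; when it is absent, the brain has no thread connecting the new episode to the old ones, and a fresh “duplicate” category is created instead – the Capgras patient’s impostor.

```python
# Toy model of the memory "folder" analogy. The glow_intact flag stands in
# for the emotional signal from the limbic system; nothing here is meant as
# an actual model of neural memory.

class MemoryStore:
    def __init__(self):
        self.folders = {}  # category label -> list of remembered episodes

    def file_episode(self, face, episode, glow_intact=True):
        """File a new episode of meeting a person.

        With the emotional glow intact, the episode is linked into the
        existing folder for that face. Without it, the new memory cannot be
        tied to the old ones, so a separate "duplicate" folder is created.
        """
        if glow_intact and face in self.folders:
            self.folders[face].append(episode)
            return face
        # No emotional thread: start a brand-new category for this face.
        label = face if face not in self.folders else f"{face} (duplicate)"
        self.folders[label] = [episode]
        return label

store = MemoryStore()
store.file_episode("Joe", "met at the grocery store")
store.file_episode("Joe", "shared a laugh in the library")  # linked to Joe
label = store.file_episode("Joe", "visit after the injury", glow_intact=False)
# Without the glow, the last episode lands in a new folder, not Joe's.
```

In this sketch, repeated encounters with the glow intact enrich one category, exactly as in the grocery-store example above; switching the flag off for a single encounter yields a second, disconnected category for the same face.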
And now, the question must be asked: Do you count as a “loved one” to yourself? Does the image of your own face evoke emotional significance within your brain?
Dr. Ramachandran found out the answer to this question unexpectedly while examining a Capgras patient named Arthur:
“I was showing Arthur pictures of himself from a family photo album and I pointed to a snapshot of him taken two years before the accident [that resulted in him acquiring Capgras’ syndrome].
‘Whose picture is this?’ I asked.
‘That’s another Arthur,’ he replied. ‘He looks just like me but it isn’t me.’ I couldn’t believe my ears. Arthur may have detected my surprise since he then reinforced his point by saying, ‘You see? He has a mustache. I don’t.'” (p. 172)
This delusion did not occur when Arthur looked in a mirror; despite his condition, he seemed to recognize that the face in the mirror could not be anyone’s but his own. But the tendency to “duplicate” himself did appear on other occasions. At one point, he said, “Yes, my parents sent a check, but they sent it to the other Arthur,” and one day he even turned to his mother and asked, “Mom, if the real Arthur ever returns, do you promise that you will still treat me as a friend and love me?” (ibid.)
No dualistic explanation of Capgras’ syndrome can hope to account for this. The condition of Arthur and others like him shows us that the sense of self and identity is unified with the brain, and can be fractured by damage to the brain. To let Dr. Ramachandran sum up:
“How can a sane human being who is perfectly intelligent in other respects come to regard himself as two people? There seems to be something inherently contradictory about splitting the Self, which is by its very nature unitary. If I started to regard myself as several people, which one would I plan for? Which one is the ‘real’ me?
…Philosophers have argued for centuries that if there is any one thing about our existence that is completely beyond question, it is the simple fact that ‘I’ exist as a single human being who endures in space and time. But even this basic axiomatic foundation of human existence is called into question by Arthur.” (p. 172-173)
The strange story of Phineas Gage is one of the classic cases of neurology, and one of the first that led scientists to suspect that there might be regions of the brain specifically devoted to personality and reasoning. The terrible accident this man suffered, while tragic, also served to cast a gleam of light on the inner workings of the mind and reveal how fragile the neurological construct called the self is in all of us.
It was the summer of 1848, in New England, and the Rutland and Burlington Railroad Company was building new tracks for its trains. The proposed path ran over uneven ground, and outcrops of stone had to be blasted to clear a way for the rails to be laid.
The construction crew supervising the blasting was led by one Phineas Gage, a 25-year-old man whose employers described him as “the most efficient and capable foreman in their employ” (Macmillan 2000, p. 65). He was further said to have “temperate habits” and “considerable energy of character”, and “was looked upon by those who knew him as a shrewd, smart businessman, very energetic and persistent in executing all his plans of action” (ibid.). In short, a better person to lead the construction crew could not have been found.
In order to blast, the crew would drill a narrow shaft in the rock, fill it halfway with explosive powder, insert the fuse, and then fill the hole the rest of the way with sand, which would direct the explosion inward, toward the rock it was intended to destroy. Once the sand was added, it had to be tamped down with an iron bar; then the fuse was lit and the blast went off. Gage and his crew were performing just such a procedure when the fateful mistake occurred.
For one particular shaft, the blasting powder had been poured and the fuse set, but the sand had not yet been added. Before it could be, Gage became distracted and unthinkingly tamped down the explosive powder itself. His iron tamping rod struck sparks, the powder ignited, and the blast – channeled and directed by the narrow walls of the drill shaft – went off in Phineas Gage’s face. The tamping iron, which had been in the hole at the time, was propelled upward like a bullet, straight at his head.
The iron tamping rod, over three feet in length and tapering from an inch-and-a-quarter diameter at one end to a quarter-inch at the other, pierced Gage’s left cheek point first, penetrated the base of his skull, passed through the front of his brain, and flew out through the top of his head, leaving a ghastly exit wound. Covered with blood and brain material, the iron landed over a hundred feet away. Gage was knocked over by the force of the blow, but astonishingly, then sat up and spoke. He was conscious and seemed in command of his faculties, despite the terrible injury he had suffered. His men helped him get to town to see a doctor.
The physician who examined him, Dr. John Harlow, confirmed that initial impression. Phineas Gage was fully coherent; he was not paralyzed and had no difficulty walking, speaking or using his hands. He had lost sight in his left eye as a result of the accident, but otherwise his senses and faculties were intact. He even spoke with Harlow perfectly calmly and rationally despite the gaping wound in his skull. In disbelief, the doctor treated him, and under his care Gage eventually survived the injury and a subsequent infection and fever – a major achievement in itself, in an age before antibiotics.
However, it soon became apparent that Gage had not survived his ordeal unchanged. Almost immediately after his fever had passed and his wounds had healed, major and surprising changes in his personality began to surface. In essence, he was no longer the man he had been before the accident. As Dr. Antonio Damasio writes:
“Yet this astonishing outcome [Gage’s survival] pales in comparison with the extraordinary turn that Gage’s personality is about to undergo. Gage’s disposition, his likes and dislikes, his dreams and aspirations are all to change. Gage’s body may be alive and well, but there is a new spirit animating it.” (Damasio 1994, p. 7)
The man Phineas Gage had been before his accident was gone. As a perplexed Dr. John Harlow wrote, he had become “fitful, irreverent, indulging at times in the grossest profanity which was not previously his custom, manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned… A child in his intellectual capacity and manifestations, he has the animal passions of a strong man.” (quoted in Damasio 1994, p. 8)
A sharper contrast with the man he had been before would be impossible to imagine – “the alterations in Gage’s personality were not subtle” (p. 11). Where once he had been polite, modest and likable, he had become crude, profane and tactless. Where once he had been responsible and goal-driven, he now became lazy and irresponsible, and would conceive all kinds of wild plans and fail to follow up on any of them. Where once he had made shrewd and wise decisions, it now seemed he was actively attempting to drive himself to ruin through repeated instances of bad judgment. So dramatic and obvious was the change that his former friends sadly said that he was “no longer Gage” (quoted in Damasio 1994, p. 8). His employers refused to give him his old job back, not because he lacked the skill, but because he no longer had the discipline or the character.
For the next several years, Gage held menial jobs working in a stable or as a stagecoach driver. However, in 1860, he began unexpectedly suffering from seizures. After this, his decline began to accelerate; he worked as a farmhand and did other odd jobs, but always moved on before long, as he “[found] something that did not suit him in every place he tried” (Macmillan 2000, p. 66). Finally, on May 21, 1861, he suffered a series of major seizures, slipped into a coma, and died without regaining consciousness.
Gage’s skull was exhumed after his death and became a museum exhibit, and more than a century later, Dr. Damasio and his colleagues decided to analyze it to determine exactly where his brain had been injured. Building a three-dimensional computer model of his skull, they ran simulations to determine the most likely path of the iron bar through it, based on the never-fully-healed entrance and exit wounds.
What they found was not surprising. The region of Gage’s brain that was damaged was a part of the frontal lobes called the ventromedial prefrontal cortex – precisely the part now believed to be critical for normal decision-making (p. 32). With this part of his brain destroyed, he was unable to plan for the future, behave himself according to social rules and customs, or decide on the most advantageous course of action. That he behaved as he subsequently did was to be expected, and was not his fault or the result of any conscious decision. Dr. Damasio writes, “It is appropriate to say… that Gage’s free will had been compromised” (p. 38).
The most important lesson that we can draw from the strange and tragic case of Phineas Gage is that the frontal regions of the brain play a major role in determining personality. Likewise, they play a crucial role in controlling behavior, allowing us to inhibit our reckless impulses and conduct ourselves as society expects. These functions can be disabled when the frontal lobes are damaged or destroyed. Nor is Phineas Gage’s case the only one like it on record. As the remainder of this section will show, there are many more examples of people with frontal lobe damage who exhibit similar symptoms: an inability to make wise decisions, to behave as law or custom expect, and to fit into society as an ordinary human being. How can a dualist hypothesis explain this? Did the blast of the iron on that morning in 1848 knock Gage’s soul out of his head? A materialist theory of mind easily explains how physical harm can alter a person’s character traits; dualist models, which hold that those traits ultimately arise from an immaterial “ghost” invulnerable to harm, cannot account for these cases nearly so readily.
As the case of Phineas Gage shows, the frontal lobes of the brain play two important roles: they are the “executive” or regulatory centers, controlling behavior and inhibiting inappropriate actions, and they are vital components of the sense of self, giving rise to the personality traits that make a person a unique individual. And exactly as the materialist position predicts, either of these functions can be altered or eliminated entirely by neurological damage.
Further evidence for this comes from a study by Dr. Bruce Miller and colleagues published in the journal Neurology, examining the effects of a brain disorder, often inherited, called Pick’s disease, or frontotemporal dementia (FTD), that normally strikes people in their mid- to late 50s. Although similar in many respects to other neurodegenerative disorders such as Alzheimer’s disease, FTD differs from them in that it primarily attacks the frontal lobes and anterior temporal lobes, areas believed by neuroscience to be intimately tied to personality and the sense of self. Early symptoms of FTD include loss of empathy; selfish, inconsiderate or tactless behavior; loss of inhibitions; and aggressiveness (Alzheimer’s Society 2000).
FTD of the left hemisphere usually causes aphasia and language deficits. By contrast, six of the seven patients in Dr. Miller’s study who had FTD of the right frontal hemisphere showed dramatic changes in personality as one of the earliest symptoms of their illness. These changes altered personality traits and preferences in areas as diverse as food and dress choice, political ideology, social behavior, sexual preference, and – most destructive of all to dualism – religion.
The first of Dr. Miller’s patients was a 54-year-old woman who had begun to experience loss of judgment and inhibition nine years earlier. Once a lover of expensive designer clothing and French cuisine, she had begun to buy cheaper brands of clothing and eat at fast-food restaurants. Her personality became “irritable, aggressive, and domineering” (Miller 2001, p. 818), and she became apathetic and quit her job.
The second patient was a 67-year-old man who had begun showing symptoms of dementia as early as 40. He had sold a business he had owned and had tried a variety of new jobs, but was repeatedly fired for irresponsible behavior; once “a critical, self-reliant individual who recognized his own failings” (p. 819), he now blamed his employers for his poor record. His previously puritanical views about sex became liberal, tolerant and experimental, and he urged his children to adopt a “libertine” lifestyle. By age 57, he had become careless, irritable and easily angered, and by 64 he was thoroughly demented.
The third patient was a 63-year-old woman who had once been a well-dressed conservative. By age 56, however, she had become withdrawn, antisocial, hostile and uncaring; in one instance, she ran a red light, hit another car and then carelessly left the scene of the accident to go shopping. By age 62, her political ideology had changed, becoming diametrically opposed to her former beliefs; she became a passionate advocate of animal rights, publicly argued with people she saw buying conservative books, began to wear T-shirts and baggy pants with pro-wildlife slogans, and made incendiary anti-conservative statements such as “Republicans should be taken off the Earth” (ibid.).
The fourth patient, a 53-year-old man, quit his job as president of a successful advertising agency at age 35, declaring his intent to write a political novel. He moved to Guatemala, but never wrote anything, instead taking up an interest in photography and wax sculpture; during this entire time, he ignored his family and made no attempt to communicate with his wife or children for months at a stretch. He eventually returned home, but by age 51 he was running red lights, cheating and lying compulsively, criticizing unsuspecting guests and family members harshly for minor matters, staring inappropriately at women other than his wife, and masturbating in public. Within two years, he had to be committed to an institution.
Another patient was very similar, a retired stockbroker who decided to become an artist. Like the last patient, he exhibited changes in language, dress and behavior, and rapidly grew disinhibited, frequently shoplifting or changing clothes in public without embarrassment. Eventually he stopped bathing and changing clothes entirely, the artistic talent he had developed faded, and he became demented.
The final patient’s case is the most interesting of all. A 70-year-old woman, she expressed “intense hatred” (ibid.) for her husband at his death, even though she had been married to him for decades. As the dementia worsened, the woman, who had been Lutheran all her life, converted to Catholicism, made donations to the church, and fell in love with a priest and claimed she was in a relationship with him. Six months after her religious conversion, she was thoroughly demented.
Materialism can explain the effects of frontotemporal dementia without difficulty. How does dualism explain it? Is the deterioration of the brain causing changes to the soul – or are personality traits a quality of the brain and not the soul? But that implies that these traits will be lost upon death. In that case, in what sense will the soul in the afterlife be the same person it was during life?
Furthermore, many varieties of theism must necessarily hold that personality traits and preferences come from the soul and not the brain, because personality traits determine the way we think and react, and it is said in most religions that thoughts and actions will be taken into consideration when our place in the afterlife is judged. Indeed, five of the classic “seven deadly sins” – pride, envy, lust, anger and greed – are purely states of mind, and all the major religions hold that mere improper thought can be a sin.
But the most interesting case of all is the last one. If FTD can cause one person to change their religion, it can cause others to do the same. And if there is one thing that, according to dualism, must come from the soul and not depend on a vulnerable material brain, it is one’s choice of religion. However, the evidence shows this is not so. FTD can be inherited; will God damn people for their genes? If a person’s choice of religion matters to God, why would he create such a disease in the first place? Or is it the work of the Devil – but since when do his powers extend to actually causing people to change their beliefs, rather than merely tempting them?
It has been noted already that the right hemisphere of the brain seems to be more emotionally volatile than the left, more prone to negative emotions such as anger, sadness and grief. It consistently tends to view the world as more hostile and unpleasant than the left hemisphere does (Sagan 1977, p. 189). However, that does not mean that the right hemisphere is on the whole detrimental or unnecessary to human emotion. These feelings are a normal part of human makeup, and there are occasions when they are appropriate and healthy. A lack of them can be just as detrimental as an excess of them, as the following case shows. If the right hemisphere is damaged and unable to function normally, the more placid left hemisphere dominates, and the person can be left either emotionally “flat” and indifferent to the cares of others, or in a state of constant low-grade euphoria, regardless of the circumstances.
Dr. Kenneth Heilman describes a case he witnessed firsthand: a young Jesuit priest who surfaced too quickly after scuba diving and, as a result, suffered decompression sickness that ultimately caused a rupture in his right carotid artery and a stroke in his right hemisphere. For the first few days after the accident, he was lethargic, but then recovered somewhat and became more alert. However, even though he knew he had had a stroke, “he appeared to be totally unconcerned about [it] and at times appeared to think that it was funny” (Heilman 2002, p. 77).
However, his syndrome had deeper repercussions. In a bid to determine the exact effects of the stroke, Dr. Heilman spent several days talking to him. The young priest was an intelligent and articulate man, and their conversations ranged over many topics, but Heilman “never saw any signs of concern, sadness, or anger even when we talked about issues that ordinarily evoke such responses” (p. 78). In particular, the young priest had a sister with leukemia, but he expressed no sadness or concern even when discussing her or her disease.
Dr. Heilman’s suspicions about what had happened to the man were confirmed when his parents showed up to visit him and came away distraught. As they told him:
“He looks like our son and has the same voice as our son, but he is not the same person we knew and loved…. He’s not the same person he was before he had this stroke. Our son was a warm, caring, and sensitive person. All that is gone. He now sounds like a robot.” (p. 78-79, emphasis added)
As previously discussed, the right hemisphere, among other things, confers the ability to invest our speech and facial expressions with emotional tone. But something far more fundamental was wrong with the young man. His parents explained:
“When we spoke about his duties as a priest and said he may not be able to perform [them], he said, ‘So what?'”
and, more tellingly:
“He has a younger sister who has leukemia. He is crazy about her, or maybe I should say, was. She was in remission when he came to the West Indies, but now she is also in the hospital with a relapse. At first, we were hesitant to tell him because we didn’t want to upset him, but I was surprised that he never asked about her, so I decided to tell him. He never asked how she was doing, and the only thing he said after we told him about her condition was, ‘Is Jim Thomas still taking care of her? What a character Jim is. Always had the best jokes. Do you want to hear one?’ I told him, no! I was in no mood for jokes. He said, ‘Shame.’ That’s not the way our son acted before he became sick.” (ibid., emphasis added)
This wrenching story illustrates how a human property as fundamental as compassion arises from the brain and can be destroyed by altering the brain. A warm, caring, intelligent young man, as the result of brain damage, underwent a drastic personality change. He became indifferent to his priestly duties and unconcerned about the potentially fatal illness of a loved one, even light-heartedly joking about it with his grief-stricken parents, who said that he was “not the same person [they] knew and loved”, not the same person he had been before his stroke.
One of the most basic ethical teachings, found in the sacred writings of many religions, is to love one’s fellow human beings as oneself – in other words, to show compassion. But this young man, through no fault of his own, seems to have lost that ability. No longer able to feel these negative emotions himself, as a consequence he apparently became unable to imagine what they felt like in others. His ability to empathize with the sufferings of others, to experience their pain as if it was his own, had been “smeared out” into a constant low-grade euphoria. How can a man be robbed of one of the most fundamental defining traits of humanity by brain damage if the doctrine of the soul is true?
Clinical depression is one of the most common mental disorders in existence. Between 10 and 15% of people will suffer from some form of depressive illness during their lifetimes – as many as 19 million people each year in America alone. But despite the commonness of the condition, it is frequently misunderstood. Depression is not the same thing as a temporary bout of sadness, nor can a depressed individual just “snap out of it” or will themselves to feel better, nor is depression a sign of personal weakness. It is a serious, but treatable, medical illness resulting from an imbalance of neurotransmitters within the brain. Although depression can be induced by environmental conditions, many cases have a genetic basis, and often the onset of depression bears no relation to the circumstances of an individual’s life. Depression produces a persistent mood of sadness, anxiety, guilt, helplessness, and hopelessness lasting for weeks or months that interferes with the ability to live a normal life, and in severe cases can lead to suicide attempts (NIMH 2002, AFSP 2002). Christians and other theists are by no means immune to this condition (see http://www.christian-depression.org).
Misunderstanding of depression among Christians is widespread – the Christian Depression website bears witness to this, mentioning, for example, a church whose website condemned depression as a failure of self-discipline. However, such claims are false, and the true explanation of clinical depression is less concordant with theism. Since many religious traditions hold suicide to be a sin, would God hold responsible a person who took their own life as a result of their depressive illness? It is difficult to believe, if there is a soul, that the failures of the body can exert such an overwhelming influence on it.
But even more jarring to dualism, depression also turns out to be one of the most curable mental illnesses – around 90% of sufferers can be effectively treated (AFSP 2002), often through the use of antidepressants which increase the amount of the neurotransmitter serotonin in the brain. (Serotonin is a chemical neurons use to communicate with each other that influences a wide range of moods and behaviors. Serotonin deficiencies have been implicated in depression, aggressive behaviors, obsessive-compulsive disorder, and other mental illnesses.) Exactly as the argument from mind-brain unity predicts, chemicals which alter brain chemistry can have fundamental and powerful effects on consciousness. How can this be squared with the theist claim that our consciousness depends on more than matter?
Even stronger support for the argument from mind-brain unity comes from psychotic conditions that sometimes cause the sufferer to attempt to harm others. Schizophrenia is the classic example: a severe, chronic mental illness which affects approximately 1% of the population (NIMH 1999). Schizophrenics suffer from bizarre and terrifying symptoms such as hallucinations, hearing voices, and a general disconnection from reality. In severe cases, these inner voices may command the individual to harm other people, a command they may not be able to resist.
It should be emphasized that schizophrenia and other mental illnesses do not necessarily cause violent behavior. The vast majority of people suffering from psychotic conditions are withdrawn rather than violent, and pose more of a danger to themselves than to others. However, there is undeniably a small subgroup of cases in which psychotic disorders are linked to aggressive and violent acts (Walsh et al. 2004). Indeed, religious themes often appear in these situations; the sufferer may believe that God has commanded them to commit these acts, or that their victim is Satan or otherwise evil. (For a sampling of cases where schizophrenia and other mental disorders have been linked to violent behavior, see http://www.schizophrenia.com/family/viol.html.) Other mental disorders can cause violent behavior as well; for instance, one individual suffering from a severe form of Capgras syndrome became convinced his father was a robot, decapitated him and cut open his skull to look for microchips (Ramachandran 1998, p. 166).
Like depression, psychotic illnesses can often be treated with medication that suppresses hallucinations, paranoid delusions and other symptoms of the disorder. Again, it is fair to ask how dualism can account for this. An imbalance in brain chemistry produces an alteration in consciousness; a chemical that corrects this imbalance undoes the alteration. At no point does the soul enter this equation. And how do theistic afterlife doctrines accommodate these facts? When judging the souls of the dead, will God condemn people who genuinely did lack the ability to distinguish right from wrong? How does he handle people who truly could not control their violent behavior – or people who sincerely believe he told them to do what they did? Is it possible to escape Hell on an insanity plea?
If, as most varieties of theism believe, the purpose of embodied life on Earth is as a testing ground where people are allowed to freely determine their eternal fate, then the existence of such conditions is an unexpected fact that does not fit well within such a framework. Theists who believe this must postulate that God placed us on Earth for the purpose of exercising our free will and then created conditions that influence certain people’s personalities and prevent them from exercising their free will – a highly contrived and ad hoc assumption. By contrast, a materialist theory of the mind can consistently account for these conditions and others like them.
The future seemed bright for Mary Jackson. Though she had grown up in a poor inner-city neighborhood, she had overcome this disadvantage to become the valedictorian of her high school graduating class, and had won a scholarship to an Ivy League university where she made the dean’s list all four semesters of her first two years. She was well on her way to realizing her goal of becoming a pediatrician and working with children in the same inner-city areas where she had spent her own childhood.
However, in the summer after her sophomore year, those close to her began to notice strange changes in her behavior. She had been raised a devout Baptist and only rarely drank alcohol in the past, but now she began to drink regularly, in alarmingly large amounts. She began going to bars, first on weekends, then on weekdays, and often ended up sleeping with the men she met there, even though she already had a boyfriend. Eventually, she began using cocaine.
The summer ended and she returned to school. Her grades the first semester of her junior year were dismal – three F’s and two D’s. Her advisor warned her that she would lose her scholarship if this continued, but she flatly refused his recommendation of counseling and became angry and verbally abusive when he suggested it. Her academic performance, as well as her health, continued to worsen during her next semester. She finally saw a doctor after catching a case of pneumonia that would not go away, and his examination revealed a dread diagnosis: Mary Jackson had become infected with HIV and now was suffering from AIDS. Her fall from grace, it seemed, was complete.
Mary admitted sleeping around, but insisted it was not for money. Crying, she said that she could not understand why she had become promiscuous; this had never happened when she was younger, but for some reason, she no longer seemed able to turn down men she met in the bars. Her physician suspected a personality disorder, but she had one other symptom that made him suspicious: she had not had a menstrual period for months. Suspecting a disorder of her pituitary gland, he referred her to the neurologist Dr. Kenneth Heilman.
Dr. Heilman found that Mary had lost her drive to achieve long-term goals, could not avoid seductive situations, and had become short-tempered and easily frustrated. When asked to repeat a simple memory test, she snapped, “Once is enough,” and admitted, “Up to about a year ago, it was extremely rare that I got angry. Now it seems I am always flying off the handle” (Heilman 2002, p. 83).
As well, she had a cluster of other strange symptoms. One of them was a disorder called environmental dependency syndrome, in which the patient’s behavior seems controlled by external cues and stimuli rather than internal decisions. Given a pen and paper, but no instructions on what to do with them, she immediately picked up the pen and began writing her name. When a comb was placed on the table in front of her, she took the comb, as if unconsciously, and began to comb her hair (p. 84).
The frontal lobes regulate and inhibit our behavior, and environmental dependency syndrome is a classic sign of frontal lobe dysfunction. Her other symptoms fit this diagnosis perfectly as well. But why had this change in behavior come on her so suddenly?
Dr. Heilman found the answer when he ordered a magnetic resonance scan of Mary’s brain. The MRI revealed that a large tumor was growing in her brain, emerging from the pituitary gland and pressing on her orbitofrontal cortices, areas of the frontal lobe so named because they are directly over the orbits of the eyes. It was this tumor that had caused the sudden and dramatic change in her personality.
Mary underwent surgery to remove the tumor and began antiviral combination therapy to control the HIV infection, and the resulting change in her personality was every bit as sudden and dramatic as the last one had been. Her drive and motivation returned, and she returned to college, got her bachelor’s degree, and enrolled in a program to get her master’s degree in social work. “Her mother thinks that she still loses her temper more rapidly than she did before the tumor developed, but in general says her daughter is ‘her old self'” (p. 85).
A case possibly even more shocking than Mary Jackson’s was presented by neurologists Russell Swerdlow and Jeffrey Burns at the 2002 annual meeting of the American Neurological Association: a man whose brain tumor turned him into a pedophile (Choi 2002).
The patient, a 40-year-old schoolteacher, had had a normal history with no previous record as a sex offender. But then, without warning and for no apparent reason, his behavior changed; he began soliciting prostitutes, secretly visiting child pornography web sites, and finally made sexual advances toward minors, behavior for which he was arrested and convicted of child molestation. The man himself knew that this behavior was not acceptable, but in his own words, the “pleasure principle” overrode his self-restraint (ibid.), and he failed to pass a court-mandated Sexaholics Anonymous course. The evening before he was to be sentenced, he checked himself into a hospital, saying he feared that he would rape his landlady and complaining of headaches. An MRI revealed that he had an egg-sized brain tumor – and just like Mary Jackson’s, it was pressing on his orbitofrontal cortex.
Brain surgeons removed the tumor, and after recovering from the operation, the man was able to successfully complete the Sexaholics Anonymous course and returned home. For some time, his behavior was completely normal. Then, around October 2001, he began complaining of headaches again, and once again began collecting pornography. A second MRI scan revealed that the tumor had begun to grow back; again it was surgically removed, and again the behavior disappeared.
In both cases, as the tumor grew, these patients’ personalities changed radically, along with corresponding alterations in their behavior. When it was removed, their personalities promptly returned to type, and normal, societally acceptable behavior resumed. This correlated variance provides strong evidence that personality and behavior are unified with the brain. The values that guide our behavior, the motivation to embark on and complete goals, the basic character traits that determine who we are and how we act – the evidence shows clearly that all of these things arise from the frontal lobes of our brains.
Paralysis is the inability to move, but there exists a more unusual condition called akinesia, the unwillingness to move. In this condition, there is no physical reason why the person cannot perform tasks; instead, what is missing is the ability to initiate movement. Unless strongly encouraged by others, and sometimes not even then, sufferers of akinesia will sit passively by and do absolutely nothing, except to attend to the most immediate short-term needs. Unlike paralysis, which is a physical defect, akinesia results from a defect in personality, in motivation. Based on some of the conditions this essay has already covered, one might suspect that akinesia is another disorder related to the frontal lobes, and that suspicion turns out to be absolutely correct.
Dr. Kenneth Heilman gives us a particularly dramatic example of what akinesia is like: the case of Thomas Taylor, a 58-year-old Baptist minister. Before his illness, he had been a hard-working man, meticulous, independent, and active in his church and his community, and so dedicated that he refused his parishioners’ offers to pay him a salary and continued to support himself by working outside of church (Heilman 2002, p. 206).
But this was not to last. The first symptom that manifested was that he began to show up late for his appointments. However, as his akinesia became more pronounced, he stopped keeping his appointments altogether, then stopped leaving the house. All he would do every day, after his family got him out of bed, was to sit on the couch; at first he turned on the TV, but eventually even this stopped. He stopped bathing, shaving, or changing clothes on his own, and then stopped speaking on his own. He spoke only to answer direct questions, and even then, usually spoke only in one-word answers. By two or three years after the onset of his condition, his akinesia was so severe that he would not even get up to go to the bathroom, but instead urinated in his pants. Though he had formerly written and delivered a new sermon each week, the last month he preached at his church, he gave the same sermon three weeks in a row.
This was the account his wife gave of his condition, and when Dr. Heilman asked Mr. Taylor himself, he confirmed this story.
“When I asked why he gave the same sermon repeatedly, he replied, ‘If they are dumb enough to stay and listen to the same sermon, they deserve what they get.’ On hearing these words, a few tears came to his wife’s eyes. ‘Dr. Heilman, you cannot believe how much this man has changed. Three or four years ago, I could never imagine him saying anything like that.’” (p. 207, emphasis added)
Dr. Heilman’s examination soon unearthed the cause of the minister’s condition: a benign tumor pressing on both his left and right frontal lobes. Routine surgery removed it, and as with the case of Mary Jackson, his recovery was rapid and striking.
“I saw Mr. Taylor once at a follow-up visit, and he showed dramatic improvement. He was not preaching but was teaching at Sunday school, caring for himself, and making plans to start work again.” (p. 207)
As the previous case shows, the frontal lobes of the brain play an important role in initiating behavior, and this function can be disabled if they are damaged. However, the evidence shows that the frontal lobes play an equally important part in inhibiting behavior, and this function can also be disabled by frontal lobe damage. Recall the case of Phineas Gage, whose frontal lobe injury left him unable to suppress the vulgar impulses and crude behaviors that we all must refrain from if we are to fit into society. Likewise, recall the case of Mary Jackson; the tumor pressing on her frontal lobes impaired her ability to inhibit dangerous or unwise behavior, which resulted in her using drugs, going to bars, and frequently sleeping with the men she met there, since she had largely lost the ability to turn them down.
Furthermore, an examination revealed that Mary Jackson had a classic symptom of frontal lobe dysfunction called environmental dependency syndrome, in which the patient’s behavior seems to be controlled by external cues rather than voluntary internal decisions. Given a pen and paper, but no instructions on what to do with them, she immediately took the pen and began to write her name; given a comb, without prompting, she took it and began to comb her hair. Thomas Taylor, the Baptist minister discussed above, exhibited a similar symptom: when he was given a pen and paper, he immediately took them and began to write without being asked to do so (Heilman 2002, p. 211). Francois Lhermitte described a nurse with frontal lobe dysfunction who, when given a syringe, attempted to give the doctor examining her an injection (ibid.). James Austin (Austin 1998, p. 255) describes what may or may not be the same case: a patient who, at the mere sight of the usual medical instruments in her neurologist’s office, not only could not resist picking them up but actually used them to perform a physical examination on the very surprised neurologist.
The existence of environmental dependency syndrome poses difficulties for those dualists who would argue that the soul is the source of free-willed behavior. Still, the people described above retained some degree of voluntary control over their actions. In more dramatic examples of frontal lobe dysfunction, even that vestige of control is lost: sufferers are unable to restrain their actions even when asked to do so.
While traveling in Malaysia in 1884, the famous neurologist Georges Gilles de la Tourette (after whom Tourette syndrome is named) was given the opportunity to study several sufferers of a condition he named latah. Whether through disease or injury, the inhibitory brain functions of sufferers of this condition are entirely disabled, and as a result, the victims are compelled to obey any command they hear and, sometimes, to imitate any action they see. Tourette called these people “jumpers”, and wrote:
“Two jumpers who were standing near each other were told to strike… and they struck each other, each very forcibly. When the commands are issued in a quick, loud voice the jumper repeats the order. When told to strike, he strikes, when told to throw it, he throws it, whatever he has in his hands.” (quoted in Newberg and D’Aquili 2001, p. 93)
Tourette interviewed another woman with this condition, and spoke to her for ten minutes without noticing anything abnormal about her. Then, the man who had introduced him to her took off his coat.
“To my horror, my venerable guest sprang to her feet and tore off her kabayah. My entreaties came too late to prevent her continuing the same course with the rest of her garments.” (ibid.)
Were these people crazy? Tourette did not think so. In his observations, he found no evidence of psychosis in any of them, no sign of a break with reality. In fact, he said, each victim was “perfectly conscious of the mental abasement which he is exhibiting, and resents his degradation” (ibid.). Though they wanted to control their behavior, they were not able to do so. As another neurologist has observed in similar cases of environmental dependency, “[t]he behavior of these patients appears to be entirely controlled by the external world” (Heilman 2002, p. 211).
Again, the dilemma for dualists is obvious. How can God hold us responsible for our behavior if our behavior can be removed from our conscious control by damage to the brain? And does the existence of these conditions not imply that conscious behavior is controlled by the same brain regions in normal people as well?
Aphasia, an impairment of spoken language, can be brought on by damage to the language centers of the brain. This essay has already discussed two of the most common types: Broca’s aphasia, loss of the ability to speak, and Wernicke’s aphasia, loss of the ability to understand. However, there are rarer and more specific varieties of both of these. As it turns out, standard Broca’s aphasia actually encompasses two distinct capabilities – spontaneous speech, the ability to carry on a conversation, and automatic speech, the ability to recite from memory, such as when singing. More specific types of brain damage can disable one of these abilities without affecting the other.
For example, Dr. Kenneth Heilman tells us of a patient of his, an Orthodox Jew who had lived in France for most of his life but had immigrated to Israel shortly before suffering a stroke. Several times each day, devout Orthodox Jews chant in Hebrew the monotheistic prayer from Deuteronomy 6:4: “Hear, O Israel, the Lord our God is One.” But upon waking after his stroke, the patient found to his shock that he could not chant this prayer, which he had recited every day for over sixty years (Heilman 2002, p. 14). It also turned out that he could no longer sing the French national anthem. In every other respect, however, his ability to converse was unaffected. Incredible as it may seem, the damage to his brain had disabled only this single specific ability.
Another example concerns a deacon from a Vermont Christian church who suffered a similar stroke, except that this one disabled his conversational speech while leaving intact his automatic speech. Before his stroke, his wife had never heard him utter a curse; afterwards, all he could say was curse words and the Lord’s Prayer (p. 13).
It may seem incredible that such specific abilities can be compromised without affecting others. The average person might well ask, “What’s the problem? Why couldn’t they just talk?”, and indeed, if the doctrine of the soul were true – if we all had a supernatural “ghost” in our heads, unaffected by physical brain damage, that directs our actions – this would be a valid question. However, the neurological evidence has demonstrated time and again that our consciousness and its attendant abilities are unified with the brain, and can be disabled by damage to it. In summary, “the mind is the product of the brain’s activities, and the brain’s activities depend on its organization” (p. vii).
One must ask whether these people’s disabilities will affect their eternal fate. Would a Christian, Jew or Muslim who lost the capacity for automatic speech be held accountable by God for failing, through no fault of their own, to say the prayers he demands? What about a deeply religious individual who loses the ability to speak except in profanities?
Aphasia is relevant to religion in another way as well. While it is sometimes highly specific, as the above examples show, in cases of widespread brain damage it can also be very general, producing a total destruction of the ability to communicate. Such a condition is called global aphasia, and Dr. Heilman gives us an example: a 34-year-old woman named Cathy Henson who was admitted to the hospital one day after suddenly developing right-side paralysis, accompanied by a total loss of the ability to speak, write and understand others’ speech or writing. An MRI confirmed that she had suffered a massive stroke that had caused damage throughout her left hemisphere, including the entire language cortex of her brain (pp. 61-62). Though she recognized her family when they came to visit, she was completely unable to communicate with them.
The existence of such a condition raises difficulties for members of evangelistic religions. How is it possible to convert a person one cannot even communicate with? Will she be held responsible for her belief if she gets to the afterlife and finds out she was wrong, when she never had a chance to learn or be told differently during her earthly life?
The last syndrome that this section will discuss is called akinetic mutism, which can best be described as a state of “suspended animation, mental and external” (Damasio 1994, p. 71) or as a “vigilant coma” (Ramachandran 1998, p. 252). This condition often results from damage to a region of the brain called the anterior cingulate cortex, which seems to be where systems of attention, emotion and short-term memory come together. Patients with akinetic mutism, although they are awake, alert and conscious, simply do nothing. Their eyes track moving objects, but they lie in bed without moving or speaking (hence the name of the condition) and they are unresponsive to painful or other stimuli (p. 253). Akinetic mutism resembles the akinesia described earlier, but in a far more extreme form.
Dr. Antonio Damasio gives a case study of a patient, named Mrs. T, with this condition. As he writes:
“She suddenly became motionless and speechless, and she would lie in bed with her eyes open but with a blank facial expression; I have often used the term ‘neutral’ to convey the equanimity – or absence – of such an expression…. Her body was no more animated than her face. She might make a normal movement with arm and hand, to pull her bed covers for instance, but in general, her limbs were in repose. When asked about her situation, she usually would remain silent, although after much coaxing she might say her name, or the names of her husband and children, or the name of the town where she lived. But she would not tell you about her medical history, past or present, and she could not describe the events leading to her admission to the hospital. There was no way of knowing, then, whether she had no recollection of those events or whether she had a recollection but was unwilling or unable to talk about it. She never became upset with my insistent questioning, never showed a flicker of worry about herself or anything else.” (pp. 71-72)
Months later, Mrs. T recovered from this waking coma, and the strangest part of all was her recollections of her illness. Most strikingly, she was certain that she had not been paralyzed, nor had she been in any pain or anguish. “Nothing had forced her not to speak her mind. Rather, as she recalled, ‘I really had nothing to say'” (p. 72). For the duration of her illness, there had been no thoughts in her mind, no reasoning, no decision-making, no desire to communicate or do anything else. Though she had been fully aware, it was as if her will to act had been turned off.
In light of the above evidence, advocates of dualism must now explain how the doctrine of the soul is sustainable. With these case studies, I have strived to show that the three basic aspects of consciousness – identity, personality, and behavior – are in all respects unified with the brain, and can be altered or disabled by damage to the brain. Brain damage can fragment the fragile boundaries of the self, splitting a single individual into non-overlapping spheres of consciousness that perceive and desire completely different things, or shattering the continuous thread of awareness into a multitude of fleeting selves cut off from themselves and from external reality. Changes to the physical structure of the brain can exert dramatic effects on personality, turning a friendly, hard-working, dedicated individual into a vulgar, abusive, lazy and reckless scoundrel. Conditions that affect the chemistry of the brain can entirely control behavior, robbing an individual of the ability to act or denying them the ability to stop themselves from doing so.
These facts, to some extent, are common knowledge already. It has been universally acknowledged for some time, even among theists, that diseases such as Alzheimer’s can have a profound effect on an individual’s consciousness, or that psychological conditions such as severe depression can be cured by drugs that manipulate the chemistry of the brain. In addition, I have attempted to highlight some particularly dramatic examples of these phenomena, examples that demonstrate fundamental alterations of the self. Such occurrences seem to have been accepted by theism without comment.
However, these cases constitute strong evidence against most varieties of brain-soul dualism. After all, most theists hold that the total destruction of the brain upon the death of the body will have no effect on the soul: how then can the destruction or alteration of small parts of the brain during life have such a dramatic and profound effect on it? Once we acknowledge that the brain mediates and controls all the aspects of consciousness to an overriding degree, what then do we even need to postulate a soul for?
Even the more sophisticated versions of the soul doctrine are vulnerable to the argument from mind-brain unity. For example, a theist might argue that there is no soul as it is usually understood, no immaterial “ghost in the machine” living inside our heads and directing our actions, but that after a person’s death God perfectly reconstitutes their neural patterns in a new body. Of course, such a position is in full agreement with this essay’s conclusion that consciousness is a fundamentally physical phenomenon; but even beyond that, the objection can be raised that it is by no means axiomatic that such a process would preserve continuity of consciousness. To put it another way, if God does this, is it really “you” who survives? Or has God simply allowed you to pass into oblivion and then created a duplicate of you whose fate is based on your actions?
Furthermore, the argument from mind-brain unity strikes at such a belief system in a different way. Consider the case of an individual with a brain disorder that alters their personality, to such an extent that their friends and relatives believe them to be in essence a different person (as in the case of Phineas Gage, for example). Which will be resurrected – the person as they were before the disorder, or the person as they were afterwards? Which is that person’s “real” self? If the person’s religious or ethical beliefs changed in the course of that disorder, for which set of beliefs will they be judged? It seems absurd to suggest that those differing selves would be resurrected and judged independently, as if they were separate people; but on the other hand, for God to combine those two essentially different selves in the course of resurrection would be to create a new, composite individual that did not previously exist, rather than to recreate a previously existing one. These difficulties seem insuperable for traditional theistic dualism, and again the argument from mind-brain unity leads inevitably to the conclusion that the self is unified with the state of the brain at any given moment, and cannot be conceived of as something with any independent existence.
And even beyond this, the argument from mind-brain unity has another card to play. There is evidence suggesting that religion itself is explicable as the result of the workings of the brain. Neuroscientists studying the biological roots of religious experience have made some discoveries that may be startling to theists, but that are perfectly in accord with what atheists have been saying all along.
- The Neurobiology of Religious Experience
- Temporal Lobe Transients
- Dr. Michael Persinger’s “God Helmet”
Seeking to understand the neurological basis for religious experience, researchers Andrew Newberg and Eugene D’Aquili performed an experiment. Finding a group of eight volunteers who were Zen Buddhists, they asked them to meditate in the peace and silence of a darkened room. These Buddhists had claimed that, through meditation, they could reach a state called satori in which they experienced a sense of transcendent bliss along with a feeling of timelessness and infinity, as if they were a deeply interwoven part of all of reality. Newberg and D’Aquili wanted to find out what was going on in their minds when this happened.
When the volunteers reached the apex of their meditative state, they tugged on a string, which was Newberg and D’Aquili’s cue to inject a radioactive tracer into their blood through an IV line. This tracer traveled to their brains and became bound to the neurons that were most active, creating a snapshot of brain activity at that particular moment that could later be imaged through a technique called SPECT (short for single photon emission computed tomography). When the imaging was performed, it showed, unsurprisingly, that brain regions responsible for concentration were highly active. However, there was one other consistent result that stood out. In all eight subjects, a particular region of the brain, the superior parietal lobe, showed a sharp reduction in activity.
The role of this brain region was already known. As discussed in Part 1 of this essay, the superior parietal lobe is the brain’s “where” system. Its job is to orient a person in three-dimensional space and help them move through the world; as part of this task, it must draw a clear distinction between “self” and “not-self”. For this reason, Newberg and D’Aquili call it the “orientation association area”, or OAA for short. In all eight volunteers, the OAA had been inhibited by their deep meditative state, deprived of the sensory information it needs to build a coherent picture of the world.
What would be the result of this? Without the OAA, the brain is unable to perceive the physical limits of the self – unable to tell where the body ends and the world begins. One of the meditators who took part in the study described the experience as feeling “like a loss of boundary” (Holmes 2001, p. 26). And “[i]n that case, the brain would have no choice but to perceive that the self is endless and intimately interwoven with everyone and everything the mind senses. And this perception would feel utterly and unquestionably real” (Newberg and D’Aquili 2001, p. 6).
Intrigued by the possibility of a biological basis for religious experience, Newberg and D’Aquili broadened their study to include Franciscan nuns who claimed they felt a sense of closeness with God while deep in prayer. The experiment was repeated, and the results were the same: both the Franciscans and the Buddhists experienced similar drops in activity in the OAA, producing a sense of infinite self which both groups then interpreted through the milieu of their own religious beliefs.
This sense of unity with the infinite is not the only characteristic feature of religious experience. Such experiences are typically accompanied by another sensation: a feeling of ecstasy and awe, as though everything has been imbued with cosmic significance and deep intrinsic meaning. In the past, such sensations were put down to the effects of communion with the divine, but science has been homing in on their neurological basis as well. Unsurprisingly, these sensations too can be fully and parsimoniously explained without reference to a deity.
Within the temporal lobes of the brain is an evolutionarily ancient region called the limbic system. The main function of this system is to produce and control emotions. In particular, one important task which the limbic system performs is to “tag” sensory input with emotional significance, enabling us to determine the meaning that a person or object holds for us. When this function is disabled by brain damage, the result is Capgras’ syndrome, described in Part 2 of this essay.
As with many brain structures, we know the function of the limbic system mostly by observing what happens to people in whom it is defective. Specifically, we have observed the symptoms of temporal lobe epilepsy, a condition characterized by erratic storms of random neural firing that occur in this part of the brain. When such seizures occur in areas dedicated to motor control, the result is the best-known symptom of epilepsy: the physical fits and powerful involuntary muscular contractions typical of so-called grand mal seizures. But when seizures occur predominantly in the temporal lobes and thus the limbic system, the predominant effects are not physical, but emotional. Patients say that their “feelings are on fire” (Ramachandran 1998, p. 179), fluctuating wildly from soaring heights of ecstasy to paralyzing depths of terror and fury.
In addition, there is another symptom frequently associated with temporal lobe epilepsy. Sufferers of the condition are often hyperreligious: they claim to have profound spiritual and mystical experiences during their seizures; they are obsessively preoccupied with theological issues, churning out meticulously detailed, elaborate and usually incomprehensible text explaining their beliefs (a condition called hypergraphia); they see cosmic significance in trivial everyday events; and they may believe they were visited by God or in God’s presence, or that they have been “chosen” (Ramachandran 1998, p. 179-180). Distortions of time and space, including out-of-body experiences, often also occur (Persinger 1987, p. 123; Newberg and D’Aquili 2001, p. 110). The novelist Fyodor Dostoevsky, who is believed to have been a temporal lobe epileptic, wrote that he “touched God” during his seizures (Holmes 2001, p. 27).
To most people with normal mental functioning, it is obvious that the hyperreligious behavior of some temporal lobe epileptics is merely one symptom of a treatable disorder, not a sign of special favor from God. However, the conviction produced in those who experience these events is unshakable. And besides, if one believes that God exists and may occasionally speak to human beings, then on what grounds can we be certain that these people are not actually communicating with him? The nebulous and unfalsifiable world of theistic belief offers no way to exclude this possibility. As Dr. Ramachandran astutely puts it:
“Who is to say whether such experiences are ‘genuine’ (whatever that might mean) or ‘pathological’? Would you, the physician, really want to medicate such a patient and deny visitation rights to the Almighty?” (p. 179)
Still, a theist might ask what relevance this phenomenon holds for the rest of us, since most people, including most theists, are not temporal lobe epileptics. But then, most religious people also do not have spiritual experiences as intense as those of epileptics – and we all do have temporal lobes. Could transient and sporadic patterns of firing within them, too weak to rise to the level of a seizure, produce the lower-key, less elaborate religious beliefs and experiences that most people take for granted?
Such is exactly the hypothesis of neuroscientist Dr. Michael Persinger, who has dubbed these patterns of activation temporal lobe transients, or TLTs for short (Persinger 1987, p. 111). Under this theory, TLTs are short-lived electrical instabilities – microseizures – that occur within the temporal lobes of normal individuals and are triggered by a variety of conditions including physical and mental stress (such as grief, fatigue, anxiety, hypoglycemia, or hypoxia), ritualized behavior, loud rhythmic sound patterns such as singing, clapping or chanting, or the ingestion of certain chemicals. TLTs produce feelings of meaningfulness, conviction and anxiety reduction, and are complemented by the conditioned patterns of learning and reinforcement called organized religion.
Though TLTs have not yet been directly measured due to their unpredictable nature, there is good circumstantial evidence in favor of their existence. Dr. Persinger notes that the tissues of the temporal lobes display more electrical instability than any other part of the brain (p. 15). In addition, the existence of full-blown temporal lobe seizures in epileptics gives us good reason to suspect that milder versions occur in normal individuals as well. “There is nothing unusual about studying the exception in order to find the rule” in neurology (p. 17), and if TLTs behave like other phenomena, in the population at large they will be distributed along a statistical continuum. Most of us would have small ones perhaps once or twice a year; a smaller number of people would have them more frequently. And at the highest and rarest end of the scale would be those who have frequent and intense bursts of temporal activity – the temporal lobe epileptics.
We can make other predictions from this hypothesis. The temporal lobes contain projections to all the sensory areas – vision, hearing, taste and smell, even the vestibular regions (the sense of balance). The most intense TLTs could potentially spread into these regions, producing vivid sensory hallucinations – the affected individuals might see bright, shining forms and landscapes, hear voices, experience a sense of floating or flying, or experience all of these at once, depending on where in the temporal lobes the electrical instability occurs and how far it spreads. These symptoms often occur in temporal lobe epilepsy. Milder TLTs, such as the kind that occur in most people, would not produce these experiences, but would be more subtle and abstract. Depending on their extent, some might be “mild cosmic highs, the kind we feel in the early morning hours when a hidden truth becomes sudden knowledge. Other more intense transients would evoke the peak experiences of life and determine it thereafter. They would involve religious conversions, rededications, and personal communions with God” (Persinger 1987, p. 16). Like all TLTs, they would be followed by marked reductions in anxiety and positive expectations for the future. In any case, there is no fundamental difference between the seizures of temporal lobe epileptics and the temporal lobe transients experienced by ordinary people – the difference is a matter of degree, not of kind.
Empirical support for this hypothesis comes in the form of experiments conducted by Dr. Persinger himself. We cannot predict natural temporal lobe transients, so it is difficult to precisely measure their effects – but what if we could produce artificial ones on demand?
This is exactly what Dr. Persinger has done, by constructing what some have dubbed the “God helmet”. It is an ordinary motorcycle helmet fitted with solenoids that, when the helmet is worn, produce a complex magnetic field designed to interact with and stimulate the temporal lobes of the brain. Four out of five people who undergo this experience report sensing a “presence” in the room with them, one which religious individuals frequently identify as that of God (Holmes 2001, p. 28).
Of course, a theist might argue that all we have found is a way to tap into the same channels that God normally uses to communicate with us. This is a religious hypothesis, outside the realm of science, and strictly speaking cannot be disproven. However, there are several considerations that weigh against it. First of all, it is unparsimonious, containing extra assumptions that do not increase its explanatory power. We know for a fact that religious sensations can be produced by stimulating certain areas of the brain; we do not know for a fact that such sensations are actually caused by a deity stimulating those areas. The atheist’s explanation – that such sensations arise from ordinary neurological activity and nothing more – suffices, so why not just stop there? To insist on complicating this perfectly sufficient explanation with extra assumptions is a step that cannot be justified by the evidence, but can only be supported by preconceived faith commitments. Consider a believer in UFOs arguing that yes, all the photos we have of alleged extraterrestrial spacecraft are fakes, but aliens do exist – they manufactured the fakes themselves to inspire us to keep searching for them. Phrased this way, the absurdity is obvious, but some theists would make an equivalent argument for God.
Secondly, the idea of God communicating to humans by activating specific pathways in the brain seems theologically problematic. Certainly the fact that these mystical sensations can be artificially reproduced should be troubling to the believer. Why would God make it possible for himself to be counterfeited? For God to communicate with us through a specific pathway of the brain leaves open the possibility that other causes can hijack this pathway and delude people with false visions that they genuinely believe to originate with God. (This, of course, is exactly what happens in temporal lobe epilepsy – again, unless one chooses to believe that God genuinely is speaking to these people.) It cannot be considered fair for God to create our brains in such a way as to leave people vulnerable to false revelations indistinguishable from the genuine article and then condemn them for being unable to tell the difference.
Thirdly, what if this “God-communication” pathway is damaged? Would such people no longer be able to hear God’s voice at all? And if so, would it be fair for God to condemn them if they ceased to follow his commands simply because they could no longer perceive them? Dr. Ramachandran speculates on just such a topic:
“What would happen to the patient’s personality – especially his spiritual leanings – if we removed a chunk of his temporal lobe? …. Would he suddenly stop having mystical experiences and become an atheist or an agnostic? Would we have performed a ‘Godectomy’?” (Ramachandran 1998, p. 187)
Indeed, such a situation can happen naturally. Alzheimer’s disease, for example, tends to attack and damage the limbic system early on – and therefore it can be no coincidence that loss of religious interest is a frequent symptom of its onset (Holmes 2001, p. 27). Why would God create and then inflict on people a disease that robs them of the ability to hear and respond to him? Would such an individual be punished for nonbelief upon their death?
A theist might argue that such an individual’s situation is not nearly so dire. An omnipotent god would undoubtedly retain the ability to communicate with them and make himself heard if he so desires, even if that person’s temporal lobes are damaged. This is true, but brings us right back to the original question: Why create a “God-communication” module in the human brain in the first place? The atheist’s explanation remains the most plausible: that this brain module is an evolutionary legacy, a part of our brains that first evolved either for some unknown adaptive purpose or as a spandrel, and that persists today and produces the sensations that our culture conditions people to interpret as the presence of a deity. In short, the evidence suggests that God is all in our minds.
- The Problem of Brain-Soul Interaction
- The Problem of Soul Immutability
- The Problem of Body Dominance
The evidence of neuroscience renders the soul at best unnecessary to explain mental processes, and at worst starkly incompatible with the observed fact of the mutability of the self. However, there are several additional arguments that add to the weight of evidence against this theistic doctrine.
This argument is an instance of a broader complaint against theism: that so many of its crucial terms have been left undefined. To say something is “spiritual” or “immaterial” does not explain what it is, but only what it is not; all it means is that an immaterial object neither feels nor exerts any of the forces that act on matter. The soul, presumably, is not affected by magnets or electric charge, is not attracted by gravitational forces, and is not held together by the strong or weak nuclear forces. Atoms and other particles pass through it without being affected or affecting it in turn, if indeed it even occupies a location in space. It is not connected to the brain or influenced by the electrochemical processes that occur there. How then can it possibly receive sensory information from the brain, or affect it in return? What exactly does it do that confers consciousness on us, and how is this influence transmitted to the brain?
To answer these questions with “a miracle” is patently unsatisfactory. If we invoke miracles, we know no more than when we started; we have not answered the question, merely moved it beyond the realm of the answerable. Miracles, by definition, are those things which cannot be tested, explained or described any further – if it were otherwise, they would not be miracles, but ordinary events suitable for scientific study, which would bring us right back to the original question of how the soul interacts with the brain. In essence, miracle claims are a smokescreen to protect unsatisfactory assertions from further questioning.
There are two alternatives to this view, which is called substance dualism. One is to postulate that the soul exists, but that it too is made of matter – the position of the ancient Greek atomist philosophers. However, this implies that the soul can be destroyed, and that consciousness will end with the physical dissolution of the body upon death. This view is unacceptable to most modern theists. Alternatively, one might speculate, as some philosophers have, that an immaterial soul exists but that it does not and cannot exert any causal force on the body. This position is called epiphenomenalism. In this view, the soul is like a shadow following at a person’s heels, or the cloud of steam produced by a locomotive’s whistle – trailing along with the body, but separate from it.
Epiphenomenalism would also seem to be an unacceptable alternative to most theists, because it necessarily denies free will. According to an epiphenomenalist, if I perceive hunger and go to the kitchen to get a snack, I may believe I went to the kitchen because I was hungry, but I would be wrong. My body became hungry and decided to get a snack of its own accord, and my causally impotent mind mistakenly believes it initiated the action. In essence, under epiphenomenalism consciousness itself is one lifelong post hoc ergo propter hoc fallacy. As absurd as it seems, this is what epiphenomenalism necessarily implies.
But if neither of these alternatives suffices, we are left with the conundrum of how an immaterial soul can possibly alter the state of the body. Since no adequate resolution to this problem exists, I propose that strict materialism is the only possibility remaining that adequately accounts for the facts. Under this view, we need no external immaterial object affecting the brain through some mysterious and undefined mechanism; instead, the brain is a self-adjusting system causally potent upon its own operation, a web of feedback loops that has reached a critical point of complexity where it can perceive its own workings. This proposal deals with all the available evidence in a parsimonious way and provides a genuine explanation for mental phenomena.
A second argument against the traditional substance-dualist conception of the soul is as follows. How can it be maintained that a person has only one immutable soul when people are constantly changing throughout their lives? To put it another way, in what sense is an old man upon his deathbed the same individual as he was at the time of his childhood? A person’s interests, desires, beliefs, worldviews and values frequently, if not inevitably, change over the course of their lifetime. Very few people, if any, could confidently state that they are exactly the same person as the one they were ten years ago, or the one they will be ten years in the future. When a person dies as an old man, would God hold him responsible for a pack of gum he stole when he was a child eight or nine years of age, even if that man has learned so much and his values have matured to an extent that he would no longer dream of doing such a thing again – and even if the parts of his soul that caused him to commit that act have long since passed out of existence?
As Heraclitus is reported to have said, one cannot step twice into the same river. When a person changes to such an extent that their past self is like a stranger to them, is it really fair to hold them responsible for the actions of that self? Or do we have multiple souls over the course of a lifetime, each of which will be judged independently? But people hardly ever change in Damascus Road-like flashes, instantaneously jumping from one self to another; instead, changes in one’s outlook and consciousness almost always grade slowly into each other, and cannot be localized to a single point in time. We would need an infinite number of discrete souls to accommodate such a thing, and this is absurd.
The final consideration relates back to the argument from mind-brain unity presented in Part 2 of this essay. The fact that brain injury can alter the self strongly implies that the brain is the true seat of the self. Some theists dispute this, arguing that the soul’s true nature is immutable, but that it interfaces with the body only through the brain, and that brain damage can distort this interface and cause a person to act in ways not in keeping with the true nature of their soul.
But such an argument only raises further questions. For example, why would God create an immutable soul-nature and then make it subject to the changeable and imperfect nature of a fallible material body, and judge it for the actions committed by that body? Why do we even need such bodies, if at best they can only allow the true nature of the soul to shine through unaltered, and at worst obscure it? Are we to believe that, for example, in a person with Capgras syndrome, their soul recognizes their parents and friends and wants to respond with love and affection, but is prevented from doing so by the flawed brain which instead instructs their body to angrily denounce them as impostors? This raises the question of in what sense the soul can be said to control the body at all. Even in people without neurological disorders, the desires of a flawed material body can compel the soul to commit sins: greed, gluttony, lust, anger. Under materialism such conditions make perfect sense – we are our bodies – but no theist yet has explained God’s rationale for imprisoning our souls in bodies and holding them responsible for the uncontrollable irrationalities of those bodies. As the Christian Gospel of Matthew says (26:41), “The spirit indeed is willing, but the flesh is weak.” Exactly.
- Qualia
- Free Will
- The God of the Gaps
Despite all we have learned through science about how the brain works, there are a number of fundamental questions about the mind whose answers still elude us. One concerns the subjective qualities of sensory experience, what some philosophers call qualia. Another is the puzzle of free will – are we truly responsible for our own actions, or are we controlled by forces beyond ourselves? The third is the nature of consciousness itself – we know that we know things, but who is the knower? In this section of the essay, I will survey each of these questions, arguing that although significant questions remain in each case, all of these phenomena can be adequately accounted for by a materialist view of the mind, and none give us any good reason to believe that there is an immaterial soul separate from the functioning of the brain that guides our actions.
One of the most basic truths about human beings is the richness of our experience. We are not robots unconsciously responding to external stimuli; rather, we inhabit a vivid internal world of sensory perception. These inner, subjective dimensions of experience are called qualia, the perceptions of what something “feels like” (Feinberg 2001, p. 145). The brilliant colors of a rainbow, the sting of a cold wind on one’s face or the soothing feeling of a hot bath, the roughness of sandpaper or the smoothness of silk, the taste of peppermint or chocolate, the glissando of an orchestra or the painful screech of fingernails on a blackboard, the icy chill of fear or the warm glow of happiness – all of these things are qualia. In each case, it is not the mere description of the experience, but the inner, subjective “feels like” of the experience itself that is the quale. The essence of qualia is impossible to convey in words; one cannot explain what middle C sounds like to a deaf man, nor describe the color red to one blind since birth.
The existence of qualia has been used by some dualist philosophers to argue that a strictly materialist account of the mind cannot be correct. The best-known defense of this position is probably Jackson 1986, who proposed a now-classic thought experiment that gave rise to what has been dubbed the knowledge argument for dualism.
In this thought experiment, Jackson proposes that a person (arbitrarily assumed to be female, and named Mary) has been born and raised in a home where everything is black and white. The walls are painted black and white, she has black and white books to read, and she has black and white television to learn about the external world. In all her life, she has never ventured outside this house and so has never seen color. However, she hears of the concept of color and is intrigued, and makes it her mission to understand what color is.
Mary takes up the study of physics, chemistry and neuroscience, learning everything that is known about the biological basis of color vision. After exhaustive study, she learns every relevant fact about how human beings see color, perhaps producing a comprehensive flowchart showing exactly how the sensation of color is produced in the brain – from the time light impinges on the retina until the time that the sensation of color is consciously perceived, detailing every individual neuron firing, every neurotransmitter release and every electrochemical reaction in the pathway. Under a materialist conception of the mind, Mary now knows everything that there is to be known about color.
Imagine, then, that after completing her studies Mary steps outside her black and white house for the first time in her life and sees a red rose. She picks it and stares at it in wonder. She realizes that she now understands what color is in a way that she did not before – and that, despite the comprehensiveness of her diagram, it was somehow incomplete. There is more to the sensation of “red” than a map of neurons firing can explain. There is an inner, subjective dimension to the experience – the quale of redness – that no external examination of the brain can ever capture.
This is the crux of the knowledge argument. If our imaginary neuroscientist knew all the physical facts relating to the brain’s perception of color, yet was still able to gain new knowledge when she saw color herself for the first time, then it follows that there must be facts about the mind that are not physical facts – in other words, some version of dualism must be true.
How can a materialist reply to the knowledge argument? The easiest way would simply be to deny the plausibility of Jackson’s scenario, and to claim that any truly complete understanding of the physical functioning of the brain would necessarily include all aspects of sensory perception, including qualia. In other words, this position holds that a person who had never seen the color red in their life, after completing a study of how the brain perceives color, would necessarily be able to imagine red. Incredible though this seems, it does not entail any logical contradiction.
However, I do not believe that this position is correct. There is a more plausible explanation for why the knowledge argument does not suffice to overturn materialism. To see this, we need a slightly modified version of Jackson’s original thought experiment. Imagine a similar scenario, except that our hypothetical researcher this time makes it her objective to study the game of tennis, instead of the neurophysiology of color perception. Imagine that this researcher not only memorizes the rules of tennis, but learns everything about the physics of how the game is played, down to the correct speed and orientation of the tennis racket in order to return a serve arriving with a given angle and velocity. Finally, imagine that upon the completion of her studies, this researcher is sent out onto the court to play her first ever game of actual tennis, against an experienced tennis player – and loses handily, as we might expect. Further imagine that with practice, this researcher’s tennis skills gradually improve, despite the fact that her factual knowledge of how to play the game does not increase. Would it therefore follow that the sport of tennis is not reducible to the rules of the game and the physics of the match? Would we be forced to conclude that there is some mysterious, non-physical “essence of tennis” that can never be derived from mere study of the physical rules of the game?
Such a conclusion would plainly be absurd. The correct resolution to this apparent paradox is to realize that there is more than one kind of knowledge. There is propositional knowledge – knowledge of facts – but there is also procedural knowledge, the type of experiential skill that comes only through practice. The two are not equivalent, as the above example shows: mere propositional knowledge of how to play tennis does not equate to procedural knowledge that confers skill at tennis. Similarly, when Mary sees red for the first time, she gains no new propositional knowledge; she learns no new fact that she did not already know. What she does gain is a new ability: the ability to imagine the quale of redness. Qualia in general are therefore a type of procedural knowledge, and by the nature of how the brain works, this type of knowledge can only be gained through firsthand experience. However, they are still a thoroughly physical phenomenon.
To provide further support for this position, we can once again deploy the argument from mind-brain unity. In this particular case, the argument takes the following form: we can know that qualia are not non-physical entities because the perception of qualia can be altered by physically altering the brain.
For example, there exists a condition called pain asymbolia which can be produced by brain damage. Patients with this condition lose no sensory perception – they can tell the difference between heat, cold, touch, and various other sensations – but what they seem to lose is the emotional response to pain (Feinberg 2001, p. 4). This condition can be induced deliberately, by surgically severing nerve connections in the brain, to treat patients who are experiencing incurable pain and are deep in misery and depression. After the operation, they are invariably much more cheerful, and say things such as, “The pain is the same, but I feel much better now” (Damasio 1994, p. 266). What have these people lost if not their “pain” qualia?
Another condition which demonstrates the link between qualia and the brain is known as synesthesia. In people with this condition, the senses have “crossed wires” – sensations normally experienced in only one sensory modality are experienced in additional modalities as well. For example, many synesthetes see colors whenever they hear sound; when they listen to music, they perceive a continual explosion of colors like a fireworks display in their mind’s eye. Others read in color, perceiving every letter, number or symbol they see as having a distinct and vivid color associated with it, regardless of the actual color of the print. Still others may taste shapes, experiencing sensations of texture and shape associated with a given flavor (Ramachandran and Hubbard 2003). In one recently reported case, a professional musician identified as E.S. has a version of synesthesia in which tonal intervals in sound she hears are consistently linked to specific tastes; she uses this ability to perform the complex task of tone-interval identification in her music significantly faster than non-synesthetic musicians can (Beeli et al. 2005).
What is the cause of this cross-linking of qualia? One hypothesis is that we are all synesthetes at birth, but as most of us grow, the extra neural connections between the senses are pruned away. Adult synesthetes would merely be those people who retain these connections into maturity. Whether this is correct or not, one fact is certain: synesthesia can be inherited. The condition runs in families; approximately one-third of synesthetes report having a family member who also possesses the condition (see http://www.synaesthesia.uwaterloo.ca/genetics.htm). This would seem difficult for a non-materialist theory of qualia to explain. Are souls inherited? Does the condition of your parents’ soul influence the condition of yours?
As the above examples show, qualia are connected to the structure of the brain, and can be altered by changes to that structure. It therefore follows that qualia have a material basis. Purely non-physical qualia, existing entirely in a non-physical mind, could not be so affected.
Granted, this does not explain how a given sensory input is linked to a specific quale (why does red look red and not blue or green, or for that matter, why doesn’t it “look” high-pitched, squeaky, or sour?), what I call the mapping problem. The answer to this is unknown at present. It may be that we will never know; perhaps no mind can ever fully conceptualize itself in this way. However, it is also not beyond the realm of possibility that future advances in understanding may solve this, and I remain optimistic. (Though the idea of explaining how a series of neural firings produces the subjective sensation of “red” may seem incredible, the scientific understanding of the brain is truly in its infancy. We probably do not yet possess the right framework even to know what questions to ask.) However, in either case, there is no reason to introduce the idea of a soul. No dualist hypothesis actually explains qualia in a way that a materialist hypothesis does not. Instead, dualism merely muddies the waters, adding additional levels of complexity and mystery without actually explaining anything.
The next difficulty that a materialist framework of mind must confront is the problem of free will. Are our actions in some sense “up to us”, or are we merely pawns of forces beyond our control? The answer to this question, whatever it may be, is highly significant, because it has repercussions beyond the rarefied fields of neurology or philosophy. Namely, only in a universe where there is free will can moral responsibility exist. It makes no sense to say that what a person did was wrong unless they could have chosen to act differently.
At this point, advocates of dualism often present what they see as an insuperable dilemma for any materialist conception of free will. In a materialist worldview, everything that exists is ultimately matter and energy, both of which obey a set of precise physical laws. If there is nothing supernatural that can defy these laws, then every interaction that occurs, on every level, is ultimately reducible to them. The reason I feel a given emotion or desire is because of a series of neurons firing in my brain; the reason those particular neurons fired and not others can be explained in terms of chemistry and electromagnetism; those factors can in turn be explained as a result of the precise arrangement of molecules inside those neurons; and so on.
But this seems to present a problem. After all, the arrangement of molecules in my brain can be explained by postulating that it evolved to that state from an earlier state in accordance with the aforementioned physical laws; that state was in turn dictated by a still earlier state; and so on. If we extend this causal chain far back enough, we seem to reach the conclusion that every action I will take during my life was predetermined by causes that were in effect before I came into existence. Extend this argument still further and we apparently reach the conclusion that everything that would ever happen in the history of the universe, including each event in each of our lives, was unalterably fixed from the moment of the Big Bang. The seeming consequence is that choice is an illusion – there was and is no possibility of things turning out any differently than the way they actually did.
This position, which is known as hard determinism, is unpalatable to many people, and understandably so. Most people – and I admit that I include myself in this category – value the idea that the future is not fixed, that we can exert some control over it by our choices; and more importantly, that our decisions are in some sense our own and not simply the result of a mechanistic chain of causes stretching beyond the beginning of our individual existence. Of course, just because we dislike the idea of hard determinism does not necessarily prove it false. But a worldview that could accommodate the idea of free will and moral responsibility would be far more attractive to many people, and advocates of theistic dualism often assert that theirs is such a worldview.
Before exploring this claim, however, it will prove beneficial to back up and examine the concept of free will itself. The key question is this: What does it mean to say that a decision was freely willed? Many people would say that if our decisions are completely determined by prior causes, they are not free and thus we cannot be held morally responsible for them. But if our decisions are not completely determined by prior causes, how is that an improvement? If a given action was not determined by prior causes, that can only mean that it happened at random. This position, known as libertarianism (though it is unrelated to the political philosophy of the same name), does not seem to restore the moral responsibility that hard determinism denies. After all, if our decisions are random, that means that they happen for no reason at all, and again it cannot be said that we have any control over them, any more than we can control the outcome of a pair of tumbling dice.
Is there a gap between determinism and randomness where free will can reside? It is hard to see what third option there could possibly be: either an event was caused or it was not, and that certainly seems to exhaust all the possibilities. This dilemma seems to suggest, at first glance, that the very concept of free will may be incoherent.
However, I believe there is a solution to this paradox that is compatible with materialism. To see how this is possible, consider a thought experiment I call the “prediction machine”. The prediction machine is a construct that is informed of a choice you are currently facing and takes into account all relevant background information, up to and including the exact state of every subatomic particle within your brain, to infallibly predict what your decision will be. If your decisions are not free-willed and are the result of a mechanistic chain of cause and effect, it is logically possible to build a working prediction machine. By contraposition, if it is logically impossible to build a prediction machine, then hard determinism must be false.
Though this concept alone may not seem illuminating, it can be used as the foundation for a powerful insight. Using that foundation, I will now advance a conclusion that may seem audacious. For clarity’s sake, I will state the conclusion before explaining the reasoning that led to it: it is logically impossible to build a prediction machine, at least in any world that obeys the same physical laws as ours does.
Why is this? Consider the classic thought experiment called “Maxwell’s demon”. Maxwell’s demon is a tiny being that controls an atom-sized gate between two reservoirs of gas of equal temperature. When the demon sees a slightly faster-moving (i.e., hotter) atom approaching from one side, he opens the gate to let it through; when he sees a slightly slower-moving (and therefore colder) atom approaching from the other side, he also lets it through. Otherwise, he keeps any approaching atom on the same side. In this way, the demon could seemingly create a temperature differential between the two sides – thus reversing entropy and creating useful energy – without doing any work. This is in apparent violation of the Second Law of Thermodynamics.
The solution to this paradox is not philosophical, but practical. If Maxwell’s demon were omniscient and could perceive the temperature of each approaching gas particle without having to interact with it in any way, then he could indeed violate the Second Law of Thermodynamics. But in our world, this is impossible. The only way to tell how rapidly an approaching atom is moving is to bounce a photon off it, a process we usually refer to as seeing, and it turns out that the increase in entropy that making such an observation entails counterbalances any decrease in entropy the process of sorting atoms could produce, causing the system as a whole to obey the laws of thermodynamics.
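The entropy bookkeeping in this resolution can be made concrete with a toy simulation. The sketch below is purely illustrative, not real thermodynamics: the per-measurement entropy price (`MEASUREMENT_COST`) and the “sorting gain” ledger are invented quantities chosen only to show the shape of the argument. The demon really does create a temperature difference between the chambers, but once every observation is charged for, the ledger never comes out ahead.

```python
import random

random.seed(1)

# Toy Maxwell's demon: two chambers of gas "particles", each just a speed.
# The demon admits fast particles to the left chamber and slow ones to the
# right. MEASUREMENT_COST is a hypothetical entropy price charged for each
# observation, standing in for the cost of bouncing a photon off an atom.

N = 2000
left = [random.gauss(1.0, 0.3) for _ in range(N)]
right = [random.gauss(1.0, 0.3) for _ in range(N)]

MEASUREMENT_COST = 0.7   # assumed, illustrative only
measurements = 0
sorting_gain = 0.0       # toy measure of the order created by sorting

for _ in range(5000):
    # A particle approaches the gate from a random side; the demon measures it.
    side, pool = random.choice([("L", left), ("R", right)])
    i = random.randrange(len(pool))
    speed = pool[i]
    measurements += 1
    if side == "R" and speed > 1.0:       # fast particle: admit to the left
        left.append(pool.pop(i))
        sorting_gain += speed - 1.0
    elif side == "L" and speed < 1.0:     # slow particle: admit to the right
        right.append(pool.pop(i))
        sorting_gain += 1.0 - speed

mean = lambda xs: sum(xs) / len(xs)
print(f"mean speed, left chamber:  {mean(left):.3f}")   # hotter than it started
print(f"mean speed, right chamber: {mean(right):.3f}")  # colder than it started
ledger = sorting_gain - MEASUREMENT_COST * measurements
print(f"entropy ledger (order gained minus measurement cost): {ledger:.1f}")
```

With any per-measurement cost of this magnitude, the ledger comes out negative: the demon pays more in observation than he gains in sorting, which is the point of the standard resolution.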
A similar reason explains why prediction machines are impossible. The reason such a machine cannot be built is because, in the real world, prediction requires measurement, measurement requires interaction, and interaction unpredictably changes the system being interacted with. These unavoidable perturbations make the accurate prediction of a free-willed person’s actions impossible.
As a more concrete example, imagine that you were presented with a simple forced-choice test – say, a red card and a blue card placed side-by-side on a table, of which you were told to choose one and pick it up. In order to prove that free will does not exist and your decisions are merely the result of mechanistic cause-and-effect laws, you are hooked up to the prediction machine prior to doing so, and that machine is tasked with determining in advance which card you will pick. Assessing the relevant characteristics of every subatomic particle in your head, it makes a prediction about what your choice will be.
But the mere act of making that prediction invalidates it. By measuring the state of your brain, the prediction machine has changed that state, and its prediction may now turn out to be wrong. (This can be conceptualized in mechanistic terms – by probing the relative levels of neurotransmitters, electric potentials of key neurons, and so on, it may have changed these qualities and thus altered your choice – or it can be conceptualized in higher-level terms – knowing the machine is trying to outguess you, you alter your choice. Either way, it works out to the same thing.) The only way for the accuracy of the prediction to be ensured is to run the prediction machine again, to recalculate what your decision will be based on the new, changed state of your brain; but in so doing, that state is again changed, and so the second prediction is likewise invalidated. It might still be right, but it can no longer be guaranteed to be right.
The consequence should be obvious. No matter how many times the prediction machine is run, it will never produce a prediction whose accuracy can be guaranteed. This has nothing to do with practical limits on the resources of the machine’s builders. As long as it has to obey the laws of physics, infallibly accurate prediction is impossible, no matter how much background knowledge it possesses. Such a machine may make predictions that are often correct, but it cannot make predictions that are always correct. In short, human behavior is predictable on a statistical level, but not on an individual level. And this is a core component of what is usually meant by the commonsense understanding of the term “free will”.
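The distinction between statistical and individual predictability can be illustrated with a toy model of the prediction machine. Everything here is an assumption made for the sake of illustration: a single hidden “bias” variable stands in for the agent’s entire brain state, and a fixed probability (`PERTURBATION`) that measurement disturbs that state stands in for physical back-action. The agent is perfectly deterministic given its state, yet the machine’s accuracy tops out well short of certainty.

```python
import random

random.seed(42)

# Toy prediction machine for the red-card/blue-card choice. The agent's
# choice is fully determined by a hidden bias, but the machine must measure
# the bias to predict it, and (an assumed, illustrative rate) each
# measurement has some chance of perturbing the very state it just read.

PERTURBATION = 0.15  # chance that measuring flips the agent's eventual choice

def run_trials(n):
    correct = 0
    for _ in range(n):
        bias = random.random()                        # hidden brain state
        prediction = "red" if bias > 0.5 else "blue"  # machine reads the state
        if random.random() < PERTURBATION:            # measurement back-action
            bias = 1.0 - bias
        choice = "red" if bias > 0.5 else "blue"      # deterministic given state
        correct += (prediction == choice)
    return correct / n

accuracy = run_trials(100_000)
print(f"prediction accuracy: {accuracy:.3f}")  # well above chance, below 1.0
```

In this toy world the machine is right about 85% of the time – far better than chance, exactly as statistical predictability requires – but no amount of extra background knowledge closes the gap to 100%, because the residual error is introduced by the act of measurement itself.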
In proposing this definition, I do not seek to postulate a third type of action, one that is neither caused nor uncaused. Human decisions have causes (or motivations, if one prefers); that much should be obvious. We have good reason to believe that the basis of the mind is a materialistic one, which if true means that libertarian free will cannot exist. I view this as a good thing, because in my opinion libertarianism destroys the possibility of moral responsibility even more thoroughly than hard determinism does. Even in a world of hard determinism, there is the possibility that people who harm others can be rehabilitated through punishment, but in a world where human actions are fundamentally random, there is no reason whatsoever to believe that such treatment would work.
As I said, I am not seeking to wedge a third alternative between the options of decisions being caused and decisions being uncaused. Instead, I argue that the option that decisions are caused should actually be properly viewed as two separate options – caused and predictable, and caused but unpredictable. I further argue that hard determinism should be seen as equivalent to the former, while the latter is what should be meant by the term free will. If decisions cannot, even in principle, be predicted ahead of time, I hold that it is meaningless to label them deterministic in the hard sense.
At this point, some additional clarifications are needed. Although unpredictability is a necessary component of free will, it is not a sufficient one. After all, if the argument given above is correct, then infallibly predicting the behavior of any sufficiently complex system is impossible, for the same reasons. But it is absurd to say tumbling dice have free will, just because their behavior is unpredictable. Instead, I argue there are other conditions that must also be met for a decision to be considered free, all of which I believe to be firmly rooted in common sense. In addition to being caused but unpredictable, a free-willed decision must be:
- Self-directed. An action is free only if it is performed by an agent who consciously intended it to turn out a specific way.
- Not coerced. A decision is not free if overwhelming external force was applied to it by another agent with the aim of influencing it toward a specific outcome.
- Informed. A decision is not free if it is made in ignorance of the likely consequences.
I believe that this definition fits the common usage of free will: acts performed by a conscious person that arise from that person’s nature, that have causes and motivations, that are potentially largely predictable, but not completely predictable regardless of how much background knowledge one possesses. In a materialistic world, this type of free will is eminently possible, and in no way denies individual moral responsibility.
How could free will come into being? By far the most feasible answer involves the evolutionary process that created the human species. After all, free will is a highly adaptive property. A living creature without free will, or some equivalent decision-making capability, would necessarily be guided purely by preprogrammed instinct. This can work so long as that creature never encounters anything other than the limited range of situations it is programmed to deal with, but if it is faced with a situation that does not fit the assumptions of its programming, it will be unable to respond effectively and may well die. (For an excellent example of how instinctive programming can produce a creature unable to deal with novel situations, consider the sphex wasp.) By contrast, a free-willed living being would stand an excellent chance of responding appropriately no matter what type of situation it is faced with, rather than becoming inert or entering an endless loop, as the sphex wasp does. This could conceivably be a powerful selective advantage that evolution would favor for living beings, such as the ancestors of humans, that inhabited complex and unpredictable environments.
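The sphexish failure mode can be caricatured in a few lines of code. This is purely my own toy model of the famous experiment: a fixed stimulus-response rule with no memory of how many times it has already fired.

```python
def sphex(cricket_moved):
    # Instinct: whenever the cricket is out of position at the burrow
    # entrance, re-inspect the burrow before dragging the cricket in.
    return "inspect burrow" if cricket_moved else "drag cricket inside"

# An experimenter who nudges the cricket during every inspection makes
# the trigger condition true again each time, so the routine loops forever:
actions = [sphex(cricket_moved=True) for _ in range(5)]
assert actions == ["inspect burrow"] * 5
```

A more flexible agent would need some way to notice that the same response keeps failing and try something else, which is exactly the open-ended capability the paragraph above claims instinct alone cannot supply.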
However, the requirement that free-willed acts can only be performed by a conscious agent brings us to the final and possibly most difficult conundrum of all – the source of consciousness itself. How is it that we are aware of ourselves as autonomous individuals? Who or what is the observer, the “I”, that resides within each of our minds and does the actual perceiving? How is it possible that we can “step back” and examine our own mental functioning with a brain made up of neurons, even though no individual neuron possesses such an ability?
Although a full answer to this question is beyond the scope of this essay, and indeed beyond the scope of human knowledge at this time, one thing is certain: the theists who assert that it is logically impossible that consciousness could arise from non-consciousness, or who make similar claims, are wrong. Simply because no individual nerve cell or electrochemical reaction possesses the property of consciousness, it does not follow that a great number of them, suitably arranged, cannot possess this property. In much the same way, just because no single brick possesses the property of being a house, it does not follow that an arrangement of many bricks cannot possess that property. There are many systems where the arrangement and interaction of simple components can produce new, complex properties and behaviors that no individual component has.
This is the concept from the field of complexity theory called emergence. Emergent phenomena are those which spontaneously arise from the interaction of simpler components, producing new levels of complexity with new properties that do not exist in any of the components taken individually. For example, the properties of a protein emerge from the interaction of the amino acids that comprise it. The properties of a flock of birds or school of fish emerge from the behavior of the individual animals in it. The properties of a stock market emerge from the actions of the individual traders that make it up. In all cases, studying the behavior of the individual components of the system in isolation will probably not provide insight into the origin of behaviors of the system as a whole, and the behavior displayed by each individual component is much simpler than the behavior of the system. Consciousness is almost certain to be this type of phenomenon, and as such, any attempt to isolate a single source or point of origin for it is very probably doomed. All our understanding of neuroscience weighs against the possibility of a “Cartesian theater”, a single center of consciousness in the brain where all the disparate streams of information processing come together like a film playing on a screen. Instead, consciousness is very probably an emergent property of the entire ensemble of neurons in the brain, taken as a whole.
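One compact illustration of emergence (my choice of example, not the essay's) is Conway's Game of Life. Each cell obeys the same two trivial local rules, yet a "glider", a pattern that crawls diagonally across the grid, emerges from their interaction; no single cell moves, and no single rule mentions motion.

```python
def step(cells):
    """One Game of Life update. The only rules: a cell is alive next
    generation if it has exactly 3 live neighbours, or if it is alive
    now and has exactly 2."""
    neighbours = {}
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = (x + dx, y + dy)
                    neighbours[key] = neighbours.get(key, 0) + 1
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in cells)}

# The classic glider pattern (x = column, y = row, y grows downward):
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After 4 generations the whole pattern has travelled one cell
# down and to the right -- behaviour present in no individual cell.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```

The glider's "travel" exists only at the level of the whole configuration, just as, on the emergentist view, consciousness would exist only at the level of the whole ensemble of neurons.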
Granted, this does not truly explain how consciousness emerges. But then, that was never my intent. These issues are at the heart of what it means to be human, and possibly the most fundamental mysteries we will ever confront. They have been studied and debated by scientists and philosophers for centuries and in all likelihood will continue to be studied for centuries to come. I do not claim to have explained in comprehensive detail how such things come to exist. Instead, it is my aim to show that these phenomena are compatible with an atheist’s worldview – that we have reason to believe that they can exist in a material world, even if we do not know exactly how. In this respect, materialism and dualism are on equal footing. Once that is established, additional arguments such as the argument from mind-brain unity tilt the balance in materialism’s favor. Qualia, free will and consciousness, mysterious though they may be, do not provide soul-based hypotheses any comfort.
After all, what better explanation can dualism provide for any of these things? If we explain qualia, free will or consciousness by saying “the soul does it”, what more do we know than when we started? Not only does postulating a soul explain nothing, it explicitly cuts off the possibility of later improving our understanding, since supernatural phenomena by definition follow no rules that we can understand. By contrast, if we stick to the naturalistic view that so far is supported by all available evidence, the amazing success neuroscience has so far had in unraveling the workings of the mind gives us reason to believe we will find out even more in the future.
To attribute the mysteries of the mind to an immaterial soul is a natural mistake to make. Throughout human history, people have employed this “God of the Gaps” type of reasoning: whenever we come across something we do not understand, attribute it to the action of a supernatural force. If the soul is to be added to the list of things explained away by this method, then I am content, because every other entity that has ever been on that list has ultimately been shown to have a thoroughly naturalistic origin and explanation. Day and night, the nature of the moon and the stars, the cycles of the seasons, the cause of disease, the motions of the planets, the weather, the source of disasters such as earthquakes – we now know that all these things and many more are natural phenomena, with no gap-dwelling gods needed to explain them. The supernatural explanations for these things have melted away in light of the much more satisfactory scientific explanations, and I am confident that the source of the mind will eventually join them.
The God of the Gaps was a common method of reasoning when people did not know better, but now we do know better. We have a real way of understanding the world now, a better way. While there are still many things we do not know, we no longer have any reason to attribute anything to magical forces, except for the lingering remnants of supernatural thinking that hang about us like vapor. It is time for us to set aside these last traces of our superstitious past; we have the wisdom and the maturity to face the truth about who and what we are. There is nothing demeaning about the material basis of our minds – our brains are truly marvelous instruments, whose power has enabled us to do many amazing things, and it is time we started giving credit where credit is due. Attributing the accomplishments of our minds to a ghost in the machine of our heads is an idea that can no longer be supported, and therefore should be set aside once and for all.