On Free Will III: Outsmarting the Prediction Machine

Clearly, what is immaterial in the human mind can influence the physical world, or our acts of will and understanding would be without effect. If our will is free these physical effects are not wholly predictable.
http://www.leaderu.com/ftissues/ft9511/revessay.html

A merchant in Baghdad sent his servant to the market. The servant returned, trembling and frightened, and told the merchant, “I was jostled in the market, turned around, and saw Death.”

“Death made a threatening gesture, and I fled in terror. May I please borrow your horse? I can leave Baghdad and ride to Samarra, where Death will not find me.”

The merchant lent his horse to the servant, who rode away to Samarra.

Later the merchant went to the market, and saw Death in the crowd. “Why did you threaten my servant?” he asked.

Death replied, “I did not threaten your servant. It was merely that I was surprised to see him here in Baghdad, for I have an appointment with him tonight in Samarra.”
http://www.tatanka.com/reading/humanity/parable/samarra.html

We have all heard stories such as the parable of Death in Samarra, or the tragedy of Oedipus, who was fated to kill his father and marry his mother and went on to do precisely that, despite his knowledge of the prophecy and his attempts to forestall it. These stories frighten us, and rightly so, by raising the specter of a fixed future: the fear that each of us has an inexorable fate which, no matter how hard we run from it, we only reach all the sooner.

Advocates of dualism sometimes claim that something very like this must be the case if materialism is true. After all, the argument goes, if we have no supernatural souls exempt from the principle of cause and effect, then our brains must be nothing but machines obeying the laws of physics, and if that is the case, then however complicated they are, their operation can in principle be predicted. Given complete knowledge of the state of the world, plus knowledge of the laws of physics governing how that state evolves, one could predict events arbitrarily far into the future. In this view our future behavior would be just as predictable in principle as the landing spot of a baseball thrown at a certain angle upward with a certain speed. And isn’t that a terribly gloomy, disheartening vision? Don’t we want to be more than thrown baseballs?
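
To see what that kind of predictability looks like in practice, here is a minimal sketch (Python, toy numbers, air resistance ignored) of the baseball calculation the dualist has in mind: given the launch speed and angle, the landing spot follows from a single formula.

    import math

    def landing_distance(speed_m_s, angle_deg, g=9.81):
        """Horizontal range of a projectile launched from ground level,
        ignoring air resistance: R = v^2 * sin(2*theta) / g."""
        theta = math.radians(angle_deg)
        return speed_m_s ** 2 * math.sin(2 * theta) / g

    # A ball thrown at 30 m/s and 45 degrees lands about 91.7 m away,
    # every time, regardless of who throws it or how they feel about it.
    print(round(landing_distance(30.0, 45.0), 1))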

This intuition pump is what Daniel Dennett, in his book Elbow Room, calls the “Malevolent Mindreader”, an entity who always knows in advance exactly what you are going to do and uses that knowledge to foil you every time. Playing chess against a Malevolent Mindreader is a doomed proposition, since he knows exactly how the game will end. A variant is the “Nefarious Neurosurgeon”, who uses his knowledge not just to predict but actively to control you, typically by surreptitiously implanting electrodes into your brain that cause you to think, believe and decide just as he wishes, while leaving you the illusion of being in control. Are these sinister figures really waiting in the wings for us if materialism is true?

Let us explore this proposition in more detail with a thought experiment. Suppose that it is the year 2096, and the mad scientists at the Materialism Stereotypes Institute, thanks to a large government grant, are about to build the world’s first fully functional Prediction Machine. This machine is an extraordinarily sophisticated piece of hardware, possessing a wide variety of sensors that allow it to gather every conceivable piece of data about its environment and a computer brain programmed with all the laws of physics. The purpose of the Prediction Machine is to survey in complete detail the state of a person’s brain, then extrapolate that information to infallibly predict that person’s future actions, thus proving that we are nothing more than complicated but deterministic machines ourselves. Later versions may be able to predict actions days or years in advance, to make Oedipus or Death-in-Samarra scenarios possible, but PM Mark I is merely a proof of concept and will only predict decisions a few minutes into the future.

Nevertheless, this is enough to prove the point the mad scientists of the MSI are trying to establish. To demonstrate their sinister powers, they recruit test subjects who agree to play several rounds of rock-paper-scissors with the Prediction Machine. If their hypothesis is correct and our actions are predictable, PM Mark I will always win, since it will infallibly anticipate what sign a person will throw and then throw the sign that beats it.
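
For concreteness, the protocol can be sketched in a few lines of hypothetical Python (the scan-and-predict step is a stand-in; in the scientists' plan it would return the subject's actual choice): the machine predicts a sign, plays the counter-sign, and tallies its win rate against the one-in-three expected by chance.

    import random

    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def machine_throw(predicted_sign):
        """Play the sign that beats the predicted one."""
        return BEATS[predicted_sign]

    def run_trials(n=1000):
        wins = 0
        for _ in range(n):
            # Hypothetical scan-and-predict step. In the scientists' plan it
            # would return the subject's actual choice; here it is modeled as
            # no better than a guess, which is the outcome the post describes.
            prediction = random.choice(list(BEATS))
            actual = random.choice(list(BEATS))
            if machine_throw(prediction) == BEATS[actual]:
                wins += 1
        return wins / n

    print(run_trials())  # hovers around 1/3, the chance baseline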

The first test subject is hooked up to the Prediction Machine, which scans his brain and makes its prediction. Then, on a count of three, they each throw their signs simultaneously.

The human throws rock. The Prediction Machine throws paper. The mad scientists grin and exchange high-fives.

But even at the Materialism Stereotypes Institute, they are well aware that repeatability is vital for science. In order to prove that the Prediction Machine’s victory was not due to chance, they continue the test.

In the second round, the human throws scissors. The Prediction Machine throws scissors as well.

The mad scientists’ grins fade. They put the test on hold, pop open the machine’s casing, check all its connections and recalibrate its software to make sure nothing has gone wrong. But it seems to be in perfect working order, so they dismiss the result as a fluke and continue the test.

In the third round, the human throws rock. The Prediction Machine throws scissors.

Seeing their chances at a Nobel Prize slipping away, the mad scientists give their machine the most thorough going-over they possibly can, but quickly discover that their efforts are to no avail. They are absolutely certain that there is nothing wrong with the machine, and yet it cannot win any more often than would be expected by chance. The more trials they run, the clearer this becomes. After all the millions of dollars and decades of research that went into building it, the Prediction Machine is no more accurate than a device that chooses signs at random. Like John Henry defeating the pile-driver, humanity has triumphed over the machine, and, it seems, retained its free will.

Something has gone wrong here, but what? Why doesn’t the Prediction Machine work?

To see what the flaw is, consider a similar project: the quest to build a Prediction Machine for the stock market. Taking into account the current prices and past trends of every publicly traded stock, this machine would infallibly predict which stocks would rise and which would fall, allowing anyone who used it to effortlessly make a killing.

Such a plan could never work, and a moment’s thought will reveal why: the mere existence of this machine would itself be an influence on the stock market that would have to be taken into account. If you used the knowledge it gave you to buy stocks, that would be a new causal factor acting on the market, which would cause other traders to react differently than they might otherwise have done. In order for its original prediction to be accurate, the machine would have to predict this and incorporate that knowledge into its forecast. But that would change what the original forecast was going to be, thus changing what stocks you would buy in response, thus changing other traders’ reactions, thus forcing the machine to alter its original prediction yet again… and so on, in an endless recursive loop, as the machine tried in vain to construct an accurate model of the world that included itself in that model. To include itself in its own model, it would have to include itself containing that model in its model, which would have to include itself containing that model containing that model in its model, in an infinite iteration. This is clearly impossible. It would be like trying to store a box inside itself.
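
The regress can be made vivid with a toy sketch (hypothetical Python; the class names and the one-stock "world" are inventions for illustration): a model that is required to contain a complete copy of the world, including itself, never bottoms out.

    class World:
        def __init__(self):
            self.market_state = {"ACME": 42.0}  # toy stand-in for "everything out there"
            self.prediction_machine = None      # the machine is itself part of the world

    class Model:
        """A 'complete' model must describe the world; but the world contains the
        machine, and the machine contains this model, which must contain..."""
        def __init__(self, world):
            self.market_state = dict(world.market_state)
            self.model_of_self = Model(world)   # the box inside the box, forever

    try:
        Model(World())
    except RecursionError:
        print("A model that must contain itself never bottoms out.")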

Now, there is one way out of this paradox: use the machine to predict the stock market, but do not act on its predictions in any way, not even to tell anyone else what they are. In that case, the machine’s predictions would not alter the state of the market, and so would not force it to revise its own forecast. You would be the proverbial observer behind glass, always knowing in advance what the market would do, but utterly unable to act on that knowledge in any way at all, because the mere attempt to do so would render it untrue. In such a scenario, a Malevolent Mindreader could theoretically exist, but would be forever cut off from the rest of the world, unable to interact, an observer only.

But when it comes to the human brain, not even this strict separation can be maintained. With the stock market, one can learn information about a particular stock without actually affecting it. But imagine if this were not the case. Imagine if the only way to learn a stock’s price was to buy a share of it. Then the machine would have to take its own existence into account, in which case infallible prediction truly would be impossible. By terminating the infinite recursion of self-prediction at some arbitrary depth, one could force the machine to make a prediction, but it could never be more than an educated guess, and would never be the dreaded statement of unavoidable destiny enshrined in tragic literature.
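
One way to picture why truncating the regress yields only an educated guess (again a toy sketch, with an arbitrary "overreaction" factor standing in for how traders respond to a published forecast): fold the market's reaction to the forecast back into the forecast and iterate. If the reaction is strong enough, the successive forecasts never settle down, so the answer depends entirely on the arbitrary depth at which the machine gives up.

    def market_reaction(forecast, base_price=100.0):
        """Toy rule: traders overreact, pushing the price away from the forecast.
        The 1.5 factor is an arbitrary 'overreaction' strength."""
        return base_price - 1.5 * (forecast - base_price)

    forecast = 105.0  # the machine's first, naive forecast
    for depth in range(1, 9):
        forecast = market_reaction(forecast)  # fold the reaction back into the forecast
        print(f"depth {depth}: forecast = {forecast:.1f}")
    # The forecasts swing ever more widely (92.5, 111.3, 83.1, ...), so the
    # machine's answer depends entirely on where it stops.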

Human brains are like the latter, not the former, type of stock market. To exactly describe the state of a person’s brain at a given time, one would have to measure the exact electrical potential of each neuron, the exact number of neurotransmitter molecules released from every synapse, and so on. But acquiring this level of detail would require an extremely sophisticated scan of the brain, down to the level of individual molecules.

In some worlds it might be possible to do this without changing the brain’s state in any way, but ours is not such a world. In our world the theory of quantum mechanics reigns, which says, among other things, that all events have a component of irreducible chance. QM has been used and misused in many ways when it comes to free will, but it has one uncontroversial implication that is relevant here. That implication is that the mere act of observing something unavoidably changes it. For example, to see something, you have to bounce photons off it. On the scale of macroscopic objects such as baseballs, the influence of this is so small as to not significantly affect the accuracy of our predictions. But on the scale of the very small, such as an atom or a molecule, an impinging photon represents a significant disturbance indeed – and scanning the brain at the level of detail that the Prediction Machine needs requires us to descend to this level. Merely by scanning a person’s brain, the Prediction Machine inevitably changes that brain’s state, forcing it to take its own influence into consideration when making its prediction; and this leads straight back to the problem of infinite recursion that reared its head when we tried to predict the fluctuations of the stock market. This is why the quest of the mad scientists at the Materialism Stereotypes Institute was doomed to failure from the beginning. If they had only listened to us compatibilists, we could have told them that in advance and saved them a lot of work.
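
A crude numerical illustration of this point (a sketch only: the logistic map stands in for any system whose future depends sensitively on its exact present state, and the tiny offset stands in for the disturbance introduced by the scan): even a perfect simulator, fed a state that the act of scanning has nudged by one part in a trillion, parts company with the real system after a modest number of steps.

    def step(x, r=3.9):
        """Logistic map: a standard toy example of a system whose future
        depends sensitively on its exact present state."""
        return r * x * (1.0 - x)

    true_state = 0.4
    scanned_state = true_state + 1e-12  # the scan itself nudged the system slightly

    for t in range(1, 201):
        true_state = step(true_state)
        scanned_state = step(scanned_state)   # the "perfect" forecast from the scan
        if abs(true_state - scanned_state) > 0.1:
            print(f"Forecast and reality part company after {t} steps.")
            break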

What is the point of all this? Advocates of dualist free will claim that theirs is a model where even complete knowledge of the state of the universe at time T would not make it possible to infallibly predict what a person would do at time T+1. But we have just seen that materialism has exactly the same consequence. And this means that our actions are genuinely not determined in the way that the flight of a baseball is.

Some readers may consider this a logical sleight of hand. Even if no one could possibly have known in advance that you would do X, the argument goes, doesn’t that knowledge still exist “somewhere”, in some hidden dimension of determinism? I urge them to reject this conclusion. The separation of “X was determined to happen” and “It was knowable in advance that X would happen” is illicit, because there is, by definition, no conceivable test that could differentiate between them; no experiment could show that one holds true but not the other. Therefore, despite the apparent difference in wording, these two propositions express exactly the same idea. If one is false, so is the other.

We can now see why there is no need to fear that we live in the world of Oedipus. His is a world of fatalism, a world of Prediction Machines, where certain events will happen regardless of what you choose. But compatibilism is not fatalism. Compatibilism means that certain events happen because of your choices. In a fatalist world you cannot use knowledge of the future to alter the future, but in a compatibilist world you can, because the mere act of providing information creates a new causal factor that alters what would have happened in the absence of that information.

We need not fear these lurking intuition pumps; on closer inspection, they simply evaporate. The Malevolent Mindreader cannot exist. Neither can his comrade the Nefarious Neurosurgeon, because precisely controlling a person’s will through external influence would require perfect knowledge of the state of that person’s mind to know how it had to be changed. Like many nightmares, these two seem tangible only as long as they keep to the shadows. Daylight reveals that they are without substance.

Next: What does it mean to make a choice? Can human beings be responsible for what they choose in a universe where every event is subject to the law of cause and effect, and could we have chosen differently in any situation? Stay tuned…


  • Mike K

    Well, this is very interesting. You have described in a lucid manner the problem with making very fine measurements, a problem that Heisenberg was to recognize and formulate into his famous Uncertainty Principle. But in real life we are not making such measurements when we exercise what we believe is our free will. The inability of a machine or an observer to calculate the precise movements of atoms in our heads does not alter the fact that the movement of those atoms is determined by antecedent causes. The fact that we are physically unable to measure at that level of detail does not mean that level of detail is not Deterministic. Even our attempt at making a measurement is itself Deterministic and becomes a part of the whole Deterministic chain of events in the world line of the event being investigated. The quantum factor does not insulate us from Determinism; it merely gives us a probability that we might choose A over B. Whichever we choose is still causally determined. I don’t see how the measurement problem in any way isolates us from Deterministic necessity.

  • BlackWizardMagus

    But the quantum factor is necessarily random, not determined. That’s the part that has made it hard to swallow for physicists: that they simply can’t know some things. How Adam will make this into free will is not something I know either, but we are not automatons, for the sheer fact that there is random chance involved. And while that chance itself might merely mean there is one less biochemical reaction in one neuron, that might itself be enough to cause a drastic outcome, such as whether a desperate person decides to make the jump or not.

  • Mike K

    Yes BWM, the apparent random nature of quantum effects certainly does disturb physicists, but we still have much to learn. From our present understanding, these effects appear random but they turn out to be probabilistic. It is the quantum probability that most (but not all) of the electrons travelling to your monitor will produce the desired image. If their behaviour were entirely random, all you would see is noise. In any event, whether they be determined, probabilistic or random, it doesn’t rescue free will for us. For if our choice is to be seen as based on a probable or random quantum event, then it cannot be our free choice. The other problem here is that quantum effects are generally considered to be way too small to have any effect at the neuron level anyway, so I guess the point is moot. I don’t wish to think we are mere automatons either, so I’m very interested in your view and also to see how Adam develops this argument.

  • BlackWizardMagus

    This is not something I have thought about a lot, mostly because I see it as self-defeating; if we don’t have free will, then you are not really asking a question you care to know about anyway, and this series of posts is nothing more than a necessary effect, etc. I don’t think it’s logically possible to not have free will. The quantum effect I merely brought up to show that there is something that prevents it from being perfectly mechanical. But, no, they are not probabilistic on the quantum scale; literally, if you try to pinpoint an electron, it cannot be done. It is completely random. That’s why there are electron “clouds”; they tend to be more in one place than another, but no pattern exists. However, electricity works not so much because there is a greater probability of this or that, but because we shove so many millions of electrons through that we simply must get the desired effect.

    I’m waiting for the end of this as well because this was the very topic that got me to email Adam several months ago and get to know him better. I never fully understood his stance, so I want to see how this works out.

  • Christian Y. Cardall

    I think I agree with the (anonymous?) author on the big picture—that is to say, I favor compatibilism—but I have three critiques of this post.

    First, it’s not clear to me that the infinite regression of observers necessarily holds. I am not persuaded that as a matter of principle a model cannot self-consistently include itself. (Such a model might never have been constructed, and it might be difficult to imagine, but that’s far from a non-existence proof.)

    Second, it’s not clear to me that quantum mechanics is of any relevance to human cognition. If the operations of the brain depended on quantum interference of small numbers of molecules this might be so, but it seems instead that neural connections depend on exchanges of boatloads of neurotransmitters—a phenomenon that I would guess qualifies as macroscopic, and describable by classical mechanics. I think raising quantum mechanics at all simply muddies the water and unnecessarily opens the door to the crackpot Dancing Wu Li Masters types.

    Third, in one of the last few paragraphs an argument is made that contradicts compatibilism, which I thought the author espoused. Two statements are coupled by the conclusion “If one is false, so is the other.” One of these allegedly falsified statements is “X was determined to happen.” But my understanding is that compatibilism does not assert that determinism is false; but rather, that there is a meaningful notion of free will in spite of determinism.

  • Gathercole

    The rock-paper-scissors analogy is fundamentally flawed. The author is right that a prediction machine (or any system) can’t include a predictive model of itself (Gödel’s theorem), but in the rock-paper-scissors case, it doesn’t have to, because there is no feedback loop. For there to be a feedback loop between the brain and the PM, the state of the brain would have to affect the state of the PM, and the state of the PM would have to affect the brain. In this hypothetical example, the state of the PM does not affect the subject’s brain (the brain is just affected by the knowledge that the PM exists, knowledge which does not change based on what’s happening inside the PM), and so for the PM to predict the brain’s future state is, by the parameters of the example, possible.

  • BlackWizardMagus

    The quantum aspect does hold, actually. This machine is not looking for probabilities or approximations, which is what mechanics is good for. It is looking instead for the absolute exact state of every single atom, electron even, in the entirety of the brain; where it is and what it’s doing. Right there, Heisenberg’s Principle should become apparent; no machine could ever know that much detail, because the more closely you measure one quantity, the harder it is to measure the other. Furthermore, to measure something, you have to be able to “see” it with something; photons, electrons, whatever, but you have to bounce something off of it to be able to detect it. That energy influx would, on a quantum level, slightly change things. 99% of the time, it probably wouldn’t be enough to matter, to really tip the scales, but every so often it would be, and it would be impossible to predict. Because it IS quantum chance, the machine couldn’t merely predict it; it would have to physically remeasure it, which would again cause a slight change. Ultimately, there would be a slight amount of uncertainty.

    Of course, this doesn’t prove free will, just an inability to predict or control every action of the brain perfectly. As to that last question of yours: I think, I don’t know for sure myself but I think, that compatibilism is saying that the idea of determinism that says there is a definite relation between stimuli, brain state, and outcome is true, just not that it’s 100% of the picture. That free will is somewhere in there, basically the ability to push for small deviations. But I’m not a student of this entirely, so hopefully Adam can clear it up.

  • http://www.spinozist.us The Spinozist Mormon

    (I commented under my full name above, Christian Y. Cardall, but now appear as “The Spinozist Mormon.”)

    BlackWizardMagus, I am not denying the uncertainty principle. I am suggesting that it is irrelevant to human cognition.

  • BlackWizardMagus

    It is irrelevant to cognition, but not irrelevant to the prediction of it. The very act of measuring changes the values one is measuring slightly. While this can be ignored when we are merely approximating, or when we just don’t care if it’s wrong sometimes, it cannot be ignored if we are truly looking to be right, 100% of the time, for absolutely every thought that some test subject’s mind creates. It will slightly offset the original values, and the machine can only account for this with a new scan, which is a new offsetting of the values. It can be extremely accurate, contrary to Adam’s model, but not perfect. And the uncertainty principle in particular (I guess I should have started with that, but I figured my other point was received with similar skepticism) means it’s impossible to know the exact state, even if the scan itself didn’t affect the result. While the uncertainty is completely negligible when we are talking about a car, a brain scanner trying to be absolutely perfect simply can’t be; it can’t know where every electron is and also know where it’s going so it can predict an outcome using natural laws loaded into it. It just can’t. All it can do is approximate and arrive at a probability.

  • http://www.spinozist.us The Spinozist Mormon

    BlackWizardMagus, I don’t dispute the unpredictability of the mind, but I do dispute that it is necessary to invoke quantum mechanics to have unpredictability.

  • BlackWizardMagus

    It’s one of two reasons you can’t be that perfect in scanning the mind or any system for that matter; the influence of the measurer, and the inherent uncertainty that stems from quantum mechanics. It’s not really that one has to invoke it; it’s that one has no choice. The theory itself absolutely requires a fundamental aspect of chance. But you’re right; one can argue the point without that, and merely look at the influence of the measurer. Although, that influence might be on the near-quantum level.

  • No one of consequence

    I want to inject a bit about quantum mechanics and physics (my field of study).

    A summary of what I want to say is that centuries of philosophy about the material world is now partially out of date with modern math and physics. The good news is that the new math and physics is much better at supporting the thesis about predictability that Ebonmuse is arguing for.

    First: Some criticism. The brain/mind does not appear to depend on quantum mechanics as a source of uncertainty. The temperature is too high, and the number of atoms is too large. There is no plausible mechanism for quantum effects to generate free will at this time. Try not to believe too much in the speculation.

    Look at it this way: If your decisions were strongly driven by intrinsic randomness then you would be unable to think or act in a remotely rational manner — you would think and act spastically, and also observe this in others.

    But we do not need quantum mechanics or the uncertainty principle to derive the needed arguments! Classical approximations are sufficient. I will present a condensed version of the modern argument:

    Take a very very very small system: 3 or 4 bounded real numbers being the whole state of the system. If the equations for this system’s evolution are non-linear then you can easily get chaotic or turbulent behavior. This means that if you start with 16 digits of precision then after time T you have 8 digits of precision, after 2T you have 4 digits, after 3T you have 2 digits, and after 4T you have 1 digit of precision left. You quickly lose all precision in a finite time. So the 18th century idea of a clockwork universe — where knowing the initial conditions allows arbitrary predictions — is simply false. In practice the feedback cycles in your brain are going to produce the same difficulties as non-linear equations, and no mind reading computer can be any more accurate than a weather forecast. And this limit is fundamental — you cannot possibly build something to do qualitatively better. (Note that the most accurately known physical constant in the universe is only measured to about 13 digits of precision. So 16 digits is wishful thinking.)
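
    To make the digit-counting concrete, here is a toy sketch in Python (the logistic map is just a stand-in for a non-linear system with feedback, not a model of the brain): track how many decimal digits two almost-identical trajectories still share as time passes, and the count falls roughly linearly.

        import math

        def step(x, r=3.9):
            # Logistic map as a stand-in for any non-linear system with feedback.
            return r * x * (1.0 - x)

        a, b = 0.4, 0.4 + 1e-15   # two states agreeing to roughly 15 decimal digits
        for t in range(0, 61, 10):
            diff = abs(a - b)
            digits = -math.log10(diff) if diff > 0 else 16
            print(f"t = {t:2d}: about {digits:4.1f} matching digits")
            for _ in range(10):
                a, b = step(a), step(b)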

    What does this mean in practice? It means that I can never predict what I will do later, and no matter how closely I measured your brain, I can’t effectively predict what you will do. How well can you predict what you or someone you know well will do? A machine will not be able to do much better than that. It may be more honest, and may do better over short periods of time under experimental conditions, but it could never be anything like the thought experiments presented by philosophers and theologians.

    Also: The argument about modeling the effect of the predicting machine as part of the prediction is independent of quantum measurements affecting the measured system. Mathematically such feedback may or may not cause infinities or paradoxes. Resting the case on a logical regression here means resting it on mathematical unsolvability, which requires a theorem, not hand waving. The effect of feedback is that it adds to the complexity of the system, and complex systems of equations are more likely to become chaotic and turbulent. This feedback/complexity line of reasoning is still hand waving, but on more solid ground than appeals to paradox.

  • BlackWizardMagus

    Is this anything like the difficulty in mathematically working with a three-body system, where every order of magnitude change in accuracy produces wildly varying results? Read about that years ago…

    I’ll cede to your superior knowledge of the field, if you’ll allow me a few curiosities. First off, why is this so? Why is it that, hypothetically speaking, knowing absolutely every variable and absolutely every way they will interact is not enough to be able to definitively know the outcome, with no room for error? I understood your example, I think, but that seemed to be an explanation of what happens, not an explanation of why that happens. Secondly, while the brain itself is large, why would quantum theory not get involved? The actual interactions in the brain are astoundingly small, using only a few neurotransmitters going down hair-like projections (correct me if I’m wrong), so would quantum chance be quite small but still a factor? Finally, I do not see how the measuring machine would not automatically change the system. It is impossible to measure a system like this without interaction with it, and interaction, while it does not HAVE to change anything, cannot be completely devoid of the POTENTIAL to influence the outcome. Thus, it would affect things. Of course, I suppose the reason that that can’t be factored in before one starts goes back to the quantum aspect: that bouncing photons off of it is going to change the outcome on the most basic level, and if that change matters at all, so does the act of measuring.

    Anyway, this is not my field of study, although I have read things here and there. So I hope you’ll excuse my questions.

  • Alexander

    I’d like to point out that although atoms and molecules in the brain are very small compared to our human scale, they’re still very large in relation to scales where the uncertainty principle makes itself felt strongly (such as trying to determine where all electrons, protons, and such are, which would definitely be strongly perturbed by the uncertainty principle). So the uncertainty on the atomic/molecular scale is insignificant. If that weren’t the case, our cells’ molecular machinery wouldn’t even be possible. I think that in theory, one can still get a very accurate representation of the human brain on the molecular level; it would require massive data storage and some fantastical scanning technology, but there’s nothing that I can think of which forbids extremely accurate data collecting on the molecular scale. In fact we do that all the time while working with various nanomaterials which require accurate knowledge of where atoms and molecules go.

  • BlackWizardMagus

    Well, yes, I don’t doubt that. Like I’ve said, I think that Adam is underestimating science here; it would be entirely possible to make an extremely accurate prediction machine. I am just arguing that it can never be 100%, never ever ever wrong. I think quantum mechanics requires there to be a slight element of chance involved, but certainly nothing close to making prediction impossible.

  • Montu

    I feel like you’re fishing here. I’ve read the three posts thus far that comprise this argument, as well as your essay on Ebonmusings, “The Ghost in the Machine,” and throughout both, your arguments sound like they should lead to determinism as the natural conclusion. However, when it comes time to actually make your conclusion, you shift gears, and attempt to say that just because things are predictable doesn’t mean they’re predictable. Your argument becomes cloudy and disjointed when you begin to speak about compatibilism, simply because your evidence does not support your argument. I don’t understand the need to go all the way to quantum randomness as the grounds for free will, because that would insinuate that EVERYTHING is unpredictable, and can, in effect, be or do ANYTHING, because everything is made of atoms and thus would have the same randomness to it. However, we all know this not to be the case. My desk is a desk because there are determined principles that brought it to this state of being. Unless I, or someone else, break it down into something else, it will remain a desk. The same is true of the brain. It will remain a brain. Some things affect it, and cause it to act in a certain way, which is determined by its present state and the outside stimulus. If you re-read all the case studies you mentioned in “The Ghost in the Machine,” they all point to the same thing. Just because we can’t measure it does not deny that it is indeed deterministic, because, as you’ve mentioned, people act in predictable ways.

  • BlackWizardMagus

    Eh, please don’t put my argument onto Adam’s shoulders. The quantum thing has been something I’ve been pointing out mostly. It may or may not be correct, I’m no expert on it, so anyone is free to correct me. I don’t want anyone to think that just because I am agreeing with Adam and using this argument that he uses it at all. However, just to point out; no, your desk does not HAVE to be a desk. The chances are almost certain that it is, but quantum chance allows it not to be. Schrödinger’s Cat. But that’s more of a thought game than a meaningful analysis. I too am curious as to how this ultimately pans out though; I don’t know Adam’s final argument any better than you.

  • http://www.patheos.com/blogs/daylightatheism/ Ebonmuse

    The inability of a machine or an observer to calculate the precise movements of atoms in our heads does not alter the fact that the movement of those atoms is determined by antecedent causes.

    I recognize that, and I’m not arguing otherwise. In fact, I would argue that the reasons for our behavior have to be determined by antecedent causes – because the alternative is that our behavior is random, which not only does not grant free will, but denies it.

    If our decisions are in any measure randomly caused, there is always the risk that, in spite of all the reasons that move us, one day some randomness will twitch our strings the wrong way and we will suddenly be forced to ignore those reasons and do something we did not actually want to do at all. (Imagine having to occasionally roll a set of dice and make your decision from a chart based on how they turn up.) That’s definitely not free will. The fact that our decisions are made for reasons and also for causes is what gives us the stability and reliability of will needed to be truly free.

    But, at the same time, the fact that our decisions are made for causes does not imply that they are predictable. As I argued, they are in fact fundamentally unpredictable, and for this reason they should not be called deterministic. (I realize what sort of philosophical flak I’m likely to catch for asserting that some phenomenon can be wholly caused and at the same time not determined in advance. That is what I am arguing, nonetheless. Here I stand; I can do no other.)

    In this respect, I agree with the commenters who’ve claimed that quantum indeterminism operates at too small a scale to play a discernible role in the process of our cognition. I agree that it doesn’t, and lucky for us, because otherwise our decisions would be random and not free at all. But I do claim it would inevitably become relevant in any attempt to perfectly forecast human behavior. BlackWizardMagus put it perfectly, as far as I’m concerned:

    It is irrelevant to cognition, but not irrelevant to the prediction of it.

    To make my position clear, I don’t doubt that it’s theoretically possible to build highly accurate prediction machines. Indeed, even without the assistance of such a device we can often predict with high reliability the behavior of a close friend or someone we know well. My claim is that what is not possible is the construction of infallible prediction machines. In any non-chaotic macroscopic system, a precise description of boundary conditions allows that system’s behavior to be predicted arbitrarily far into the future (and even a chaotic system can in principle be predicted, if your knowledge is perfect – I recognize that acquiring knowledge of that degree of accuracy is hardly possible now and will probably never be possible, but I want to make a stronger point here). But no knowledge of boundary conditions, no matter how precise, enables the prediction of that macroscopic system called a human being.

  • No one of consequence

    This thread is about predictability of people as a physical system. Predicting physical systems is the definition of physics. We study what the limits are, both in principle and in practice. (Some of us also argue about quantum mechanics and consciousness for fun, but we don’t know much more about that than you would.)
    To BlackWizardMagus: asking “why?” is a good thing. As it happens, you need to study physics and math to understand the equations that answer “why?”. It took centuries for scholars to discover chaotic dynamics, so this blog comment won’t convey enlightenment. Why did it take so long to discover chaos theory? It only arises in systems that are complicated, and understanding the simple systems does not prepare one to see the change of behavior to chaos: “The main catalyst for the development of chaos theory was the electronic computer.” Scholars had to see the computational experimental predictions go haywire first, then they went and discovered chaos theory (which Henri Poincaré had glimpsed). Note the year: 1961. All early science and philosophy that talks about arbitrary predictive capability is wrong.
    I will respond to some hand waving speculation that followed my first post:

    Why is it that, hypothetically speaking, knowing absolutely every variable and absolutely everyway they will interact not enough to be able to definitively know the outcome, with no room for error?

    and even a chaotic system can in principle be predicted, if your knowledge is perfect – I recognize that acquiring knowledge of that degree of accuracy is hardly possible now and will probably never be possible, but I want to make a stronger point here)

    Hypothesizing infinite capacity does no one any good. Operationally, what do you mean? I will quickly explain why such statements have physical problems.
    If you knew all the digits of precision, where would you store them? Information storage requires mass-energy, and (a) you don’t have an infinite supply of matter, and (b) you can’t pack it too densely without forming a black hole. This has consequences:

    • Any system, e.g. a brain, contains a finite amount of information. But this finite limit is astoundingly huge.
    • Attempts to use a brain model to predict behavior require a world model. The required size and precision of the world model would grow as the prediction time is lengthened (adding more people, for instance).
    • Even knowing nearly all that information and running a nearly perfect simulation, your predictive precision will decay to zero in finite time. This is because, in complex systems, the errors increase exponentially.

    Again: The best measurements made by humans have less than 16 digits of precision.
    If you want to hypothesize about doing a perfect job, then you need to consider quantum theory. Quantum mechanics can be considered from the standpoint of information, which reveals:

    • The universe is not like the computer on your desk. In fact quantum computers are more powerful.
    • The universe is reversible, which means no information can be created or destroyed.
    • You cannot make copies of quantum information (this is the “no cloning theorem”). The simple argument is you would be “overwriting” the old information in the destination, and this erasure is irreversible and forbidden.
    • You can make pretty good copies, but this will disturb the system being observed. The best generic cloning machine of an unknown quantum bit is limited to 5/6th (or 83%) accuracy. That is not even one full digit of precision. This is a mathematical result and hypothesizing anything better would be supernatural.

    So the best you can possibly do is make an imperfect, disruptive copy of a part of the world and try to simulate this on a quantum computer. This will diverge from the source of the copy after a short time. You cannot collect better initial conditions past a certain point due to quantum theory. The whole classical concept of getting arbitrarily more accurate initial conditions to make predictions must fail. Hypothesizing such a situation is supernatural (i.e. you need to exist outside the physical universe, e.g. be a deity). Let me emphasize that this limit is not hand waving, it is not technically surmountable, it is derived from the best physical theories we have.
    Physicists can only model complex systems qualitatively; we cannot model a specific real system accurately. This is good enough to build airplanes, but so far it is not good enough to build fusion reactors. (Getting flying cars is an engineering/social problem — don’t blame the physicists.)

  • http://www.spinozist.us The Spinozist Mormon

    Ebonmuse, I agree that the distinction between predictability and determinism is useful, since it differentiates human knowledge of processes from the processes themselves (though, again, I would rather see the argument made in terms of the classical phenomena of chaos and complexity than enlist quantum mechanics).

    However I don’t see why it is productive to introduce a distinction between determinism and… well, you don’t give it a name, but describe it as “the reasons for our behavior have to be determined by antecedent causes.” That seems to be a definition of determinism. Why do you eschew the label of determinism, and yet call yourself a compatibilist, when the embrace of determinism is a defining feature of compatibilism? Is this an idiosyncratic move on your part, or do other philosophers (Dennett?) claim to be compatibilists while avoiding the label of “determinism”?

  • BlackWizardMagus

    No-consequence; well, I suppose this is something where I’ll have to either spend a lot more time researching it, or simply accept your point. Not that I think you’re wrong, I just don’t quite understand it all (although I have read about chaos theory previously).

    Mormon; I think Adam’s point here is that he accepts the part of determinism that says that stimuli–>outcome, but rejects the idea that that HAS to mean we are predictable machines, which is something dualists have imposed upon determinism. That’s the feeling I get, anyway.

  • Quath

    I can see that a human playing against the PM would have an incentive to try to beat it and maybe could come up with randomness as a result. But what about cases where people don’t care to out-predict the machine? For example, what if the machine tried to predict how people would vote? Knowing they have been read by a machine may do little to change their vote. (It may change some, but I think most people are too stubborn to change their vote just because they feel a machine knows their vote.) So we don’t have 100% certainty, but we have very good predictability.

    The PM could be generalized to a prediction machine. Each prediction of the future has a chance of changing that future, but some have less chance than others. Say the PM predicts an earthquake. There is little that we can do to change that, so there is a good chance it will happen. However, if it predicts an assassination, then there is a good chance it will not happen (if it is publicized).

    I think we should compare people to computers in this. For example, if I told a computer it would overheat in 10 minutes (due to the PM) and if the computer knew that overheating meant it should ramp up the cooling fan, it may turn on its fan and change the future. But does the computer have free will because it violated predictability?

  • BlackWizardMagus

    I don’t think that was the idea. It wasn’t that the person would TRY to beat the machine, it’s that the machine is limited by the laws of physics as to how accurate it could ever be, for three reasons:
    Chaos theory
    Quantum chance
    The interference the machine’s physical scanning would cause. We all think of scanning as something magical in nature, that it just happens, but really, it requires physical interaction, just like everything else. A police radar to trap speeders will have no effect on the car itself, but if you attempt to scan the location of every electron in the brain, the scan itself might move some of them, or influence them in some other way. Hence, it would cause error.

  • Quath

    The assumption is that chaos, quantum mechanics and scanning would change the outcome. It doesn’t change the outcome for a computer, so why assume it will change the outcome for us?

  • BlackWizardMagus

    Well, for one thing, no computer is anywhere near as complex as the human brain. That’s one of the points about chaos: philosophers used to think that problems like this were merely quantitative; that basically, as long as we got more and more advanced, everything would work the same. But it doesn’t. At a certain point, things start breaking down, and we start losing accuracy. Quantum effects would take place in a computer as well; we don’t have any computer-scanning machine just like this brain-scanning machine we are discussing. Scanning could change the outcome of a computer too, if we had such a thing.

    But still, the biggest aspect is probably the complexity of the brain. I don’t know the numbers exactly, but I know that the number of connections and neurons is several orders of magnitude greater than in any computer. Also, standard computers don’t run like human brains. I believe they HAVE built extremely primitive ones that do, but your standard computer does operate by simple signals. We still aren’t entirely sure why a series of signals suddenly becomes the spoken word “blue” or the sensation “pain”. Computers follow very exact rules, even ones with some form of learning capability. They don’t learn or think anything like we do.

  • http://www.patheos.com/blogs/daylightatheism/ Ebonmuse

    It doesn’t change the outcome for a computer, so why assume it will change the outcome for us?

    Well, for one thing, computers are specifically built to be insensitive to these kinds of minor alterations. They are digital devices, and any influence on them below their activation threshold will not trigger a change in their output.

    But the human brain is definitely an analog machine. Rather than being designed to be impervious to minor influences, it uses them opportunistically as input – just like this circuit designed by an evolutionary algorithm, which appears to exploit subtle environmental effects like the minor currents induced by nearby logic cells. If something of this nature were scaled up to the complexity of the human brain, who’s to say that it too wouldn’t become chaotic and unpredictable?

  • Void

    I feel like you’re fishing here. I’ve read the three posts thus far that comprise this argument, as well as your essay on Ebonmusings, “The Ghost in the Machine,” and throughout both, your arguments sound like they should lead to determinism as the natural conclusion. However, when it comes time to actually make your conclusion, you shift gears, and attempt to say that just because things are predictable doesn’t mean they’re predictable.

    I think the discrepancy here is the confusion of the word “caused” with the word “predictable”. The outcome of a fair dice roll is caused, but for the average human it is unpredictable, because predicting the outcome means assessing too many variables at once (the strength of the throw, any wind, the angle at which the dice will bounce off the table, etc.). Although it is theoretically possible to construct a machine that can account for every single variable in the human psyche, the act of finding the variables and predicting them adds another variable to the human psyche that the person uses when making the choice, and so the machine finds that and causes yet another variable, and so on ad infinitum.

  • ex machina

    Maybe I don’t understand anything here, but what about this: When you play chess with someone you do the same thing that this machine does, only on a very small and puny scale. You take all you know and apply it to what you think your opponent’s next move will be. You also know that your opponent will know that you know, and he may move differently because of it. This would cause an infinite loop in humans, but it does not – we will eventually make our move, perhaps stopping at some point in the logical progression and saying “screw it,” or possibly “doing what we were going to do in the first place.” Because we never are in that kind of infinite thought loop, wouldn’t that mean that the machine would not have to progress on into infinity when calculating the effect it would have on the observed?

    If the machine had perfect knowledge of your brain, it would know your knowledge of the machine would affect the outcome, but it would also know the point at which you would cease to consider that as a factor and act anyway, because of its perfect knowledge of your brain. As in the rock-paper-scissors example, the human player would know that the machine could predict his move. Eventually he would just throw something, not based on a logical progression or strategy, but on some other criteria. The machine would be able to predict this without putting itself in an infinite loop. Right?

  • mike

    It’s not even about having perfect knowledge of a system in order to predict it. It’s simply the problem of systems that are sophisticated enough to express self-reference. Self-reference is the basis of two of the most fundamental results in mathematics in the last century (Gödel’s incompleteness theorem and Turing’s halting problem).

    Turing’s halting problem states, roughly, that there is no computer program which takes as input another computer program and correctly determines what that program will output. It is not a matter of perfect knowledge, measurement error, nondeterminism, randomness, or observational influence. There simply cannot be such a universal computer program which works (or even does something half-sensible) for all (or even most) inputs. In short, it can’t exist because the world explodes when you feed the prediction program to itself as its own input.
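
    The flavor of the argument can be compressed into a few lines of hypothetical Python. Suppose someone hands us a function predicts_output(program, argument) that is claimed to return whatever program(argument) will return; the following contrarian program is built to falsify that claim.

        def make_contrarian(predicts_output):
            """Given any claimed predictor, build a program it must get wrong."""
            def contrarian(argument):
                # Ask the predictor what we are about to return, then do otherwise.
                predicted = predicts_output(contrarian, argument)
                return "rock" if predicted != "rock" else "paper"
            return contrarian

        # A toy "predictor" that always guesses "rock"; any other predictor fails
        # the same way, because contrarian consults it and then contradicts it.
        guess_rock = lambda program, argument: "rock"
        print(make_contrarian(guess_rock)(None))   # prints "paper", not "rock"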

    So computer programs have a no-prediction-machines theorem. So can we say that computer programs (which are well-defined, mechanistic, deterministic) have free will?

  • keddaw

    This post fails on the most basic philosophical premise. It assumes that the physical world exists and is ‘real’.

    There is no way of knowing, and no practical difference, whether this universe we see is a computer construct (a la Matrix), or even whether we ourselves are part of it (a la 13th Floor), which means an all-knowing machine could exist outside our universe and have full knowledge of the state of our brains. Such a machine would apply the laws of physics as it has set them up consistently and would have full knowledge of our future (re)actions. Thus the quantum problems disappear, as the external computer would have full knowledge of the exact state, location and velocity of every quantum particle.

    On a more realistic(?) level, our action at T+1 is wholly determined by the state of our brain at T, which is determined by the state of our brain at T-1, etc. etc. Thus, it is impossible to insert free will into this system at any point. Unless you believe the state of the brain at some point is not dependent upon the state of the brain just before (plus any external input), in which case you might as well posit God as interfering.

    Free will is a psychological construct evolution gave us to allow the various semi-independent systems in the brain to communicate and make decisions (eat here, move on, chase animal, etc. etc.).

    To prove the non-existence of free will using the Prediction Machine, you do not have to include the PM in the model – all you have to do is give the PM a situation where it has to make a decision or perform an action and see what it does. Then reset the machine and see if it does exactly the same thing. It will. Even if quantum effects are in play, its actions will be probabilistically as predictable as the half-life of a radioactive isotope. This, of course, assumes the PM is a perfect representation of the human brain that was scanned, but as we can’t reset a physical human brain, this is as close as we’ll get.

  • Snoof

    Interestingly enough, Ebon, I vaguely recall hearing something about the scenario you described with a stock market prediction machine actually happening. What happened was that the creators were moderately successful, making reasonable amounts of money, until suddenly the predictions stopped working. What had happened was that _other people_ had figured out how they were doing it, and built their own stock market prediction models, the cumulative effect of which was to make them all useless.

  • Chigliakus

    Snoof you’re correct on the first part, such machines do indeed exist, but wrong on the second part, they’re far from useless and are allowing the big players on the stock market to game the system. Such trading should be illegal, and something like a penny tax per trade could render such systems unprofitable. In their current form I don’t see these systems as adding any sort of value, they’re just leeches and in a worst case scenario could destabilize the market.

