# How to Think Critically III: Randomness

I’ve written before that the human mind is a pattern-seeking engine. We are wired by evolution to seek cause-and-effect relationships in the world around us, and when such relationships exist, we often find them. The problem arises when there are no causal relationships to be found. People in such situations often develop what we call superstitions: erroneous beliefs about which actions bring about desired outcomes. In other words, we do not have an instinctive grasp of the null hypothesis; we are reluctant to infer randomness.

What follows is an array of 1024 random bits – not pseudo-random, as computers usually produce, but truly random. They came from the HotBits server, which generates a steady stream of bytes by timing the radioactive decay of atoms of cesium-137. According to our best understanding of quantum physics, radioactive decay is a fundamentally probabilistic process – in other words, it is random. Even a hypothetical superintelligence with total knowledge of every fact in the universe up to that moment could not predict whether the next bit in HotBits’ byte stream will be a 0 or a 1.

```
0001110001011000000110000010100111010100000111010010001011110111
0000000101010100010101100110001001100001111111110110001001110110
1011100010111110000100110000111010101001111001101001111000100011
0111011110111011111001110010011111110111001001111110001101001011
1001101101110110001101100000111110010110001101111100000000011001
1011011111101010011010100001001101101110011110101110111010111110
0110110100010110000110001100110111111110101001000110001111101000
0111011001011101011010101000110111100101001000110010110100001010
1101111101011100001110110100011011000000011000111100011000010011
1001101101101011110100110110010000101011011110000111011010000111
1111000110101001100011010111111001001111000000110111100100100001
0010000110010100000010011111000100100000101100100110100100101001
1101010101000001110010010001010001111100101011001011011100100011
1100010010000111010010010001100110010000100111010111101110111011
1011011010001100101011110010111010001100010000101011100011010111
1010000101001000011000010010111001011111111011000010000111010100
```

Though this bitstream contains plenty of alternation, it also contains a surprising amount of clustering – including several runs of 7, 8 or even 9 consecutive 0s or 1s. “Surprising” is, of course, a relative term in this context. In statistical terms, it is not surprising at all. But research has shown that human beings consistently underestimate how much repetition and clustering a random sequence will contain. We are not good random number generators. (In a psychic staring experiment, Rupert Sheldrake claimed to have obtained positive results, but detailed analysis showed that his supposedly random sequences of staring and not staring were biased against repetition – precisely the way we might expect people to guess.)
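A quick simulation makes the point that long runs are expected, not anomalous: in a sequence of 1024 fair bits, the longest run of identical bits is typically around log₂(1024) = 10. This is a minimal sketch, not a formal test; the function name and trial count are my own choices.

```python
import random

def longest_run(bits):
    """Length of the longest block of consecutive identical bits."""
    best = cur = 1
    for prev, b in zip(bits, bits[1:]):
        cur = cur + 1 if b == prev else 1
        best = max(best, cur)
    return best

random.seed(0)
# Generate 1000 independent sequences of 1024 fair bits, matching the
# size of the HotBits stream above, and measure each one's longest run.
trials = [longest_run([random.randint(0, 1) for _ in range(1024)])
          for _ in range(1000)]

# The average longest run comes out near 10, so runs of 7, 8 or 9
# like those in the stream above are entirely ordinary.
print(sum(trials) / len(trials))
```

If anything, a 1024-bit sequence with *no* run longer than 5 or 6 would be the suspicious one – that is the signature of a human trying to fake randomness.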

This human tendency is known as the gambler’s fallacy: the belief that independent events can influence each other’s probabilities. It is aptly named, for casinos make millions of dollars each year because of it. People who reason that a coin which has come up heads ten times in a row is “due” to come up tails are committing this fallacy; so are people who reason that a roulette wheel which has landed on red several consecutive times must be on a “hot streak” upon which they can capitalize. These are independent events with no “memory” of what preceded them. In a random stream of bits, the chance of the next bit being 0 is always 1/2, regardless of how many consecutive 0s came before it.
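The independence claim is easy to check empirically. The sketch below (simulation sizes are my own) flips a simulated fair coin a million times and records what happens immediately after every streak of five heads. If the gambler's fallacy were correct, tails would be "due" at those moments and heads would appear less than half the time. It doesn't.

```python
import random

random.seed(1)

after_streak = []  # outcomes observed immediately after 5 heads in a row
streak = 0
for _ in range(1_000_000):
    flip = random.randint(0, 1)   # 1 = heads, 0 = tails
    if streak >= 5:               # the previous five flips were all heads
        after_streak.append(flip)
    streak = streak + 1 if flip == 1 else 0

# The fraction of heads following a five-head streak stays right
# around 1/2: the coin has no memory of its streak.
print(sum(after_streak) / len(after_streak))
```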

But the gambler’s fallacy has more subtle effects than staring experiments or roulette wheels. Due to the human reluctance to infer randomness, people often conclude that events which coincide must be causally connected. Sometimes this is a valid assumption; other times, it almost certainly is not:

> The maths tells us that in respect of probability, people are much like tosses of a coin: we will see occasional clusters, even of cancer.
>
> Tell that to the residents of Wishaw near Sutton Coldfield where a huge mobile phone mast overshadowed 35 of the houses, until someone pulled it down one bonfire night.
>
> …”I was going through chemotherapy,” she says, “when I started meeting my neighbours in the hospital.
>
> “It was then we started looking at the mobile phone mast and now we know for certain that it is responsible.”

In any random sample of data, we will inevitably see coincidental clustering. And the larger the pool of data, the larger the clusters will tend to be. In a country the size of the U.K., it is virtually certain that cancer cases, even rare forms of cancer, will cluster together purely by chance in at least a few places. Of course, it is certainly possible that a local cluster of cancer cases has a common cause, and that possibility should not be overlooked.
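To see how chance alone produces striking clusters, here is a sketch with deliberately made-up numbers (they are illustrative, not real epidemiology): 5,000 towns of 1,000 people each, where every person independently has a 1-in-1,000 chance of a rare illness.

```python
import random

random.seed(2)

# Illustrative numbers only: 5,000 towns, 1,000 residents each,
# and an independent 1-in-1,000 chance of a rare illness per person.
n_towns, town_size, p = 5_000, 1_000, 1 / 1_000
cases = [sum(random.random() < p for _ in range(town_size))
         for _ in range(n_towns)]

# The expected count is just 1 case per town, yet across a pool this
# large, some towns show 5 or more cases by pure chance - clusters
# that would look alarming to anyone living in them.
print(max(cases), sum(c >= 5 for c in cases))
```

Every one of those simulated clusters is guaranteed to be coincidence, because the model contains no causes at all – yet a resident of one of those unlucky towns would have no way to feel the difference.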

The problem arises when detailed scrutiny turns up no common cause, yet sufferers, understandably in search of an answer, persist in believing there must be one, refusing to consider the possibility of random chance. The result, as we see here, is that innocent parties often get blamed by people who believe there simply must be some connection – such as concluding that cell phone towers cause cancer, despite the fact that these towers do not emit ionizing radiation, the kind of radiation that can damage DNA. A similar explanation was probably at play in medieval witch trials, where any unusual run of bad luck would often be blamed on black magic practiced by the town outcast. It is also seen today in so-called “healing” shrines, where the few spontaneous recoveries among vast crowds of suffering pilgrims are attributed to the miraculous powers of the shrine, rather than to the inevitable occasional chance event.

Though the gambler’s fallacy is indeed a fallacy, there is a superficially similar phenomenon that is real: it is called regression to the mean. This principle states that a statistical sample whose characteristics deviate significantly from the mean of its parent population is likely to be followed by a sample that lies closer to that mean.

The difference between the gambler’s fallacy and regression to the mean is subtle, but important. Consider again the random bit stream printed above. If I draw a sample from this pool of data and it contains a large imbalance of 0s over 1s, say, it is more likely than not that my next sample will have closer to an even split. That is regression to the mean. But it does not follow that any individual item in the sample is more likely to be a 0 or a 1 (that is the gambler’s fallacy) – it only follows that large groups of such items are, in a statistical sense, predictable, even though individual items are not.
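The contrast can be demonstrated in a few lines. This sketch (sample sizes and thresholds are my own) draws 64-bit samples, waits for a lopsided one with at least 40 zeros, and then examines the next, independent sample: its first bit is still 50/50 (no gambler's fallacy), but its overall imbalance is, on average, far smaller than the one we conditioned on (regression to the mean).

```python
import random

random.seed(3)

def sample(n=64):
    """An independent sample of n fair random bits."""
    return [random.randint(0, 1) for _ in range(n)]

next_first_bits, next_imbalances = [], []
while len(next_imbalances) < 1_000:
    s = sample()
    if sum(s) <= 24:               # lopsided: at least 40 zeros out of 64
        t = sample()               # the following, independent sample
        next_first_bits.append(t[0])
        next_imbalances.append(abs(sum(t) - 32))

# Individual bits have no memory: the first bit of the follow-up
# sample is a 1 about half the time...
print(sum(next_first_bits) / len(next_first_bits))
# ...but the follow-up sample as a whole is, on average, much less
# lopsided than the 8-bit imbalance we conditioned on.
print(sum(next_imbalances) / len(next_imbalances))
```

No individual bit "compensates" for the earlier imbalance; the extreme sample simply isn't repeated, because extreme samples are rare to begin with.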

Failure to understand this principle has given rise to many superstitious notions. Consider the “Sports Illustrated Jinx” – the belief that athletes who appear on the cover of Sports Illustrated often suffer a slump in performance thereafter, and that the former causes the latter. In all likelihood, this is an example of regression to the mean. An athlete who appears on the cover of Sports Illustrated is probably there because they have lately had an unusual – statistically unlikely – run of good performance. Since such lucky streaks are the exception and not the norm, regression to the mean tells us that the athlete’s subsequent performance is likely to be closer to their average, which is perceived as a decline.

In all these cases, human beings are brought to grief by inferring hidden causes where we should infer random chance. Our lack of an innate grasp of statistical principles leads us into this fallacy. By treating chance as the default explanation, as it should always be, and learning to ask for evidence of an underlying cause before dismissing randomness, we could shed many unnecessary superstitions.

Other posts in this series: