I’m having a delightful time at Rationality Camp, and I have two LessWrong posts to recommend today: “Absence of Evidence Is Evidence of Absence” and “Conservation of Expected Evidence.” (Read these first; I’m gesturing at them below, not summarizing them.) They start with this quote from Robyn Dawes’s Rational Choice in an Uncertain World:
Post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed… I believe we are just being lulled into a false sense of security.”
Acts of sabotage would definitely have made Warren think a Fifth Column was more likely to exist. So if he also claims that their absence causes him to revise his estimate upward, he should already have started with a higher estimate. Imagine you thought there was a 50% chance a Fifth Column existed. If an act of sabotage occurred in the next month, you’d revise your guess up to 90%. If no sabotage occurred, you’d still revise up, to 60%.
In that case, the craziest thing you’re doing is claiming you only think there’s a 50% probability now. No matter what happens during the month, you’ll end up thinking it’s at least 60% likely, so you have no excuse for not thinking it’s at least 60% likely today. In fact, your guess should be even higher: if you started at 60%, you’d know you could only revise upward during the month, so you should keep raising your guess until you reach a point where you genuinely don’t know which direction the next piece of evidence will push you.
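Here’s the incoherence written out as a quick sketch. Conservation of expected evidence says today’s probability of a hypothesis H must equal the average of the posteriors you’d hold after seeing the evidence E or not seeing it, weighted by how likely each outcome is. Writing p for whatever probability you assign to sabotage actually occurring this month (a symbol introduced just for this illustration):

```latex
\begin{align*}
P(H) &= P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E) \\
0.5  &= p \cdot 0.9 + (1 - p) \cdot 0.6 \\
0.5  &= 0.6 + 0.3\,p \\
p    &= -\tfrac{1}{3}
\end{align*}
```

No probability can be negative, so the three numbers are jointly incoherent: you can’t honestly start at 50% while being certain you’ll end the month at 60% or above.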
This is a pretty easy example. It points out how absurd this kind of reasoning is, but it doesn’t feel very threatening to most of us. It’s hard to identify with Warren, not only because he’s saying something dumb, but because he sounds like he’s probably racist. That makes him a bad match for our own image of ourselves. A later exercise offered a better example for catching rationality-camp types in the same kind of error. We got a set of prompts to discuss and evaluate:
At a party, you meet a stranger who believes in astrology. After talking to you long enough to build up a sense of your personality, they offer to guess your birth month and year. You agree to the test, and the astrologer guesses correctly. You reply that you still don’t believe in astrology because you’ve read a lot of studies showing that people can’t pick out their own newspaper horoscopes; plus, maybe they secretly looked you up on the internet beforehand.
What I said during the discussion, based on the LessWrong posts above, is that none of those caveats changes the fact that you now think astrology is marginally more likely to be true than you did before the correct guess. If a failed guess would have counted as evidence against their theory, a correct guess has to be conceded as evidence for it.
Admitting that the other side scored a point isn’t the same thing as saying that I’ve changed my mind or even that the marginal change in my beliefs was enough to get me to think more about astrology or spend time looking for new evidence. I haven’t lost anything by conceding, and it doesn’t mean that I’m more likely to find the next piece of paranormal evidence compelling.
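To put a toy number on “marginally more likely,” here’s a rough Bayes update with made-up figures (the one-in-ten-thousand prior and both likelihoods are my assumptions for illustration, not anything from the exercise). Say a correct month-and-year guess comes up one time in 120 by luck, and, generously, half the time if astrology really works:

```latex
\begin{align*}
P(A) &= 10^{-4}, \quad P(\text{hit} \mid A) = 0.5, \quad P(\text{hit} \mid \neg A) = \tfrac{1}{120} \\
P(A \mid \text{hit}) &= \frac{P(\text{hit} \mid A)\,P(A)}{P(\text{hit} \mid A)\,P(A) + P(\text{hit} \mid \neg A)\,P(\neg A)} \\
&= \frac{0.5 \times 10^{-4}}{0.5 \times 10^{-4} + \tfrac{1}{120} \times 0.9999} \approx 0.006
\end{align*}
```

The odds on astrology just jumped by a factor of sixty, which is a genuine concession, and yet the belief is still well under one percent, nowhere near a threshold where I’d act differently.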
Sometimes, being honest means updating in the ‘wrong’ direction. In the long run, you should expect your beliefs to drift toward the correct answer, but there’s no reason to expect them to approach it monotonically (always moving in the same direction).
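A quick simulation makes the non-monotonicity vivid (a toy example of my own, not from the posts): an honest Bayesian watching flips of a coin that really is biased ends up confident of the bias, but the path there dips every time an unrepresentative flip comes up.

```python
import random

random.seed(0)

# Toy setup (my own illustration): the coin is actually biased,
# P(heads) = 0.7, and we weigh hypothesis H ("bias = 0.7")
# against the alternative "the coin is fair".
P_HEADS_IF_H = 0.7
P_HEADS_IF_FAIR = 0.5

posterior = 0.5  # start genuinely unsure
path = [posterior]
for _ in range(50):
    heads = random.random() < P_HEADS_IF_H  # H is in fact true
    like_h = P_HEADS_IF_H if heads else 1 - P_HEADS_IF_H
    like_fair = P_HEADS_IF_FAIR if heads else 1 - P_HEADS_IF_FAIR
    # Bayes' rule: each flip moves the posterior, sometimes downward
    posterior = (posterior * like_h) / (
        posterior * like_h + (1 - posterior) * like_fair
    )
    path.append(posterior)

# The trend is toward 1 (the truth), but every tails drags it back.
print([round(p, 3) for p in path[::10]])
```

Run it with a few different seeds: the posterior reliably climbs toward certainty, but almost never without reversals along the way.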
We can have better arguments if both players understand this fact, so that no one treats it as a humiliating defeat when you concede that a piece of evidence meshes better with your opponent’s position than with your own. That’s to be expected sometimes, no matter who is right. Concede the point, and then explain why it’s not enough evidence to push you across a critical value where your beliefs and actions change. There’s nothing wrong with saying, “That’s not the kind of proof I’d need. It favors your side, but my prior expectation that you’re wrong (for these reasons…) is strong enough that it doesn’t change my expectations very much.”
Talking this way lowers the stakes of each new piece of evidence, so we can consider each one reasonably instead of treating any revision of our estimates as a coup de grâce.