Eli of Rust Belt Philosophy thinks my LARP Your Way to Truth advice is a snare and a delusion. Let me post some excerpts, but you should pop over and read the short takedown piece in full for fairness:
To begin, I cannot begin to imagine why Libresco thinks that curiosity only happens in the context of suspended disbelief. Perhaps she herself has that sort of cognitive limitation – though I doubt it – but it’s hardly something that the rest of us have to deal with. I also have a really hard time believing that hypothetical scenarios are best evaluated by imagining (or attempting to imagine) the effects of those scenarios on one’s own life. I mean, let’s say that we’re talking about the potential existence of sentient alien species on other planets – is that really something that is at all analyzable in terms of its effect on my day-to-day existence? If so, I fail to see how. Most importantly, though, Libresco totally misses the fact that humans are optimized for aggressive, raised-hackles, high-stakes reasoning. Asking us to reason with a constructive curiousness – to reason, that is, in a state of suspended disbelief or make-believe – is tantamount to asking us not to reason at all.
[Eli cites the recent study on choice blindness and says…] These poor people inadvertently did more or less exactly what Libresco is asking for: they didn’t bother to “think about all the reasons that it can’t be that way” but instead “just accept[ed] it as a premise” and went from there.
Once more, in summary: human reasoning faculties are practically weaponized in their forms and functions. But because it’s unpleasant to be at the pointy end of the stick, we rarely turn these weapons on ourselves, which means that we rarely force ourselves to achieve a rational response. We do this least of all when we assume the truth of an idea and engage in mere low-stakes, just-go-with-it play-acting – when, for instance, we want to build a relationship, or “curiously” explore an imagined world, or defend an idea that we attribute to ourselves (even if, as in the experiment, that attribution is fictitious). In short, the evidence indicates overwhelmingly that we reason best when we reason competitively, and that competitive reasoning (just like competitive anything else) usually only takes place when there’s someone else to compete with.* Libresco’s approach would have us abandon the competitive approach in favor of something warmer and fuzzier, but that’s literally the exact opposite of what the evidence indicates we should do. Our warm-and-fuzzy reasoning is so embarrassingly bad that it allows us to switch positions in a matter of minutes, and that’s what she thinks we should use to “think honestly”? Seriously?
Let me do a quick clarification. I don’t think the LARPing strategy or the Leave a Line of Retreat exercise should be used to prove something to yourself. Plenty of false worlds can be fun/thought-provoking to LARP in. My first leap of faith was the result of thinking about the Young Wizards mythology and discovering that, if it was true, I was pretty sure I wanted to offer it my loyalty. That realization meant I was willing to take the Wizard’s Oath; it didn’t blind me to the fact that I wasn’t offered wizardry as a result.
The LARPing game is a prelude to the data gathering/evaluating process. We don’t usually know our opponent’s arguments as well as we think, so imagining the argument/worldview from the inside sharpens our vision and helps us make sure that we’re addressing the actual strengths and weaknesses of our opponent.
It might be possible to overwhelm your ugh fields with a frontal assault, but I’m not very good at it. I’d love to have Eli post, here as a guest or on his own blog, about how he trains that strength and applies it. I’d also like to see the citations that show that the weaponized approach to thinking is the most effective. I’m pretty sympathetic to the LessWrong caution against treating arguments like soldiers.
One problem is that when I engage my defensive strength, it’s easy to lapse into thinking I’m defending my current best estimate of truth instead of truth. That makes it hard to notice I’ve erred and joyfully cede the field. Another problem is that a weaponized defense is better at keeping me unmoved than at persuading others. I don’t need to provide a strong argument for my own position; I just need to keep showing that the other side is weak.
For example, when I read a study whose results I doubt or don’t like, I tend to flip to the methodology and look for anything that falls short of ideal procedures. Then I’m safe, because the study isn’t trustworthy. I bet that a really good study would have validated my position, and if that ideal study is logistically unfeasible, ah well. This kind of thinking is what drives confirmation bias. If Eli got to pick, I think he’d rather fight a Christian interlocutor who’s curious instead of just aggressive.
I’m fairer to my opponent and serve myself better if I get curious instead of angry. Ok, so this study is flawed. Is it flawed in a way that makes it useless, or can I think about which direction it’s biased in and by how much? If I actually had to research this problem, instead of critiquing papers in seminar, what would the proper study look like? If it’s unfeasible, how could I most accurately glean what I want to know? Do I want to actually try to run some of this on Mechanical Turk?
Aggression is satisfied when I beat a person. Curiosity is only satisfied when I beat myself and the whole world and know something that I didn’t know before.