I think most people don’t respect individual differences in intelligence and rationality enough. But some people in my local community tend to exhibit the opposite failure mode. They put too much weight on a person’s signals of explicit rationality (“Are they Bayesian?”), and place too little weight on domain expertise (and the domain-specific tacit rationality that often comes with it).
This comes up pretty often during my work for MIRI. We’re considering how to communicate effectively with academics, or how to win grants, or how to build a team of researchers, and some people will tend to lean heavily on the opinions of the most generally smart people they know, even though those smart people have no demonstrated expertise or success on the issue being considered. In contrast, I usually collect the opinions of some smart people I know, and then mostly just do what people with a long track record of success on the issue say to do. And that dumb heuristic seems to work pretty well.
Preach! I’m so glad Luke wrote this — I get alarmed when people talk about “rationality” as this magic superpower that allows you to figure out any question better than the experts on that topic. I’d describe rationality instead as the skill/habit set that makes you better at figuring questions out *for a given level of expertise* (i.e., holding expertise constant).
I’ve long had the sense that Luke was on the right side of this issue, and it’s good to see him saying so explicitly. Good to see Julia’s there too. Unfortunately, this is something I think the LessWrong community as a whole has a huge problem with. The “rationality as a magic superpower” thing is something you actually find in Eliezer Yudkowsky’s writings, most openly in some of his fiction, though he’s pretty explicit that the fiction represents his actual views. He actually uses the phrase “acquire magical powers” to describe something he thinks his readers should be able to do.
I agree with Eliezer on the abstract point, that The Rules(TM) of science don’t describe how an ideal reasoner would operate. But I don’t think there’s any hope of us non-ideal reasoners parlaying that into superpowers. It’s something that’s occasionally useful to realize when someone tries to use The Rules as a trump card in a valid scientific controversy (say, over evolutionary psychology). But if you try to abandon time-tested principles of science altogether, instead of doing far better than mainstream science you’re going to do far worse.
And Eliezer, frankly, has some downright crackpot views, like the time he claimed that “Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup.” He based this claim on the work of Gary Taubes, who reaches it by grossly misrepresenting what mainstream nutrition experts were actually saying.
Since actually moving to the Bay Area, I’ve encountered a lot more examples of the LessWrong community’s crackpot problem in person. There was the MIRI event where MIRI deputy director Louie Helm got up and declared his opinion that doctors were a fake profession. There’s the number of people I’ve met trying to treat psychological problems through methods whose scientific validity ranges from dubious (hypnotism) to downright pseudoscientific (“neurolinguistic programming”). And there are other examples so strange I don’t even know how to explain them in a blog post.
Anyway, here’s hoping Luke manages to lead the community around on this one.
Update: I ended up writing a follow-up to this post: Why I may not donate to MIRI/CFAR in the future.