The LessWrong/MIRI community’s problem with experts and crackpots

Luke Muehlhauser, executive director of MIRI, has a short but very important post up on the importance of expertise vs. intelligence and rationality:

I think most people don’t respect individual differences in intelligence and rationality enough. But some people in my local community tend to exhibit the opposite failure mode. They put too much weight on a person’s signals of explicit rationality (“Are they Bayesian?”), and place too little weight on domain expertise (and the domain-specific tacit rationality that often comes with it).

This comes up pretty often during my work for MIRI. We’re considering how to communicate effectively with academics, or how to win grants, or how to build a team of researchers, and some people will tend to lean heavily on the opinions of the most generally smart people they know, even though those smart people have no demonstrated expertise or success on the issue being considered. In contrast, I usually collect the opinions of some smart people I know, and then mostly just do what people with a long track record of success on the issue say to do. And that dumb heuristic seems to work pretty well.

Julia Galef, president of MIRI’s sister organization CFAR, agrees (from Facebook):

Preach! I’m so glad Luke wrote this — I get alarmed when people talk about “rationality” as this magic superpower that allows you to figure out any question better than the experts on that topic. I’d describe rationality instead as the skill/habit set that makes you better at figuring questions out *for a given level of expertise* (i.e., holding expertise constant).

I’ve long had the sense that Luke was on the right side of this issue, and it’s good to see him saying so explicitly. Good to see Julia’s there too. Unfortunately, this is something I think the LessWrong community as a whole has a huge problem with. The “rationality as a magic superpower” thing is something you actually find in Eliezer Yudkowsky’s writings, most openly in some of his fiction, but he’s pretty explicit that the fiction represents his actual views. He actually uses the phrase “acquire magical powers” to describe something he thinks his readers should be able to do.

I agree with Eliezer on the abstract point, that The Rules(TM) of science don’t describe how an ideal reasoner would operate. But I don’t think there’s any hope of us non-ideal reasoners parlaying that into superpowers. It’s something that’s occasionally useful to realize when someone tries to use The Rules as a trump card in a valid scientific controversy (say, over evolutionary psychology). But if you try to abandon time-tested principles of science altogether, instead of doing far better than mainstream science you’re going to do far worse.

And Eliezer, frankly, has some downright crackpot views, like the time he claimed that “Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup.” He based this claim on the work of Gary Taubes, who reaches it by grossly misrepresenting what mainstream nutrition experts were actually saying.

Since actually moving to the Bay Area, I’ve encountered a lot more examples of the LessWrong community’s crackpot problem in person. There was the MIRI event where MIRI deputy director Louie Helm got up and declared his opinion that doctors were a fake profession. There’s the number of people I’ve met trying to treat psychological problems through methods whose scientific validity ranges from dubious (hypnotism) to downright pseudoscientific (“neurolinguistic programming”). And other downright weird examples I don’t even know how to explain in a blog post.

Anyway, here’s hoping Luke manages to lead the community around on this one.

Update: I ended up writing a follow-up to this post: Why I may not donate to MIRI/CFAR in the future.

  • Chris

    One thing I’ve noticed about LessWrong and related places (SSC, for example) is that there are a number of people everybody in the community “knows” are really, really smart, but the assessment seems to be made, as far as I can tell, on the basis of virtually no observable evidence at all. I don’t necessarily want to name people, so as not to anonymously insult anyone, but some of these people are just straight-up crackpots, or alternatively express, with astounding certainty, very unorthodox (and usually wrong) views on subjects I’m trained pretty well in.

    But I only read these people online and don’t have experience with them in person, unlike you I think, so maybe I’m all wrong and people with no (observable) expertise really have solved problems in all sorts of diverse fields…

    • John_Maxwell_IV

      I would strongly encourage you to offer counterarguments if someone is saying something that you know to be wrong. I would definitely appreciate reading them.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      The people who have a reputation on LessWrong, SSC, etc. for being really smart are, as far as I can tell, actually really smart. The problem is people not really grasping that being really smart does not mean you know anything about any particular issue. You can also be really smart and really irrational. I’ve touched on that issue here.

  • staircaseghost

    Isn’t that a bit like Nathaniel Branden writing a column lamenting the amount of selfishness and the tendency towards a cult of personality within his movement? Or Karl Rove complaining about how hyperpartisan our political landscape has become?

    Or like this classic?

  • Luke Breuer

    Have you come across Dempster–Shafer theory, which is discussed in the 2003 paper Ignoring Ignorance Is Ignorant (citations)? If not, I suggest looking into it. The first search result on LW for it has a helpful comment:

    I see so much on the site about Bayesian probability. Much of my current work uses Dempster-Shafer theory, which I haven’t seen mentioned here.

    DST is a generalization of Bayesian probability, and both fuzzy logic and Bayesian inference can be perfectly derived from DST. The most obvious difference is that DST parameterizes confidence, so that a 0.5 prior with no support is treated differently than a 0.5 prior with good support. For my work, the more important aspect is that DST is more forgiving when my sensors lie to me; it handles conflicting evidence more gracefully, as long as its results are correctly interpreted (in my opinion they are less intuitive than strict probabilities).
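    To make the quoted point concrete, here is a minimal sketch of Dempster’s rule of combination over a toy two-hypothesis frame (my own illustration with made-up masses, not from the LW comment), showing how a 0.5 with no support is represented differently from a 0.5 with good support:

    ```python
    from itertools import product

    def combine(m1, m2):
        """Dempster's rule of combination for two mass functions.

        Each mass function maps focal elements (frozensets of hypotheses)
        to masses that sum to 1."""
        combined = {}
        conflict = 0.0
        for (x, mx), (y, my) in product(m1.items(), m2.items()):
            overlap = x & y
            if overlap:
                combined[overlap] = combined.get(overlap, 0.0) + mx * my
            else:
                conflict += mx * my  # mass assigned to contradictory pairs
        if conflict >= 1.0:
            raise ValueError("Sources are in total conflict.")
        # Renormalize by the non-conflicting mass.
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Toy binary frame: the hypothesis is either true (T) or false (F);
    # mass on THETA = {T, F} represents explicit ignorance.
    T, F = frozenset({"T"}), frozenset({"F"})
    THETA = T | F

    # "0.5 with no support": nearly all mass on ignorance.
    vague = {T: 0.05, F: 0.05, THETA: 0.90}
    # "0.5 with good support": strong but balanced evidence.
    informed = {T: 0.45, F: 0.45, THETA: 0.10}

    # Both correspond to a 50/50 Bayesian probability, but DST keeps them
    # distinct, and combining them shifts mass off of ignorance.
    print(combine(vague, informed))
    ```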

  • http://CommonSenseAtheism.com lukeprog

    Chris, I’d be careful throwing around the word “crackpot.” Not only is it easy to misinterpret what someone else is claiming or what their reasons for claiming it are, but even if someone very clearly believes in (e.g.) psi powers for the usual dumb reasons, calling them a crackpot in public is a risky move. You never know who you’ll offend with that kind of language, and which doors you may have now closed that in the future you’ll regret having closed. I’ve suffered for this mistake before, as have others at MIRI, as have various people in the atheist/skeptic community that you’ve complained about on this blog.

    I don’t follow nutrition closely enough to comment on Eliezer’s statement. As for Louie, he clearly endorses vast swaths of mainstream medicine and medical practice, and spends far more time than most people using that mainstream medical knowledge and practice to improve his own health. If he said something about doctors being a fake profession, it was probably hyperbole meant to communicate some much narrower point that sounds far less crackpot-ish than a phrase like that does in isolation. Indeed, I think the comment you’re referring to was at a MIRI event sponsored by One Medical, which we decided to go ahead with because Louie went to the mainstream doctors there and liked their services quite a lot, so he could honestly recommend them to others.

    I’d recommend toning down the post for the sake of your future self, though of course I’m motivated to make that suggestion due to a concern for Eliezer’s and Louie’s reputations as well.

    • staircaseghost

      “You never know who you’ll offend with that kind of language, and which doors you’ve now closed that in the future you’ll regret having closed.”

      Threats are the hallmark of a wicked creed.

      • http://CommonSenseAtheism.com lukeprog

        Eh? That wasn’t a threat. I don’t have any specific doors in mind, and I’m not planning to close any doors on Chris.

        • Anonymous Coward

          That’s right, you aren’t saying you or anyone you know is likely to do anything bad to Chris, but you have no control over what one of the dueling future AIs may do to Chris’s “future selves”… :D

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      Nope, not toning down the post. I didn’t expect you to like this post, Luke, but I think LessWrong’s problems in this area are severe, and I’ve already thought about the cost-benefit analysis of sugar coating it vs. not sugar coating it. I decided “don’t sugar coat it” was the right decision.

      • Robby Bensinger

        Luke didn’t say you should sugar-coat your criticism of Louie. He said that you’ve flatly misrepresented Louie’s views of the field of medicine. It might be that there are accurate ways to sum up people’s views in one-sentence memories of off-the-cuff speeches, but I’m pretty sure this doesn’t qualify.

        Since your thesis is ‘LessWrongers sometimes believe genuinely crankish things’, not ‘LessWrongers sometimes overstate things in ways that sound crankish’, you really do have to present Louie’s actual views honestly. E.g., by linking to some criticisms of medical norms in blogs like OvercomingBias or SlateStarCodex, which are presumably the concerns Louie had in mind.

        • UWIR

          Luke didn’t say you should sugar-coat your criticism of Louie.

          And Chris didn’t say he did. Luke did say:

          You never know who you’ll offend with that kind of language, and which doors you may have now closed that in the future you’ll regret having closed.

          That sure sounds to me like a recommendation to sugar coat.

          He said that you’ve flatly misrepresented Louie’s views of the field of medicine.

          No, he said that Chris took too literally a statement that Louie “probably” meant hyperbolically.

    • http://kruel.co/ Alexander Kruel

      Chris, I’d be careful throwing around the word “crackpot.”

      There are many very clear examples of crackpottery everywhere on LessWrong. It’s just that you don’t notice them anymore, because within your group certain beliefs don’t sound crazy.

      • Aris Katsaris

        Of course, to keep this in perspective, let me remind people that despite Alexander Kruel’s mention of “crackpottery everywhere” in LW, Alex also repeatedly proclaims LessWrong the most rational and intelligent community he knows.

    • UWIR

      Not only is it easy to misinterpret what someone else is claiming or what their reasons for claiming it are

      The speaker has a certain degree of responsibility to not be misunderstood. And at a certain point, expecting others to engage in massive steelmanning is itself a form of crackpottery.

  • John_Maxwell_IV

    So I found myself nodding along with Lukeprog’s post. But I don’t completely agree with yours. You quote Louie saying that doctors are a fake profession. This does seem like an obnoxious/hyperbolic statement that is likely to be false (or not even wrong). But when it comes to considering views that are much different from mainstream views, I try to keep in mind the points Paul Graham makes in this essay: http://www.paulgraham.com/say.html

    When I was growing up, I was a Christian because my parents were Christian. When I grew older, I became an atheist and realized that deciding your religious views based on what your parents think is not a good way to have accurate views about religion. (Even if you think *your* parents had accurate views, look at everyone *else*’s parents.) And in the same way, deciding your views about medicine based on what the society you grew up in thinks is not a good way to have accurate views about medicine. (Even if you think *your* society has accurate views, look at the views of *other* societies… see witch doctors, faith healing, etc.)

    So what does that mean? It means you have to think for yourself. Revolutionary, right? Yes, it’s reasonable to critically examine a few of the beliefs your society holds and make the inductive argument that the rest of its beliefs are pretty solid. But you could also find counterexamples of things you think society is terrifically wrong about: try religion, the war on drugs… pick your poison. If modern American society believes something, that is moderately strong evidence to me that it’s true. But I also think that modern American society, like nearly every society that exists or has existed, is wrong about plenty of stuff, and I try to keep in mind that this society is just the one I happened to be born into.

    Regarding medicine in particular, the statement “doctors are a fake profession” sounds especially plausible to me as contrarian statements go. It’s not saying that any particular fact confirmed by multiple meta-analyses and published in a medical textbook is false. It’s saying that society is configured in a highly suboptimal way. And I’ve had that belief drilled into me ever since I was a kid whiling away the hours in boring, useless school classes. It’s not exactly controversial that inefficiencies and distorted incentives are legion.

    • John_Maxwell_IV

      Also, regarding the Gary Taubes thing… you do realize Eliezer is linking to a New York Times article, right? So does that mean that the NY Times editors who let the article get published are also crackpots? Does just being very wrong about something make you a crackpot?

      “Crackpot” seems like a strong word for someone who agrees with a contrarian popular science writer you think has it all wrong.

      • http://patheos.com/blogs/hallq/ Chris Hallquist

        Re: Eliezer’s views on nutrition, it’s more than just linking to that one article. Eliezer is very serious about his views on nutrition. A while back, I saw him making other crazy claims on Facebook, and when someone who was doing graduate studies in the field pointed out some rather obvious flaws in his claims, he stuck to his guns. It’s kind of sad.

        • John_Maxwell_IV

          Hm, I don’t know anything about nutrition and I don’t know what thread you’re referring to, but that does sound plausible to me based on my in-person experiences of observing Eliezer argue. I actually wrote this post partially for him: http://lesswrong.com/lw/axn/6_tips_for_productive_arguments/

          I agree there is some epistemically bad stuff going on with LW (halo effect for high-status community members & a tendency to speak too confidently (in order to seem impressive?)), but I see a lot of good stuff as well. I hope the good doesn’t get thrown out with the bad. My request to anyone reading this is: see if you can discriminate the good from the bad for yourself instead of coming to identify as “pro-LW” or “anti-LW”. And I still think “crackpot” is too strong of a word, although I appreciate your willingness to risk social consequences by making your strong disagreement public.

    • John_Maxwell_IV

      Another note: the same groupthink effects that work at the level of all of society also work at the level of societal subgroups, and Less Wrong/MIRI is not immune in my observation.

      In my opinion, it is a fairly challenging lateral thinking problem to try to figure out what is actually true rather than just trying to figure out what subgroup to align yourself with. For example, on gender issues my current beliefs are that feminists are right about some things and men’s rights activists are right about other things. But I’ve noticed an almost magnetic force on my beliefs to align more strongly with one group or the other, especially since I see almost no one taking the position of moderation I currently hold. It’s interesting what your brain will do when you aren’t keeping a close eye on it.

  • Alex SL

    “Are they Bayesian?” What the heck?

    I have met some odd colleagues in my life who are so enthusiastic about Bayesian analysis that they approach science with the attitude of “I have a hammer, so everything must be a nail”, and that already worried me a bit. But seriously, this post implies that it is even worse. This sounds like cult type behaviour.

  • bshlegeris

    I’m pretty sure Louie didn’t actually say that he thought that doctors were a fake profession. He said something along the lines of “Our community has a reputation for being suspicious of medicine. In fact, I remember [some person] saying that he thought medicine was a disproved hypothesis.” You should be careful before putting words in people’s mouths like that.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      Your memory sounds like something I also remember him saying—but I remember him saying the first thing too. IIRC he argued for the claim by saying that in the future doctors will be replaced by relatively simple computer programs. But that’s a bad argument, since we have good reason to think everything will be automated in the long run, and how soon that happens to a given profession is a poor test of its “realness.”

      • bshlegeris

        I think we should err on the side of caution when it’s a memory of something someone said six months ago, and you’re accusing them of crackpottery as a result.

  • dmytryl

    This piece by Muehlhauser seems relevant:

    http://lesswrong.com/lw/iao/common_sense_as_a_prior/9k9b

    Chris, what side of this issue they’re on depends entirely on whether they’re dismissing experts or thinking of the time their self-declared expertise got dismissed (which gets dismissed because it is self-declared and not in any way validated by the universe, unlike the expertise of physicists; if some types of expertise were mere hot air, the computer screen in front of you wouldn’t exist).

    edit: and as for the Bayesian stuff… you know, they’ve been going on about doctors’ incompetence in this regard for as long as they’ve existed as a community, but the medical tests I’ve seen – and I’m hardly in the most advanced country – came with the age-dependent prior and the test’s likelihood ratio. Given that they don’t generally have degrees in mathematics, might it be that they would be unlikely to recognize such a calculation if it doesn’t come pre-labelled with Bayes’ name?
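    For reference, the calculation described above is just the odds form of Bayes’ theorem: posterior odds = prior odds times the test’s likelihood ratio. A minimal sketch with made-up numbers, purely to illustrate the arithmetic:

    ```python
    def posterior_probability(prior, likelihood_ratio):
        """Odds form of Bayes' theorem: convert a prior probability and a
        test's positive likelihood ratio into a posterior probability."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # Made-up numbers: a 1% age-adjusted prior, and a test with 90%
    # sensitivity and a 5% false-positive rate (LR+ = 0.90 / 0.05 = 18).
    print(posterior_probability(0.01, 18))  # ~0.15, still far below 50%
    ```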

  • RRand

    I like this article. However, I think it’s worth zooming in on where the problem lies. It’s not just Julia and Luke who recognize that “rationality” is not a superpower, and Bayes Theorem is not the Holy Grail. The actual people who work for MIRI and CFAR are, as far as I can tell, genuinely bright and reasonable people, albeit ones who give more credence to the claims that “AI poses a threat” and “I have the ability to change the way people think for the better” than I’d be inclined to. They generally don’t buy Eliezer’s nutrition claims or economic model, and will tend to hold nuanced views on topics like academic philosophy and politics.

    The issue is that there’s this community that has emerged around LessWrong, and members of that community do take Eliezer’s posts (all of them) at face value. And then you get the inevitable out-of-place references to Bayes Theorem on Eliezer’s wall, and worse, the people who assign the same measure of authority to his opinions on nutrition as they do to his pieces about probability theory. (And when they pick up notions like “Eliezer’s body is fundamentally different from other people’s, so that’s probably true for me too,” it becomes even more worrisome.)

    I don’t know how much of this problem is due to Eliezer’s (and maybe a few others’, I don’t know Louie Helm) penchant for hyperbole vs. an actual belief in their infallibility. I also don’t know how much Luke, Julia et al can do to combat this, given the distance between the core MIRI/CFAR people and, say, the poor souls who post on the LessWrong Facebook group. (Though you would think this would interest CFAR…)

    • RRand

      Also: “I actually agree with Eliezer on the abstract point, that The Rules(TM).” Is there something missing in this sentence?

      • http://patheos.com/blogs/hallq/ Chris Hallquist

        Oops, thanks.

  • http://krautscience.com/ Klaus Schneider

    Hi, I agree that expert opinions are important and that the LessWrong community holds some strange opinions.

    However, I have to disagree with you on Gary Taubes.

    First of all, I wondered why you are referencing yourself as a source for the claim that Taubes is misrepresenting the evidence. There are lots of other sources that would give the statement more credibility. That’s just nit-picking, though.

    More importantly, in your post you are mainly criticizing Taubes’ rhetorical tactics and methods, not his results. He may have misrepresented what medical experts say in order to boost his book sales. But he is still right about many things, such as the lipid hypothesis, processed vegetable oils, and low-carb diets showing better weight-loss results.

    Even though I am not a fan of Yudkowsky’s AI research, transhumanism, etc., I have to agree with him that the current dietary guidelines are flawed and that high-fructose corn syrup should receive much more blame than it does.

