Why I may not donate to MIRI/CFAR in the future

I spent much of the last two days writing a follow-up to my previous post on the LessWrong community’s crackpot problem. But then I looked over what I’d written and thought better of a lot of it.

I stand by what I wrote in my original post, and as a member of the effective altruism movement, I think it may sometimes be worth risking offense in order to give an honest assessment of non-profits and the cultures that surround them.

However, much of the evidence I could give for my assessment is from things people have told me offline, in e-mails, or on Facebook, contexts where I think people could reasonably expect not to get what they say signal-boosted on a very public blog. I won’t ignore the information I have in forming my own opinions, but I recognize that broadcasting all of that information may not be wise.

(No great scandals here. Just a variety of moderately embarrassing stuff that arguably isn’t all that surprising if you’ve followed the LessWrong community online. It was just different for me seeing it up close, I guess.)

I can say that I think the problems with LessWrong’s culture run deep. There are things in Eliezer Yudkowsky’s “Sequences” that didn’t hit me as problematic when I first read them, but now do, big time. His “Correct Contrarian Cluster” post, for example.

I tried to express a lot of what I think is wrong with LessWrong’s culture in a post on LessWrong a few months ago, titled “Self-Congratulatory Rationalism”. One thing in that post may not have been clear—the point was not to argue for humility or self-assurance per se, but to advocate being less dismissive of those not in your in-group, while also not taking for granted the sanity of those who are in your in-group.

Though he’s subtle about it, I think Robin Hanson has done a good job of pointing out the problems in the LessWrong meme cluster. Here’s a recent example. On Twitter, I suggested that his main point—be willing to just trust others’ judgments—was sound, but his “Don’t be rationalist” framing was off. Shouldn’t knowing when to trust others be part of rationality?

In response, he pointed out that for many people, “being ‘rationalist’ means not needing to listen to those not ‘one of us.’” Unfortunately, I think Robin’s right about that.

Alexander Kruel also makes good points. I wish I’d grokked his worries about LessWrong when I first encountered him online. He gets many Bayes points, as Eliezer would say.

I still like many of the people I’ve met through the Bay Area LessWrong community on a personal level. On the whole, they’re every bit as smart as advertised, and it’s a lot of fun to get to hang out with such smart people on a regular basis.

Unfortunately, I think the correlation between intelligence and rationality is weak at best. I also think that having a fun, quirky subculture and building a movement that actually gets stuff done are goals somewhat in tension with each other.

How does this impact how likely I am to donate to MIRI, and its sister organization CFAR, in the future?

Let me start by summarizing my thinking about MIRI as of roughly late last year: MIRI’s current focus is on doing research on AI safety directly. I agree that this is potentially important work.

I’ve always been skeptical of MIRI’s idea that they’re going to try to be the ones to create a superintelligent AI directly. But Luke and other people at MIRI seem to be trying to make sure their work will be valuable to other researchers if someone else ends up creating the first super-AI.

This is a very good thing. In fact, I first donated to MIRI because Luke convinced me that the value of their work did not depend on any very specific assumptions about AI turning out to be right.

In general, I think Luke has done great work as executive director of MIRI, and MIRI is in much better shape than it would be if not for his work. He also seems to at least be pulling MIRI and LessWrong in the right general direction with respect to the community’s distrust of mainstream experts.

But I’m no longer sure it’s enough. I don’t have much direct evidence on what effective strategies for AI safety research might be. Donating to MIRI means trusting them to make those judgments. Though making a judgment like that is extremely complicated, seeing big problems with the overall LessWrong meme cluster tips me toward not being able to trust them with those judgments.

I also worry about MIRI becoming isolated from the academic mainstream. The beauty of science is that it uncovers the truth without depending on the rationality of any one scientist. For examples, see Alexander Kruel’s list of highly intelligent and successful people with weird beliefs—almost all of them highly respected scientists, mathematicians, and philosophers.

If Eliezer Yudkowsky were an ordinary academic, his mix of very good and very bad ideas wouldn’t be much of a problem. Let the rest of the scientific community sort it out.

As it stands, can we count on a similar process happening with MIRI’s research? I understand MIRI is making an effort to interact with the academic mainstream, bringing in mathematicians from outside and sending papers to conferences, but again, will it be enough? I don’t know.

Currently, I have no plans to donate to MIRI again. I have a monthly donation set up for the Schistosomiasis Control Initiative. I’ve also been tentatively planning on donating to the Centre For Effective Altruism (CEA) around the end of this year (no good way for an American to set up a recurring donation AFAICT).

But this may change in the future. One worry about CEA: I haven’t seen the community around CEA in Oxford up close, the way I’ve seen the Bay Area LessWrong community up close. Would a closer view of CEA leave me with similar worries about them? Whatever happens, it seems likely I’ll continue to change my mind about which charities to donate to in the future.

