The 2014 Effective Altruism Summit—who stood out

I’ve promised friends in the EA community a write-up of my thoughts on the 2014 Effective Altruism Summit, which occurred last month. Even though it was just two days, there was a lot going on. Speakers from many different non-profits had talks scheduled, sometimes concurrently, and a large chunk of Sunday was set aside for 15-minute “lightning talks” by any attendees who wanted to give them. (I’ve previously posted a blog post version of my own lightning talk.)

With so much packed into that short weekend, I’m going to focus on the speakers and organizations that most stood out to me. On that score, I was most impressed by the talks given by Robert Wiblin and William MacAskill of the Centre for Effective Altruism. Robert’s talk was mostly about Effective Altruism in general. It opened with a section called “five ways to think like an Effective Altruist,” which I really liked and expect to refer back to frequently. They were:

  1. Measure the good
  2. Think on the margin (e.g. about what you can personally provide, rather than the average across all resources)
  3. Consider counterfactuals (what would’ve happened if you didn’t do anything?)
  4. Use expected value (probability times benefit; see the worked example after this list)
  5. Focus on the best
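
To make item 4 concrete, here’s a minimal worked example (the numbers are mine, purely for illustration): an intervention with a 10% chance of producing $100,000 worth of benefit has an expected value of 0.10 × $100,000 = $10,000, which beats a guaranteed $5,000 of benefit, even though the risky intervention will usually accomplish nothing.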

Robert also talked a bit about the boundaries of the Effective Altruism movement. One criterion he suggested for who counts as an effective altruist was making a significant commitment to altruism: “significant” here might mean donating 10% of your income, as in the Giving What We Can pledge, or 10% of your time, or making a major career change so as to better impact the world. Effective Altruism is also defined by open-mindedness: you’re willing to change your beliefs in response to evidence, and to switch causes if you become convinced that another cause area will do more good.

Robert ended his talk by saying a bit about what he thinks Effective Altruism is not. For example, though many Effective Altruists are consequentialists and utilitarians, vegetarians and vegans, these things aren’t part of the definition of Effective Altruism. He also noted that there’s some disagreement among Effective Altruists as to how widely we should cast our circle of moral consideration: most EAs think non-human animals should be included, but some are not convinced, and the same goes for future generations.

William MacAskill’s talk was more focused on the specific work CEA does. The most interesting thing I learned was that CEA operates not so much as one organization, but as an incubator for effective non-profits. CEA is currently made up of Giving What We Can and 80,000 Hours, and has recently spun off Animal Charity Evaluators and The Life You Can Save. It also has two projects currently in development: the Global Priorities Project and Effective Altruism Outreach.

This is, I think, what really impressed me about Will’s talk. He seemed (and not just because of this) to be strongly committed to figuring out how to most effectively spend the marginal dollar donated to CEA. Acting as a non-profit incubator helps avoid the trap of, “well, we’re known for doing X, so we feel like all the money we take in donations has to go to X, even if X has become pretty well-funded and maybe not the best use of the marginal dollar.”

Though I missed his lightning talk, I really enjoyed meeting Paul Duan of Bayes Impact. Bayes Impact is a new nonprofit, funded by Y Combinator, that seeks to use data science to help other nonprofits do their jobs more effectively. Bayes Impact strikes me as having a really great combination of tractability (we know how to do data science) and potentially huge ROI (at their core they’re building software, which can scale cheaply to help many nonprofits). Definitely keep an eye on them.

Anna Salamon’s talk for CFAR mostly stands out for having hilarious metaphors—I’m sure it will be spawning in-jokes about geese for years to come. I’ve learned a lot about CFAR since I first donated to them at the beginning of this year (which I did largely on the strength of Eliezer’s recommendation), and so far I’m fairly impressed with them. I’ll probably have a lot more to say about them after doing my CFAR workshop, which starts… eek, a week from now!

Eliezer Yudkowsky, representing MIRI, gave a lightning-fast overview of his work on what will be required to build a safe human-level artificial intelligence. From the talk and the Q&A that followed, I think I figured out part of what bugs a lot of people about MIRI.

Within the EA movement, it’s become common to evaluate causes on importance, neglectedness, and tractability. Eliezer (and Luke Muehlhauser) do a very good job of arguing that AI safety is important and neglected, but often seem to overlook tractability, as well as the more specific question of whether MIRI’s particular strategy is a good one. I’m sure they’ve thought about it and concluded that it is, but that reasoning seems to get glossed over in their public arguments.

That said, I intend to keep an eye on MIRI. Among other things, I recently listened to the audiobook of The Lean Startup, whose techniques Luke has talked about applying to MIRI. Now that I know what the heck he was talking about, it sounds like a really good idea any time you have a project whose expected payoff is well into the future (even just a few years out). As far as I know, Luke hasn’t made much progress implementing lean techniques at MIRI, but I look forward to updates in this area.

Finally, among the talks I went to, I really liked Holden Karnofsky’s talk for GiveWell. GiveWell is mainly known for evaluating charities that fight third-world poverty, but his talk focused on a project he called GiveWell Labs (since renamed the Open Philanthropy Project), which tries to extend GiveWell’s style of analysis to harder-to-evaluate areas, including specific issues in government policy. Politics in general looks very crowded and not very tractable, but the payoff if you can fix, say, immigration policy is so huge that it seems worth looking into.

I didn’t go to their talks, or learn much new about them at the Summit, but I feel like I should give shout-outs to GiveDirectly and Population Services International (PSI). I expect most readers will have heard of GiveDirectly, as they’re one of GiveWell’s top-rated charities. PSI isn’t one of GiveWell’s top recommendations, but they seem quite concerned with cost-effectiveness, and they do some things GiveWell’s top charities don’t, like helping provide family planning services. They definitely seem like they should be on the EA community’s radar.

Seeing all these organizations at the conference had an unexpected result: I convinced myself it made sense to make small donations to any and all organizations I wanted to keep tabs on, so I ended up making $50 donations to Animal Charity Evaluators, Bayes Impact, MIRI, and the Future of Humanity Institute, plus a $200 donation to CEA. The only reason I didn’t send at least $50 to CFAR is that I already donated to them this year. I think CEA, CFAR, and GiveDirectly are the organizations I’m most likely to make large donations to in the future. But, as always, I may change my mind.
