The future, as Criswell was fond of saying, is where we will all spend the rest of our lives. And not only us but countless trillions of people who have yet to be born. At least potentially—that “countless trillions” assumes we can avoid blowing ourselves up first. This suggests that anything we can do now to influence the long-term prospects of humanity for the better will be extremely high-impact.
The trouble is knowing how to actually do that. Positive impact on the far future isn’t directly measurable in the way that lives saved through distribution of bed nets is. Deluding ourselves here is extremely tempting. If we admit the probability of having a positive impact is very, very small, but argue for trying anyway because the potential payoff is so huge, it begins to look a lot like Pascal’s mugging.
But I think it’s important to recognize that “how effectively can we influence the far future?” is the correct question to be asking when evaluating proposals to try to improve humanity’s long-term prospects. The absolute size of, say, risks from AI is in some sense a secondary question, compared to “what can we do about those risks?” Even small risks would be worth working to eliminate, if we had an effective means of doing so. But do we have that?
Here’s Nick Bostrom talking a bit about this: