When formulating principles and practices for forming good beliefs and avoiding bad ones, the first thing to keep in mind is that consciously affirming a belief, consciously affirming a disbelief, and deliberately abstaining from both are all actions. When we choose our standards for which propositions count as worthy of our belief, our disbelief, or our suspension of judgment, we are choosing standards for actions. Accordingly, it is important to think about what kind of action we are performing with each species, and each specific instance, of affirmation, denial, and refusal to judge. As with all choices about actions, and all choices to omit actions, we have to ask ourselves what will maximize the good in the long run.
I oppose both the implicit disposition to will to believe things that you perceive to be poorly supported by available evidence or outright contradicted by it, and, naturally, the willful acts of believing based on that disposition. I oppose these things because they increase the likelihood of believing falsely.
But what is wrong with believing falsely? It is irrational to believe falsely, but is that by itself a reason not to do it? What if one’s entire happiness depended upon believing falsely? Wouldn’t it be foolish from a personal point of view to surrender one’s own happiness out of excessive deference to an abstract principle? We do not think art is bad even though it half-tricks us with its falsehoods, making us react emotionally to fake people in fake situations.
The downsides to believing falsely when engaging with reality (and not with art) arise insofar as doing so hinders our functioning in the world. A case can be made that some errors are more useful than some truths, but on the whole, the more we understand the truth about reality, the more shrewdly we can adjust our plans to its constraints and the more we can turn its resources to our advantage. There is also an intrinsic good in flourishing according to our rationality. We exist through our functioning, and not only is functioning rationally one of our most excellent and powerful possibilities to realize, but rationality coordinates, amplifies, and helps to shape in the first place many of the other powers through which our being is constituted.
In short: ideally realizing our rationality is a crucial part of ideally realizing ourselves, even where no further specific good comes from it. We should not live for pleasures and rewards but instead to be excellent, ever-growing, ever-self-overcoming people. Our living well is not a means to some other ends but our intrinsic good itself.
Insofar as believing what we perceive to be likely false is antithetical to realizing our fullest rationality, it is harmful to our overall flourishing and, so, is on net bad. And insofar as it hinders our abilities to effectively engage reality on behalf of our purposes, it is further counter-productive to our flourishing. And insofar as our bad beliefs and our bad influence over others’ habits of belief formation lead others to flourish less than they could as well, we do a further disservice to our own excellence, our own ability to maximize good in the world.
So, we should not believe by “faith” (which in my usage means very specifically the kind of believing I have just criticized).
But should we refrain from believing everything that does not have overwhelming evidence in its favor? Should we maybe even refrain from believing anything since no knowledge is entirely certain?
I think such extreme forms of skepticism are so excessively cautious that they are instances of bad rationality, however well-intended and however scrupulously they are adopted and adhered to.
The reason I say this is twofold. First, rationality is about proper ratios and proper rations. It is only logical that we proportion our beliefs to the precise degree of evidence. If there is, say, 59% likelihood a belief is true, then being strictly rational does not mean avoiding all belief for fear of error, but rather believing with 59% confidence. A 99% likely belief should be held with 99% confidence and a 10% likely belief should be actively disbelieved 90%.
That is proper ratio, a proper method for rationing beliefs, a proper rationality. The suggestion that we should drop out of believing altogether lest we risk believing too much unjustifiably assumes that it is less of a mistake to believe too little than it is to believe too much. But believing 50% too little is just as bad as believing 50% too much on the grounds of strict rationality. William James was right about at least that much in his famous essay on the will to believe (PDF) (even as he was wrong to oppose Clifford’s more scrupulous attempt to prevent what I define as faith).
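The "proper ratio" idea above can be put in miniature code form. The following is a minimal sketch of my own (the function names and the symmetric error measure are my illustration, not the essay's): credence simply tracks the evidence's probability, and under-belief is penalized exactly as much as over-belief.

```python
def rational_credence(probability: float) -> float:
    """Proportion belief exactly to the evidence: a 59%-likely claim
    warrants 59% confidence, nothing more and nothing less."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return probability


def miscalibration(credence: float, probability: float) -> float:
    """Symmetric measure of error: believing 50% too little counts
    exactly as much as believing 50% too much."""
    return abs(credence - probability)


# A 99%-likely belief held with 99% confidence is perfectly calibrated.
print(miscalibration(rational_credence(0.99), 0.99))  # 0.0
# Suspending judgment (50% credence) on a 99%-likely claim is an
# under-belief error of about 0.49 -- the same magnitude as the
# over-belief error of full credence in a 51%-likely claim.
print(miscalibration(0.50, 0.99))  # ≈ 0.49
print(miscalibration(1.00, 0.51))  # ≈ 0.49
```

The symmetric `abs` penalty is the simplest way to encode the essay's point that dropping out of belief altogether is not automatically the safer error.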
Excessive denial out of an absolutist standard of certainty is a form of extremism and cowardice, however well meaning it is in aiming to be humble.
Some want to err on the side of refraining from belief or disbelief because they fear that making belief or, even worse, knowledge claims will lead to a slippery slope of overconfidence. It is true that when we are convinced a belief is likely and start treating it as knowledge, we often slip into overestimating just how certain we are. But that only means we need to train ourselves mentally to remember just how tentatively or how firmly each particular belief warrants being held, and to actively investigate further each belief or disbelief that has a significant chance of being wrong, with a willingness to revise it if necessary.
And outside the goal of achieving rational proportion in belief for rationality’s own sake, an often more consequential issue is believing with proper proportion in order to engage reality effectively. We have to make choices, and these choices always require probability calculations (even if only implicitly). When it comes time to act, there is no avoiding belief and disbelief: one must choose the truest-seeming account of reality, even though it still risks being false, because there is no option of avoiding action. Our deliberate omissions are just as much our responsibility as our deliberate actions are.
Now, this is not to say we should always act in line with what we believe is most likely. As I have explored in detail before, there are cases where we should not believe something is true, since it is only slightly likely, but where the consequences of being wrong are so dire that, even as we do not believe, we should take precautions as though we did. If there were a 1% chance of a bomb in the building, we should not believe there is a bomb in the building, but we should leave the building.
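The bomb case separates two questions: what to believe, and what to do. Here is a minimal sketch of my own formalization (the function names, threshold, and the expected-harm decision rule are my assumptions, not the author's): belief tracks probability alone, while the decision to act weighs probability against the stakes.

```python
def should_believe(probability: float, threshold: float = 0.5) -> bool:
    """Believe only when the evidence makes the claim more likely than not."""
    return probability > threshold


def should_take_precaution(probability: float, harm: float, cost: float) -> bool:
    """Act when the expected harm of inaction exceeds the cost of acting."""
    return probability * harm > cost


# A 1% chance of a bomb: far too unlikely to believe, but the expected
# harm (0.01 * a catastrophic loss) dwarfs the trivial cost of leaving.
p_bomb = 0.01
print(should_believe(p_bomb))                                   # False
print(should_take_precaution(p_bomb, harm=1_000_000, cost=50))  # True
```

Keeping the two functions separate is the point: the precaution rule can fire even when the belief rule does not, which is exactly the essay's claim about acting as though we believed while not believing.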
Most cases of engaging reality, though, are best handled by going with the greatest likelihood and actually believing it to be true to the extent that the evidence merits.
Another important reason we must believe and know things that are evidently likely is that believing, however tentatively and proportionately, helps us to explore a belief’s implications and likelihood more thoroughly, discover other likely truths, expose other likely falsehoods, and better confirm or disconfirm the belief itself by trying it out and testing it against the world.
And, finally, it seems impossible to say a priori that in practice the failure to believe something true (or to disbelieve something false) has less harmful consequences than an excessive willingness to believe something likely false. Ignoring a likely truth means that truth can hit you with the negative consequences you ignored, much as believing a likely falsehood can. And, of course, we do not live in a moral universe, so sometimes, ironically, believing too little or believing too much might benefit people. But it is not reasonable to bank on that happening.