Believing Too Little Is As Bad As Believing Too Much

When formulating principles and practices for forming good beliefs and avoiding bad ones, the first thing we must keep in mind is that consciously affirming a belief, consciously affirming a disbelief, and deliberately withholding both are all actions. When we choose our standards for which propositions count as worthy of our belief, our disbelief, or our abstention from both, we are choosing standards for actions. It is therefore important to think about what kind of action we are performing with each species and specific instance of affirmation, denial, and refusal to judge. As with all choices about actions, and choices to omit actions, we have to ask ourselves what is going to maximize the good in the long run.

I oppose both the implicit disposition to will to believe things that you perceive to be poorly supported by available evidence or outright contradicted by it, and, naturally, the willful acts of believing based on that disposition. I oppose these things because they increase the likelihood of believing falsely.

But what is wrong with believing falsely? It is irrational to believe falsely, but is that by itself a reason not to do it? What if one’s entire happiness depended upon believing falsely? Wouldn’t it be foolish, from a personal point of view, to surrender one’s own happiness out of excessive deference to an abstract principle? We do not think art is bad even though it half-tricks us with its falsehoods, making us react emotionally to fake people in fake situations.

The downsides to believing falsely when engaging with reality (and not with art) arise insofar as doing so hinders our functioning in the world. A case can be made that some errors are more useful than some truths, but on the whole, the more we understand the truth about reality, the more shrewdly we can adjust our plans to its constraints and the more we can use its resources to our advantage. There is also an intrinsic good in flourishing according to our rationality. We are the functions through which we exist, and not only is functioning rationally one of our most excellent and powerful possibilities to realize, but rationality coordinates, amplifies, and helps shape in the first place many of the other powers through which our being is constituted.

In short: ideally realizing our rationality is a crucial part of ideally realizing ourselves, even where no further specific good comes from it. We should not live for pleasures and rewards but instead to be excellent, ever-growing, ever-self-overcoming people. Our living well is not a means to some other ends but our intrinsic good itself.

Insofar as believing what we perceive to be likely false is antithetical to realizing our fullest rationality, it is harmful to our overall flourishing and, so, is on net bad. And insofar as it hinders our abilities to effectively engage reality on behalf of our purposes, it is further counter-productive to our flourishing. And insofar as our bad beliefs, and our bad influence over others’ habits of belief formation, lead others too to flourish less than they could, we do a further disservice to our own excellence and our own ability to maximize good in the world.

So, we should not believe by “faith” (which in my usage means very specifically the kind of believing I have just criticized).

But should we refrain from believing everything that does not have overwhelming evidence in its favor? Should we maybe even refrain from believing anything since no knowledge is entirely certain?

I think such extreme forms of skepticism are so excessively cautious that they are instances of bad rationality, however well-intended and however scrupulously they are adopted and adhered to.

I say this for several reasons. First, rationality is about proper ratios and proper rations. It is only logical that we proportion our beliefs to the precise degree of the evidence. If there is, say, a 59% likelihood that a belief is true, then being strictly rational does not mean avoiding all belief for fear of error, but rather believing with 59% confidence. A 99% likely belief should be held with 99% confidence, and a 10% likely belief should be actively disbelieved with 90% confidence.

That is proper ratio, a proper method for rationing belief, a proper rationality. The suggestion that we should drop out of believing altogether, lest we risk believing too much, unjustifiably assumes that it is less of a mistake to believe too little than to believe too much. But by the standards of strict rationality, believing 50% too little is just as bad as believing 50% too much. William James was right about at least that much in his famous essay on the will to believe (even as he was wrong to oppose Clifford’s more scrupulous attempt to prevent what I define as faith).
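To make that symmetry concrete, here is a minimal sketch, my own illustration in Python rather than anything from James or Clifford: if we score a credence by its squared distance from the probability the evidence warrants, under-belief and over-belief by the same margin cost exactly the same.

```python
# Minimal illustrative sketch (assumed example, not from the post): score a
# held credence by its squared distance from the evidentially warranted
# probability. The penalty is symmetric: believing too little by some margin
# costs the same as believing too much by that margin.

def miscalibration(credence: float, warranted: float) -> float:
    """Squared distance between the credence held and the warranted probability."""
    return (credence - warranted) ** 2

warranted = 0.59                          # evidence supports the belief at 59%
under = miscalibration(0.29, warranted)   # holding it 30 points too weakly
over = miscalibration(0.89, warranted)    # holding it 30 points too strongly

print(under == over)  # True: both deviations incur the same penalty, 0.09
```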

Excessive denial out of an absolutist standard of certainty is a form of extremism and cowardice, however well-meaning it is in aiming to be humble.

Some want to err on the side of refraining from belief or disbelief because they fear that making belief claims, or, even worse, knowledge claims, will lead down a slippery slope to overconfidence. It is true that when we are convinced a belief is likely and start treating it as knowledge, we often slip into overestimating just how certain we are. But that only means we need to train ourselves to remember just how tentatively or how firmly each particular belief deserves to be held, and to actively investigate further each belief or disbelief that has a significant chance of being wrong, with a willingness to revise it if necessary.
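One practical form that training can take is explicit calibration tracking. The sketch below is purely illustrative (the data and structure are my own invention, not the author’s): log beliefs together with the confidence at which they were held, then compare each confidence level to how often those beliefs turned out true.

```python
# Illustrative calibration check (invented data): record beliefs with an
# explicit confidence, then compare stated confidence to the observed rate
# of being right at that level.

from collections import defaultdict

# (confidence held, whether the belief turned out true)
record = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for confidence, outcome in record:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    # A hit rate persistently below the stated confidence is exactly the
    # overconfidence slip described above: the cue to revise downward.
    print(f"held at {confidence:.0%}: right {hit_rate:.0%} of the time")
```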

And outside the goal of achieving rational proportion in belief for rationality’s own sake, an often more consequential issue is believing with proper proportion in order to engage reality effectively. We have to make choices, and these choices always require probability calculations (even if only implicit ones). When it comes time to act, there is no avoiding belief and disbelief: one must choose the truest-seeming account of reality, even though it still risks being false, because there is no option of avoiding action. Our deliberate omissions are just as much our responsibility as our deliberate actions are.

Now, this is not to say that what we believe most likely should always dictate how we act. As I have explored in detail before, there are some cases where we should not believe something is true since it is only a little bit likely, but the dire consequences if we are wrong are so bad that even as we do not believe, we should take precautions as though we did. If there were a 1% chance of a bomb in the building, we should not believe there is a bomb in the building, but we should leave the building.
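The arithmetic behind that judgment is a simple expected-cost comparison. The numbers in the sketch below are invented for illustration only:

```python
# Illustrative expected-cost comparison with invented numbers: a 1% credence
# is too low to warrant *belief* that there is a bomb, yet the precaution
# still wins because the stakes are so lopsided.

p_bomb = 0.01           # credence that there is a bomb
cost_catastrophe = 1e7  # stand-in for the harm if the bomb is real (arbitrary units)
cost_evacuating = 100   # lost time and hassle of leaving the building

expected_cost_staying = p_bomb * cost_catastrophe  # 100,000 in the same units
expected_cost_leaving = cost_evacuating            # 100

print(expected_cost_staying > expected_cost_leaving)  # True: leave, without believing
```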

Most cases of engaging reality, though, are best handled by going with the greatest likelihood and actually believing it is true to the extent that the evidence merits it.

Another important reason we must believe and know things that are evidently likely is that believing, however tentatively and proportionately, helps us to explore a belief’s implications and likelihood more thoroughly, discover other likely truths, expose other likely falsehoods, and better confirm or disconfirm the belief itself by trying it out and testing it against the world.

And, finally, it seems impossible to say a priori that, in practice, failing to believe something true (or to disbelieve something false) has less harmful consequences than an excessive willingness to believe something likely false. Ignoring a likely truth means that truth can hit you with negative consequences you failed to anticipate, much as believing a likely falsehood can. And, of course, we do not live in a moral universe, so sometimes, ironically, believing too little or believing too much might benefit people. But it is not reasonable to bank on that happening.

Your Thoughts?

About Daniel Fincke

Dr. Daniel Fincke has his PhD in philosophy from Fordham University and spent 11 years teaching in college classrooms. He wrote his dissertation on ethics and the philosophy of Friedrich Nietzsche. On Camels With Hammers, the careful philosophy blog he writes for a popular audience, Dan argues for atheism and develops a humanistic ethical theory he calls “Empowerment Ethics”. Dan also teaches affordable, non-matriculated, video-conferencing philosophy classes on ethics, Nietzsche, historical philosophy, and philosophy for atheists that anyone around the world can sign up for. (You can learn more about Dan’s online classes here.) Dan is an APPA (American Philosophical Practitioners Association) certified philosophical counselor who offers philosophical advice services to help people work through the philosophical aspects of their practical problems or to work out their views on philosophical issues. (You can read examples of Dan’s advice here.) Through his blogging, his online teaching, and his philosophical advice services, Dan specializes in helping people who have recently left a religious tradition work out their constructive answers to questions of ethics, metaphysics, the meaning of life, etc. as part of their process of radical worldview change.

  • Adam

    When you say that you do not believe something, there is an implication that you believe there is an appreciable probability (as far as you can tell) that it is false. This belief requires justification just as much as any other. The justification might simply be ignorance about the subject and an unwillingness to make a judgment without the appropriate authority. I don’t believe or disbelieve string theory in part because I cannot understand it, and because experts in physics are divided on what to think of it. (It’s also said that it’s presently untestable, but I take the word of authorities on this. I believe them because they all agree and they know enough to say so.)

    When someone says they don’t believe in anthropogenic global warming, even if they don’t disbelieve, they imply that they believe it has an appreciable chance of being false as far as they can tell. If they’re then made aware of the scientific consensus, they have to justify not trusting the consensus—justify believing the consensus of experts in the field has an appreciable probability of being wrong.

  • Stevarious

    I understand, at least in principle, what you are trying to say, and I’m inclined to agree… But can you give any specific examples of any general facts about the world that we only have, say, a 60% certainty about that we should just go ahead and accept? Or at least, (if it’s okay) any examples from your personal life where you’ve found that to be a reasonable course of action?

  • Hank Fox

    As I have explored in detail before, there are some cases where we should not believe something is true since it is only a little bit likely, but the dire consequences if we are wrong are so bad that even as we do not believe, we should take precautions as though we did. If there were a 1% chance of a bomb in the building, we should not believe there is a bomb in the building, but we should leave the building.

    I use this idea very often in my own life. If I have my very expensive camera with me and I have to pop into a store, even for an instant, I weigh it like this:

    What’s the chance it will be stolen?
    What will it cost me if it IS stolen?
    Can I afford to take the chance?

    The answer to the first might be: Near zero.
    But the answer to the second is close to $3,000.
    Can I afford an extra 3 grand right now to replace it? No.

    Solution: Hide the camera, lock the car. Or carry it with me, no matter how odd that looks. Or, if it’s a really chancy neighborhood, don’t even stop at the store.

    For the same reasons, I’m probably a bit more avid than normal on the subject of personal safety.

  • Neil Rickert

    I eschew this whole “knowledge as belief” bit. I see knowledge as abilities, not as beliefs.

  • Seymour Brighton

    The title of your post was poor logic, so I didn’t read the post. Lesson learned, you’re a moron.

