We hold beliefs with varying degrees of justification, and rationality demands that we proportion our degree of belief to our degree of justification. If I am looking at the evidence for two sides of a position and I find that 60% of the evidence seems to favor side A, whereas 30% favors side B, and 10% is not terribly well accounted for by either side, then I only have the right to be 60% sure of side A. Rationally speaking, I should hold the position of side A with 60% certainty. If any choices must be made that depend on either side A or side B being true, I rationally should act according to the inference that side A is correct. But since 40% of what I know about the issue at hand gives me reason to doubt side A, I should adopt side A only provisionally, investigate the evidence for side B, and ask myself, for each piece of prima facie evidence for side B, “does side A give a sufficiently plausible alternative reading of that evidence?”
Sometimes the best explanation is not indisputably clear. Often we believe based on reasons that are compelling but not necessarily conclusive—in other words, we think we have better reasons than worse ones to believe for the time being, but we remain open to new reasons (in the form of new facts, the discovery of previously overlooked logical implications of the evidence we already have, etc.). This is not properly called faith but a provisional and tentative belief that is open to change. When the evidence gives us only, say, 60% certainty, then our rational confidence should be only 60%. That we decide to believe with 60% confidence what is 60% likely does not mean that we make a “leap of faith” but rather that we rationally calibrate our degree of confidence to the degree of likelihood of truth. When we have 99% certainty, our confidence should rationally be 99%. When we hold a 99% likely belief with 99% confidence, we do not have “faith” but a belief that is 99% likely to be true and a rationally required degree of confidence that corresponds to that likelihood.
There are two general ways to measure evidence. 1. I can count the pieces of evidence. 2. I can weigh each piece of evidence’s strength. On side A there might be 10 pieces of evidence each worth 3 “units of credibility,” whereas on side B there might be 4 pieces of evidence worth 5 “units of credibility” each and 1 piece of evidence worth 10 units of credibility all by itself. In that case the evidence would theoretically split 50%-50%, and I would have no reason to decide in favor of A or in favor of B, even provisionally. In such cases I should not believe either way. I should simply weigh which course, acting upon A or acting upon B, leads to the least harm and the most good on net, and act accordingly while not believing either way. If the benefits and harms of acting both ways are equal, then I should act however I like but not presume to believe either A or B true.
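The weighing above can be sketched in a few lines of arithmetic (a minimal sketch; the pieces of evidence and their “units of credibility” are just the hypothetical values from the example, not a real measure of evidence):

```python
# Weighing evidence by strength rather than merely counting pieces.
# The "units of credibility" are the hypothetical values from the example.
side_a = [3] * 10        # 10 pieces of evidence, each worth 3 units
side_b = [5] * 4 + [10]  # 4 pieces worth 5 units, plus 1 piece worth 10

total = sum(side_a) + sum(side_b)  # 60 units in all
share_a = sum(side_a) / total      # 30 / 60
share_b = sum(side_b) / total      # 30 / 60

print(f"Side A: {share_a:.0%}, Side B: {share_b:.0%}")  # Side A: 50%, Side B: 50%
```

Counting alone would have favored A ten pieces to five; weighing shows why a bare count can mislead.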
Sometimes evidence may be inconclusive in the sense that it favors one side only slightly over the other. Say the evidence for A is 52% and the evidence against A is 48%. In this case the evidence is inconclusive. If I opt to believe A, I am rationally justified in doing so. However, I am not rationally justified in holding A especially strongly, and I am not rationally justified in treating the evidence against A as insignificant. 48% evidence against a position is enough to warrant thinking and acting in many ways as if it were true.
What I believe and how I should act sometimes differ according to the different risks and benefits of acting as though A were true and acting as though B were true. For practical purposes, if the consequences of A being false are worse than the consequences of A being true, I should still take whatever precautions are necessary to prevent the bad consequences of A being false, in case it is. For example, say there is a 48% chance that there is a bomb in the building we are in. While rationally we are required to believe, tentatively, that there is not a bomb in the building (since the odds that there is not are 52%), we would be irrational if we did not evacuate the building. In fact, even with a 15% or a mere 1% chance of a bomb in the building, the rationality of our belief and of our behavior would require different things of us. Knowing that there is a 1% chance of a bomb in the building, we should very confidently believe there is no bomb but should be exiting the building as fast as we can nonetheless. This case demonstrates that we weigh the severity of potential consequences, and not just rational likelihood, in making decisions. Of course this works the other way as well. If a lottery ticket costs $1 but there is a full 1% chance of winning $1,000,000, you would be incredibly foolish not to put in the $1. Even though there is a 99% chance you will lose the $1 and gain nothing, the benefit that comes if you are lucky enough to have that 1% chance turn up is well worth the 99% risk of losing $1.
(Tangent: If the odds are 1% that you will win $101, you rationally should purchase the ticket. If the odds are 1% that you will win $100, it is neither rational nor irrational to purchase it. If the odds are 1% that you will win $99, it is irrational to purchase it.)
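Both the lottery case and the tangent reduce to an expected-value calculation; here is a minimal sketch (the function name is mine, and the dollar figures simply restate the examples above):

```python
def expected_gain(p_win: float, prize: float, cost: float) -> float:
    """Expected net gain of a ticket: win the prize with probability
    p_win, pay the cost either way."""
    return p_win * prize - cost

# The $1,000,000 case: a 1% shot vastly outweighs the $1 cost.
print(round(expected_gain(0.01, 1_000_000, 1), 2))  # 9999.0

# The tangent's three cases, at 1% odds and a $1 ticket:
print(round(expected_gain(0.01, 101, 1), 2))  # 0.01  -> rational to buy
print(round(expected_gain(0.01, 100, 1), 2))  # 0.0   -> a matter of indifference
print(round(expected_gain(0.01, 99, 1), 2))   # -0.01 -> irrational to buy
```

A positive expected gain makes the purchase rational, a negative one irrational, and zero leaves it a matter of indifference, exactly as the tangent says.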
Interestingly, it is emotionally and ethically appropriate to fear and to hope in both of these cases in a way that differs from the way it is rational to believe. While it is irrational to believe that there is a bomb in a building where the odds of its being there are 1%, or even 49%, it is emotionally inappropriate not to fear that a bomb is there anyway. So our fears and our beliefs are distinguishable things. What it is rational to fear and what it is rational to believe can vary. And the same goes for hope (where to hope means to anticipate receiving a possible good the way to fear means to anticipate receiving a possible harm). It is also an ethical imperative to act as though what is feared is true in the situation involving the bomb in the building. You would be morally culpable if you discovered a 1% chance of a bomb in a building and did not immediately attempt to evacuate that building. But, rationally, you would be wrong to believe there was actually a bomb in the building as you evacuated it.
In situations in which the evidence splits 55% for A and 45% for B, my belief should calibrate to 55% confidence in A’s truth and 45% confidence in B’s truth. This means tentatively I should say that I believe A is true. But wherever there are risks that B is true and will have unfortunate consequences, I should still protect myself against those consequences in the 45% likely case that B is true (assuming that acting as though A were true in those cases would not prevent even more likely risks in those same situations). If some very good things would come about if B were true and I acted as though I believed B, I would certainly be foolish not to act as though B were true in those cases (assuming that acting as though A were true did not offer me a greater likelihood of benefits in those same situations). Wherever acting in accordance with the supposed truth of A rather than B presents no worse risks and no lesser benefits than acting in accordance with B would, then, all things being equal, I should not only believe A is true but act as though A is true.
However, likelihood of truth counts in calculating how seriously we should take a possibility of harm or benefit. As in the last paragraph, suppose it seems 55% sure that A is true and B is not, but the negative consequences of B being true, if I act as though it were false, are 25% worse than the reverse. Then it is shrewder to act as though B were true: the 45% chance of the 25%-worse harm slightly outweighs the 55% chance of the lesser harm. But when the odds are 80% that A is true and B is not, and B’s negative consequences are still 25% worse, the math says that the risk of B being true is not worth acting as though B were true. The far greater likelihood that A is true outweighs the 25% worse harm if B nonetheless winds up true, and so I should act as though A is true. And in both cases I should rationally believe that A is true, even though in the one scenario I should act as though B were true.
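That trade-off between likelihood and severity can be sketched as a comparison of expected harms (a minimal sketch; the magnitudes 100 and 125 are hypothetical units chosen only so that B’s harm is 25% worse):

```python
# Acting as though A is true risks harm only in the case that B is true,
# and vice versa. The magnitudes are hypothetical; B's harm is 25% worse.
def expected_harms(p_a: float, harm_a: float, harm_b: float):
    """Return (expected harm of acting as though A were true,
    expected harm of acting as though B were true)."""
    return (1 - p_a) * harm_b, p_a * harm_a

harm_a = 100  # harm if A is true but I acted as though B were
harm_b = 125  # harm if B is true but I acted as though A were (25% worse)

# 55% case: acting as though B were true carries less expected harm.
acting_a, acting_b = expected_harms(0.55, harm_a, harm_b)
print(round(acting_a, 2), round(acting_b, 2))  # 56.25 55.0 -> act on B

# 80% case: A's much greater likelihood outweighs B's worse harm.
acting_a, acting_b = expected_harms(0.80, harm_a, harm_b)
print(round(acting_a, 2), round(acting_b, 2))  # 25.0 80.0 -> act on A
```

The comparison flips between the two cases even though the harms are held fixed, which is the point of the paragraph: likelihood and severity are weighed together, not either one alone.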
Finally, let me correct the post’s teasing title. When you should act as though B were true while you should not believe that B is true, you are not quite acting on what you don’t think is true. You are not acting on B itself, but rather you are acting on the rationally correct calculation that all things considered it is true that it is better to act on B. Your reasons for acting are true—the calculated risks or moral costs or benefits, etc. are all true calculations and rational reasons for action even though proposition B, which factors prominently in the calculations, is likely false.
In a future installment of the “Disambiguating Faith” series, I will explore how faith fits in with this picture of when and how we should rationally calibrate our beliefs to evidence and our actions to calculations of greatest harm and greatest benefit.
In the meantime,
For more on faith, read any or all of the posts in my “Disambiguating Faith” series (listed below) which strike you as interesting or whose titles indicate they might answer your questions, concerns, or objections after reading the post above. It is unnecessary to read all the posts below to understand any given one. They are each written to stand on their own but also contribute to one long sustained argument if read all together.