Rational Beliefs, Rational Actions, And When It Is Rational To Act On What You Don’t Think Is True

We hold beliefs with various degrees of justification, and the demands of rationality dictate that we proportion our degree of belief to the degree of our justification.  If I am looking at evidence for two sides of a position and I find that 60% of the evidence seems to favor side A, whereas 30% favors side B, and 10% is not terribly well accounted for by either side, then I only have the right to be 60% sure of side A.  Rationally speaking, I should hold the position of side A with 60% certainty.  If any choices must be made which depend on whether side A or side B is true, I rationally should act according to the inference that side A is correct.  But since 40% of what I know about the issue at hand gives me reason to doubt side A, I should adopt side A only provisionally and keep investigating the evidence for side B, asking of each piece of prima facie evidence for side B, “does side A give a sufficiently plausible alternative reading of that evidence?”

Sometimes the best explanation is not indisputably clear.  Often we believe based on reasons that are compelling but not necessarily conclusive; in other words, we think that we have better reasons than worse ones to believe for the time being, but we remain open to new reasons (in the form of new facts, the discovery of previously overlooked logical implications of the evidence we already have, etc.).  This is not properly called faith but a provisional and tentative belief that is open to change.  When we only have, say, 60% certainty, then our rational confidence should only be 60%.  That we decide to believe with 60% confidence what is 60% likely does not mean that we make a “leap of faith” but rather that we rationally calibrate our degree of confidence to the degree of likelihood of truth.  When we have 99% certainty, then our confidence should rationally be 99%.  When we hold a 99% likely belief with a confidence that is 99% sure, we do not have “faith” but a belief that is 99% likely to be true and a rationally required degree of confidence that correlates to that likelihood.

There are two general ways to measure evidence.  1. I can count the pieces of evidence.  2. I can weigh each piece of evidence’s strength.  On side A there might be 10 pieces of evidence each worth 3 “units of credibility,” whereas on side B there might be 4 pieces of evidence worth 5 “units of credibility” each and 1 piece of evidence worth 10 units of credibility all by itself.  Theoretically in that case, the evidence would split 50%-50% and I would have no reason to decide in favor of A or in favor of B, even provisionally.  In such cases I should not believe either way.  I should simply weigh which course of action, acting upon A or acting upon B, leads to the least harm and the most good on net, and act accordingly while not believing either way.  If the benefits and harms of acting both ways are equal, then I should act however I like but not make any presumption to believe A or B true.
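To make that arithmetic explicit, here is a minimal sketch in Python; the “units of credibility” are, of course, just the illustrative numbers from the example above:

```python
# Hypothetical "units of credibility" from the example above
side_a = [3] * 10             # 10 pieces of evidence, each worth 3 units
side_b = [5] * 4 + [10]       # 4 pieces worth 5 units each, plus 1 piece worth 10

total_a, total_b = sum(side_a), sum(side_b)
total = total_a + total_b

print(f"Side A: {total_a} units ({total_a / total:.0%})")   # Side A: 30 units (50%)
print(f"Side B: {total_b} units ({total_b / total:.0%})")   # Side B: 30 units (50%)
```

Counting pieces alone would favor side A ten to five; weighing them shows the split is actually even.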

Sometimes evidence may be inconclusive in the sense that it only slightly favors one side over the other.  Say the evidence for A is 52% and the evidence against A is 48%.  In this case the evidence is inconclusive.  If I opt to believe A, I am rationally justified in doing so.  However, I am not rationally justified in holding A incredibly strongly, and I am not rationally justified in treating the evidence against A as insignificant.  48% evidence against a position is enough evidence to warrant thinking and acting, in many ways, as though that position were false.

What I believe and how I should act sometimes differ according to the different risks and benefits of acting as though A were true and acting as though B were true.  For practical purposes, if the consequences are worse if A is not true than if A is true, I should still take whatever precautions are necessary to prevent the bad consequences of A not being true from happening, in case A is not true.  For example, say there is a 48% chance that there is a bomb in the building we are in.  While rationally we are required to tentatively believe there is not a bomb in the building (since the odds that there is not a bomb in the building are 52%), we would be irrational if we did not evacuate the building.  In fact, even with a 15% or a mere 1% chance of a bomb in the building, the rationality of our belief and of our behavior would require different things of us.  Knowing that there is a 1% chance of a bomb in the building, we should very confidently believe there is not a bomb in the building but should be exiting the building as fast as we can nonetheless.  This case demonstrates that we weigh the severity of potential consequences, and not just rational likelihood, in making decisions.  Of course this works the other way as well.  If a lottery ticket cost $1 but there was a full 1% chance of winning $1,000,000, you would be incredibly foolish not to put in the $1.  Even though there is a 99% chance you will lose $1 and gain nothing, the benefit that comes if you are lucky enough to have that 1% chance turn up the gain is well worth the 99% risk of losing $1.
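For those who like the numbers spelled out, here is a minimal sketch of the expected value reasoning behind both examples; the “harm units” and the cost of evacuating are assumptions made purely for illustration:

```python
def expected_value(prob, value_if_true, value_if_false=0):
    """Probability-weighted average of the two outcomes."""
    return prob * value_if_true + (1 - prob) * value_if_false

# Bomb case: 1% chance of a bomb.  The harm of staying if it goes off is
# enormous (an arbitrary 1,000,000 "harm units" here), while evacuating
# costs only a small inconvenience (say 10 units).
expected_harm_of_staying = expected_value(0.01, 1_000_000)   # 10,000 units
cost_of_evacuating = 10
print(expected_harm_of_staying > cost_of_evacuating)         # True: evacuate anyway

# Lottery case: a $1 ticket with a 1% chance of winning $1,000,000.
expected_winnings = expected_value(0.01, 1_000_000)          # $10,000
print(expected_winnings > 1)                                  # True: buy the ticket
```

In both cases the unlikely outcome dominates the decision because its stakes are so large relative to the cost of acting on it.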

(Tangent:  If the odds are 1% that you will win $101, you rationally should purchase the ticket.  If the odds are 1% that you will win $100, it is neither rational nor irrational to purchase it.  If the odds are 1% that you will win $99, it is irrational to purchase it.)
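That tangent is just an expected value calculation measured against the $1 ticket price. A minimal sketch, with the prize amount as the only thing varied:

```python
ticket_price = 1.00

for prize in (101, 100, 99):
    expected_winnings = 0.01 * prize   # 1% odds of winning the prize
    print(f"${prize} prize: expected winnings ${expected_winnings:.2f} "
          f"vs a ${ticket_price:.2f} ticket")

# $101 prize: expected winnings $1.01 vs a $1.00 ticket -> rational to buy
# $100 prize: expected winnings $1.00 vs a $1.00 ticket -> a matter of indifference
# $99 prize:  expected winnings $0.99 vs a $1.00 ticket -> irrational to buy
```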

Interestingly, it is emotionally and ethically appropriate to fear and to hope in both of these cases in a way that differs from the way that it is rational to believe.  While it is irrational to believe that there is a bomb in a building where the odds that it is there are 1%, and even where they are 49%, it is emotionally inappropriate not to fear that a bomb is there anyway.  So our fears and our beliefs are distinguishable things.  What it is rational to fear and what it is rational to believe can vary.  And the same goes for hope (where hope means to anticipate receiving a possible good the way that fear means to anticipate receiving a possible harm).  It is also an ethical imperative to act as though what is feared is true in the situation involving the bomb in the building.  You would be morally culpable if you discovered a 1% chance of a bomb in a building and did not immediately attempt to evacuate that building.  But, rationally, you would be wrong to believe there was actually a bomb in the building as you evacuated it.

In situations in which the evidence for two positions splits 55% for A and 45% for B, my beliefs should calibrate to 55% confidence in A’s truth and 45% confidence in B’s truth.  This means tentatively I should say that I believe A is true.  But wherever there are risks that B is true and will have unfortunate consequences, I should still protect myself against those consequences in the 45% likely case that B is true (assuming that acting as though A were true in those cases would not prevent even more likely risks in those same situations).  If there are some very good things that would come about if B is true and I acted as though I believed B, I would certainly be foolish not to act as though B were true in those cases (assuming that acting as though A were true in those cases did not increase my likelihood of benefits in those same situations). Wherever acting in accordance with the supposed truth of A rather than B presents no worse risks, and forfeits no greater benefits, than acting in accordance with B would, then since all things are equal, I should not only believe A is true but act as though A is true.

However, likelihood of truth counts in calculating how seriously we should take the possibility of harm or benefit.  As established in the last paragraph, there are some cases where it seems 55% sure that A is true and B is not, and yet it is shrewder to act as though B is true, because the negative consequences of B turning out true while I acted as though A were true are, say, 20% worse than the consequences of the reverse mistake.  But when the odds are 80% that A is true and B is not, and the negative consequences of B being true are still only 20% worse, then the math says that the risk of B being true is not worth acting as though B were true.  The 60-percentage-point greater likelihood that A is true outweighs the 20% worse harm if B nonetheless winds up true, and so I should act as though A is true.  And in both cases, I should rationally believe that A is true, even though in the one scenario I should act as though B were true.
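Here is a minimal sketch of that trade-off; the harm magnitudes (100 and 120 “harm units”) are hypothetical numbers chosen only so that the worse outcome is 20% worse:

```python
def expected_harm(prob_of_being_wrong, harm_if_wrong):
    # Harm accrues only in the scenario where the side you acted on turns out false.
    return prob_of_being_wrong * harm_if_wrong

# The 80%/20% case, with illustrative harm magnitudes: being wrong about A
# (i.e., B turns out true) costs 120 "harm units," which is 20% worse than
# the 100 units it costs to be wrong about B.
harm_if_i_act_on_a = expected_harm(prob_of_being_wrong=0.20, harm_if_wrong=120)   # 24 units
harm_if_i_act_on_b = expected_harm(prob_of_being_wrong=0.80, harm_if_wrong=100)   # 80 units

print(harm_if_i_act_on_a < harm_if_i_act_on_b)   # True: act as though A is true
```

With these same illustrative figures, the 55%/45% case comes out nearly even (54 units against 55), which is why, when the odds are that close, even modest differences in how severe the harms are or how cheap the precautions are can tip the decision toward acting as though B were true.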

Finally, let me correct the post’s teasing title.  When you should act as though B were true while you should not believe that B is true, you are not quite acting on what you don’t think is true.  You are not acting on B itself, but rather on the rationally correct calculation that, all things considered, it is better to act on B.  Your reasons for acting are true: the calculated risks, moral costs, benefits, etc. are all true calculations and rational reasons for action, even though proposition B, which factors prominently in the calculations, is likely false.

In a future installment of the “Disambiguating Faith” series, I will explore how faith fits in with this picture of when and how we should rationally calibrate our beliefs to evidence and our actions to calculations of greatest harm and greatest benefit.

In the meantime,

Your Thoughts?

__________________

For more on faith, read any or all posts in my “Disambiguating Faith” series (listed below) which strike you as interesting or whose titles indicate they might answer questions, concerns, or objections you have after reading the post above.  It is unnecessary to read all the posts below to understand any given one. They are each written to stand on their own but also contribute to a long sustained argument if read all together.

Faith in a Comprehensive Nutshell

How Faith Poisons Religion

What About The Good Things People Call “Faith”? (Or “Why I Take Such A Strong Semantic Stand Against The Word Faith”)

How Religious Beliefs Become Specifically *Faith* Beliefs

Faith There’s A God vs. Faith In God

Trustworthiness, Loyalty, And Honesty

Faith As Loyally Trusting Those Insufficiently Proven To Be Trustworthy

Faith As Tradition

Blind Faith: How Faith Traditions Turn Trust Without Warrant Into A Test Of Loyalty

Faith As Tradition’s Advocate And Enforcer, Which Actively Opposes Merely Provisional Forms Of Trust

The Threatening Abomination Of The Faithless

Rational Beliefs, Rational Actions, And When It Is Rational To Act On What You Don’t Think Is True

Faith As Guessing

Are True Gut Feelings And Epiphanies Beliefs Justified By Faith?

Faith Is Neither Brainstorming, Hypothesizing, Nor Simply Reasoning Counter-Intuitively

Faith In The Sub-, Pre-, Or Un-conscious

Can Rationality Overcome Faith?

Faith As A Form Of Rationalization Unique To Religion

Faith As Deliberate Commitment To Rationalization

Heart Over Reason

Faith As Corruption Of Children’s Intellectual Judgment

Faith As Subjectivity Which Claims Objectivity

Faith Is Preconditioned By Doubt, But Precludes Serious Doubting

Soul Searching With Clergy Guy

Faith As Admirable Infinite Commitment For Finite Reasons

Maximal Self-Realization In Self-Obliteration: The Existential Paradox of Heroic Self-Sacrifice

How A Lack Of Belief In God May Differ From Various Kinds Of Beliefs That Gods Do Not Exist

Why Faith Is Unethical (Or “In Defense Of The Ethical Obligation To Always Proportion Belief To Evidence”)

Not All Beliefs Held Without Certainty Are Faith Beliefs

Defending My Definition Of Faith As “Belief Or Trust Beyond Rational Warrant”

Implicit Faith

Agnostics Or Apistics?

The Evidence-Impervious Agnostic Theists

Faith Which Exploits Infinitesimal Probabilities As Openings For Strong Affirmations

Why You Cannot Prove Inductive Reasoning Is Faith-Based Reasoning But Instead Only Assert That By Faith

How Just Opposing Faith, In Principle, Means You Actually Don’t Have Faith, In Practice

Naturalism, Materialism, Empiricism, And Wrong, Weak, And Unsupported Beliefs Are All Not Necessarily Faith Positions

  • Dave Smith

    In the past I’ve used a similar probabilistic model for describing how we should determine what to believe, except rather than weighing A vs B, I would weigh A vs the amount of “faith” required to accept it as true. So if after analyzing the evidence I found something to be 80% likely, this belief would require less faith than something else which I deemed to be 60% likely. And below 50% I simply found it irrational to believe.

    Maybe I’ve taken this approach because of some deep-seated desire to fit faith into the formula, but I think the bigger reason is that I’ve found it to be useful in discussions with theists. If I’m told that every worldview requires faith, even a materialistic one, I can say “OK, I may not agree, but I’ll grant you that for the sake of argument. Now then, if all beliefs require faith, how do you choose what to believe? It’s not the faith that’s the determining factor – rather, it’s the evidence that gets you above 50%.”

    But I really like where you’ve gone with this – it takes faith out of the picture entirely and simply replaces it with tentative belief. And furthermore, your incorporation of fear and hope into the calculation makes a lot of sense to me. This definition of hope, a perfectly rational one, is the last remnant of “faith” that I’ve held onto. And in my opinion the only virtuous meaning of the word.

    My only other thought here is whether or not this sort of reasoning might be used in support of Pascal’s Wager. If I’m 1% sure God exists, but the potential reward for believing is in effect infinite, then that infinity surely outweighs the 1%. Of course you might reason instead that the probability of God existing is in fact infinitely small. But I think a better argument here is that the infinite reward offered you is cancelled out, probably many times over, by the infinite torture of others. So while it may be a rational decision, it is not a moral one.

  • Dan Fincke

    Exactly Dave, after posting this article I’ve been mulling over whether or to what extent I opened the door for Pascal’s Wager too! I think it’s what you said, the infinitely small likelihood that the Christian God exists, the countervailing equal or greater probabilities that OTHER gods exist who would punish me for believing in the Christian god, and, finally, the equal possibility that an evil God exists who wants us to be tricked into believing only to torture us if we do.

    Finally, there are these several issues: (1) in what I wrote above, I only allow that we may ACT as though the less-than-50% beliefs are true, not that we should assent to believing they are. (2) It seems irrational to me for there to be punishment for people who make an intellectual error in judgment in good faith, and it is unjust to posit a hell in which people are disproportionately punished for eternity for finite sins, so the Christian God is incoherent and cannot possibly exist.

    Either there is a tyrant God who cannot be trusted at all or there is a just God who is not sending anyone to hell (the worst would be purgatory). (3) And, since any true God would recognize that our failings come neither from “original sin” nor from any ability to have lived otherwise than with our evolutionarily bequeathed nature, the entire Christian mythos of a “fallen human nature” for which we can be punished seems to me clearly false.

    So, for these and other reasons, the Christian God is so incoherent a concept that I don’t think it can have even an infinitesimal probability of being true. A tyrant God might be an infinitesimal possibility, and an impersonal “principle of being” God might be a reasonable possibility (as would a deistic one), but the God of Christian dogmas about eternal punishment is a self-contradictory fantasy and not a possibility at all as far as I’m concerned.