The Argument from Reason Boils down to an Argument from Intentionality or Consciousness

June 22, 2020

As I stated when introducing the Argument from Reason in my first post in this series (concerning doxastic voluntarism):

C. S. Lewis fashioned the Argument from Reason, and it has since been taken up by Christian philosophers and apologists such as Alvin Plantinga (in the form of the Evolutionary Argument Against Naturalism, or EAAN) and Victor Reppert. For a bit of background reading, it is well worth grabbing John Beversluis's excellent C.S. Lewis and the Search for Rational Religion. It is also worth reading "Anscombe's Critique of C. S. Lewis's Revised Argument from Reason" by Gregory Bassham.

The argument broadly goes like this, as Lewis quotes of JBS Haldane in Miracles:

If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true… and hence I have no reason for supposing my brain to be composed of atoms.

The idea is that naturalism as a worldview is either self-refuting or indefensible. It can be formalised as follows:

1. No belief is rationally inferred if it can be fully explained in terms of nonrational causes.

2. If naturalism is true, then all beliefs can be fully explained in terms of nonrational causes.

3. Therefore, if naturalism is true, then no belief is rationally inferred (from 1 and 2).

4. We have good reason to accept naturalism only if it can be rationally inferred from good evidence.

5. Therefore, there is not, and cannot be, good reason to accept naturalism.

Or, simply put, determinism renders naturalism rationally indefensible.
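To make the logical skeleton explicit, here is a minimal formal sketch of the argument in my own notation (not Lewis's or Reppert's): let N stand for "naturalism is true", R(b) for "belief b is rationally inferred", and F(b) for "belief b can be fully explained in terms of nonrational causes".

```latex
\begin{align*}
&\text{P1: } \forall b\,\bigl(F(b) \rightarrow \neg R(b)\bigr) \\
&\text{P2: } N \rightarrow \forall b\, F(b) \\
&\text{C1: } N \rightarrow \forall b\, \neg R(b) && \text{(from P1, P2)} \\
&\text{P3: } \mathit{GoodReason}(N) \rightarrow R(b_N) && (b_N\text{: the belief that } N) \\
&\text{C2: } N \rightarrow \neg\mathit{GoodReason}(N) && \text{(from C1, P3)}
\end{align*}
```

The sting is in C2: if naturalism is true, then the belief in naturalism is not rationally inferred, so the naturalist can never be in a position to rationally assert their own worldview.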

I went on to write about the Fallacy of Division and emergent properties, and then to talk about rationality as I did in my last post.

In this post, I want to include some conversation between myself and commenter Verbose Stoic (VS). The Argument from Reason appears to be, for theists, actually an Argument from Consciousness or Intentionality. (Intentionality is the "aboutness" of something and the meaning we can derive from it.) Let me explain:

VS stated:

I was very busy yesterday but since I bugged you about these issues I should try to respond to this:

When we accuse another human of not being rational, we are essentially saying that they are not being logical.

No. Or, rather, not always. The main idea is what I talked about before, which is about being reason-responsive. When we say that a human being who is capable of being rational is not being rational, we are saying that their conclusions aren’t sensible given the existing facts. This could be because their logic is flawed, and so they aren’t coming to a proper conclusion. This could also be because they are ignoring facts that would change their conclusion. So it isn’t just that their logic is wrong, but that the combination of their reasons for the conclusion and the reasoning to it has gone wrong somewhere. And, in general, we don’t call people not rational for simply making mistakes, or not knowing things that they couldn’t have known.

But that’s the colloquial as applied to beings that are clearly capable of rationality. For not rational in general, we mean that the response isn’t reason-aware; in short, the response or conclusion is done without any appeal to facts and reasons at all. Which then leads to your definition:

If we parse this down, simply put, someone can't rationally believe something if they use logic to arrive at the belief and those logical processes are themselves the result of nonrational things.

This doesn’t work because it boils it down to logical processes, but logical processes aren’t the key here. It is, as I commented, that the processes are semantically aware that is the key. As per your computer example:

A computer is rational and adheres strictly (much better than humans) to logic and rationality, if the word is defined in logical terms. Computers use AND/OR gates and all sorts of logical mechanisms and processes.

AND/OR gates aren’t rational, and actually even barely count as logic. In computer terms, they actually map more to BITWISE AND/OR than LOGICAL AND/OR. The reason is that all they do is read two potential inputs and if the right combination of inputs is received they produce an output. Like the neural nets I talked about in my last comment, these have no actual relation to any semantic — read: meaning — content at all. The one input could be symbolizing “Sharks graze mightily” and the other “I hear blue” and as long as both inputs are received you are guaranteed to get an output. If you’re taking that as “true” in this case, you’re going to be taking something as true that is in actuality nonsensical. So these things clearly aren’t doing anything like reasoning.

This also holds for the rest of your computer examples. Computers do not, in general, reason. They do lots of things that don’t involve reasoning at all. If I set up my computer to copy some files, and if it keeps doing that even after I’ve left, that doesn’t mean that it is reasoning without me because copying files is not reasoning. So it’s only when computers are doing things that are like reasoning that we are willing to say that maybe the computer is reasoning. And even then, that’s not safe. For example, we are inclined to think that a computer program that plays chess is reasoning. But if that computer simply has a huge database of board positions and on every move searches them for the one that an external chess grandmaster has given the highest points value to, we’d be right to suspect that it isn’t really doing reasoning at all, and perhaps even that it doesn’t actually understand how to play chess. If it is assessing the moves itself, then we might be willing to extend the idea of reasoning to it.
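[Another illustrative sketch of mine, with invented move names and scores: the first design only retrieves an external grandmaster's precomputed rankings, while the second assesses the candidate positions itself, and only the second even looks like reasoning on VS's account.]

```python
# Hypothetical rankings pre-assigned by an external chess grandmaster.
GM_SCORES = {"e4": 0.9, "d4": 0.8, "a3": 0.1}

def lookup_player(legal_moves):
    # Pure retrieval: pick whichever move the grandmaster scored highest.
    # No assessment of the position happens inside the program.
    return max(legal_moves, key=lambda m: GM_SCORES.get(m, 0.0))

def evaluate(position):
    # Stand-in evaluation function: a real engine would weigh material,
    # mobility, king safety, and so on.
    return sum(position.values())

def evaluating_player(legal_moves, position_after):
    # The program scores the resulting positions itself and chooses accordingly.
    return max(legal_moves, key=lambda m: evaluate(position_after[m]))

# Usage with toy data:
moves = ["e4", "d4", "a3"]
positions = {
    "e4": {"material": 39, "mobility": 30},
    "d4": {"material": 39, "mobility": 28},
    "a3": {"material": 39, "mobility": 21},
}
print(lookup_player(moves))                 # "e4", by someone else's lights
print(evaluating_player(moves, positions))  # "e4", by its own assessment
```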

Which leads to an issue with you missing something from Lewis's definition, by dropping "causes". The idea, I think, is really this: if a purportedly rational response is fully and properly caused by a nonrational process, so that introducing any other causal element would lead to overdetermination, then it can't be a rational response. So if the atomic level exhausts all the required causal interactions (as naturalism asserts, and as is crucially important for materialism about mind), and atoms in and of themselves don't change their behaviour based on reasons and facts and meaning, then our supposedly rational responses aren't rational. Materialists need to be able to say how the causal chain changes based on the meanings of the things being considered, but there is no room at the atomic level for that.

That was my point about deciding to go back to university because I like cheeseburgers. If it can be the case that our supposedly rational decisions could be caused by facts and meanings that are totally unrelated to that decision, then we have a serious problem and would have to toss out all of our decisions, at least as per them really being rational ones.

That’s ultimately why deliberation is the gold standard for reasoning, because deliberation, at heart, is reason-responsive, as what it is by definition is considering the reasons for doing something using logic and direct relationships. God might not have to deliberate, but that’s only because He would know what the rational response is, the one that should be taken if deliberation occurred. Ultimately, if it is possible for our purportedly rational decisions to come about for reasons and meanings that don’t match what a reasoned deliberation would consider, then the processes aren’t rational … even if they always HAPPEN to align with that.

(And yes, without an understanding of the process of God’s omniscience, it is possible to claim that God wouldn’t be rational either, but that’s an entirely different kettle of philosophical fish).

I replied:

…this talks about intentionality and meaning, as looked at in the Chinese Room.

Also, you are talking about a simple computer, but we can imagine a computer that is immensely powerful and that has learnt as human minds do (new jumps in machine learning) so that it has a full knowledge of language, etc. Imagine some quantum computer that could replicate and simulate human neuronal processes.

What would you say is categorically different?

And this is where we get back to Carrier’s point [in the last piece]. This is no longer the Argument from Reason but one of consciousness or some other aspect of philosophy.

VS responded:

Yep, as I’ve noted in my general comments and overall discussion of rationality: reason is not just logic. Reason is facts and meanings organized using logic to produce true and justified conclusions.

I agree that a system that can learn is something that can possibly be considered rational, as again my comment on the other post mentioned wrt inference engines vs neural nets. Neural nets aren't rational in that sense because they aren't content or meaning aware (or, at least, that's what I argue). This also addresses the quantum computer example, because if all it's doing is simulating neurons it wouldn't be producing its outcomes on the basis of the content or meaning of the propositions, which, as I argued, isn't rational. Tying it back to Lewis, if the cause of the output isn't meaning, content or reason aware, then the output isn't rational even if it happens to be correct.

Ultimately, that’s the key to Lewis and the EAAN: their arguments are attempts to show that naturalism undercuts the JUSTIFICATION for our beliefs, including that of naturalism: the beliefs may be correct, but because the underlying processes don’t select on the basis of the meaning of the statements or their truth value, if they are correct it’s because they HAPPEN to be correct, not because they are justified in being correct by a rational and reason and content aware process. Which ties back to the cheeseburger example. It may be true that I should attend university, but if the underlying belief that triggers that is “I like cheeseburgers” my decision would just happen to be correct, and not due to any justification I actually made or have.

I said:

But justification can be given using computer-type logic and associated processes. After all, in my own thinking it is computer-like: you are justified in believing X if you hold to reasons A and B (evidential), etc.

He responded:

Only if that logic and those processes are actually aware of those reasons and determined by them in a meaningful way. That was the point of the chess program example: if all it's doing is picking a move based on a simple board "image" and a ranking given by a chess grandmaster, it's not reasoning out what the best move is. If a computer program is working more like that than like, say, an inference engine, then it isn't doing reasoning. That's the reason that deliberation, as I noted, is the gold standard for reasoning, because each step is driven by a direct consideration of the reasons and what they mean in relation to each other and in relation to the final decision. Computers, then, that work more like deliberation are far more reasonably considered rational than those that do not, as per my comments on the AND/OR gates and on neural nets.
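[To make the inference-engine contrast concrete, here is a minimal forward-chaining engine of my own devising: a rule fires only when every one of its premises is currently held, so the route to the conclusion tracks the reasons for it step by step.]

```python
# Facts currently held, and rules of the form (premises, conclusion).
facts = {"evidence_A", "evidence_B"}
rules = [
    ({"evidence_A", "evidence_B"}, "justified_in_X"),
    ({"justified_in_X"}, "believe_X"),
]

# Forward chaining: keep firing rules whose premises are all satisfied.
fired = True
while fired:
    fired = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            print(f"derived {conclusion!r} from {sorted(premises)}")
            facts.add(conclusion)
            fired = True
```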

To which I said:

“and a ranking given by a chess grandmaster, it’s not reasoning out what the best move is.”

But this comes back to my original point about deliberation merely being inefficient reasoning, as you mention. Deliberation is just thinking – RAM, if you like. All that deliberation is, here, is assigning a weighting to each piece of evidence or argument. So I decide to believe X because A has 75 and ~A has only 25, and B has 60 and ~B has 40.

Unless you have a different definition of "consideration" than the one I am using.

I think you are really claiming that rationality is not only the ability to use reason, but also the ability to be aware of the meaning of that which is being reasoned about. Something à la the Chinese Room, etc.
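The weighting picture in my reply can be made concrete in a few lines, using the same illustrative numbers:

```python
# Deliberation as weighting: believe X iff, for each consideration,
# the weight of evidence for it beats the weight against it.
weights = {
    "A": (75, 25),  # (weight for A, weight for ~A)
    "B": (60, 40),  # (weight for B, weight for ~B)
}
believe_X = all(pro > con for pro, con in weights.values())
print(believe_X)  # True: A beats ~A (75 > 25) and B beats ~B (60 > 40)
```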

His final comment:

But this comes back to my original point about deliberation merely being inefficient reasoning, as you mention.

Well, that’s (at least one of) the main disagreements: I think that deliberation is ACTUAL reasoning, and automatic responses are not necessarily reasoning. The reason is that deliberation is driven by both the meaning of the propositions and logic, and automatic responses may not be. That was what the chess example was trying to show, but we can also note it by noting the difference between someone who only memorized their times tables and someone who learned to multiple. Someone who has only memorized the tables will be really fast at producing the answers, but won’t actually be multiplying. And before it gets brought up, the difference in children who learned times tables is that they both memorized the tables AND learned how to multiply, so the memorization became a shortcut to what they could reason out on their own. A computer that did that could be considered to be capable of reason as well. But if it doesn’t or can’t, then that’s another story.

And mine:

And this presents the forking road that is pretty axiomatic to where each of us goes. I can see why you would intuitively favour an account of reasoning that does not include automatic decision-making, though this is actually a result of very efficient inductive reasoning (or pre-programmed input-output logic: if this happens, that happens). Personally, I think your account is still a case of being differentiated by deliberation, so reasoning appears to be, for you, thinking "about" something, rather than applying logic to evidence to reach a conclusion (belief/decision, etc.).

Intentionality is famously a bone of contention for naturalist philosophers, but I think this should be (and it is) a different argument from the Argument from Reason.

So here you have a conversation that sets out what I think is the main issue theists have with the AfR: it isn't strictly one of reason, or at least it is supervenient on arguments about intentionality and consciousness.

What is fascinating is that VS and I have actually played out a common theme. Take the debate below between the very succinct Alex O’Connor and the waffly Max Baker-Hytch.

From 39:30, and then again at 51:00, the two get on to talking about computers and then realise that it is actually more a discussion of consciousness and intentionality. The idea, then, is perhaps whether we have a computational theory of mind or not. "It seems to me we're kind of talking about the development of consciousness here… some kind of faculty for …thinking about things," O'Connor states. "Maybe I'm just misunderstanding why this needs to be so complicated."

O’Connor takes an interesting approach to presupposition before looking at inductive reasoning and probability, as well as what we would expect consciousness to look like if we assumed naturalism. And it looks just like we would expect it to.

[Video: Alex O’Connor and Max Baker-Hytch debate the Argument from Reason]

