As I stated when introducing the Argument from Reason in the first post in this series (concerning doxastic voluntarism):
C.S. Lewis fashioned the Argument from Reason, and it has since been taken up by Christian philosophers and apologists such as Alvin Plantinga (in the form of the Evolutionary Argument Against Naturalism, or EAAN) and Victor Reppert. For a bit of background reading, it is well worth grabbing John Beversluis's excellent C.S. Lewis and the Search for Rational Religion (UK). It is also worth reading “Anscombe’s Critique of C. S. Lewis’s Revised Argument from Reason” by Gregory Bassham.
The argument broadly goes like this, as Lewis quotes J.B.S. Haldane in Miracles:
If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true… and hence I have no reason for supposing my brain to be composed of atoms.
The idea is that naturalism as a worldview is either self-refuting or indefensible. It can be formalised as follows:
1. No belief is rationally inferred if it can be fully explained in terms of nonrational causes.
2. If naturalism is true, then all beliefs can be fully explained in terms of nonrational causes.
3. Therefore, if naturalism is true, then no belief is rationally inferred (from 1 and 2).
4. We have good reason to accept naturalism only if it can be rationally inferred from good evidence.
5. Therefore, there is not, and cannot be, good reason to accept naturalism (from 3 and 4).
Or, simply put, determinism renders naturalism rationally indefensible.
I went on to write about the Fallacy of Division and emergent properties, and then to talk about rationality as I did in my last post.
In this post, I want to include some conversation between myself and commenter Verbose Stoic (VS). The Argument from Reason appears to be, for theists, actually an Argument from Consciousness of Intentionality. (Intentionality is the “aboutness” of something and the meaning we can derive from it). Let me explain:
VS stated:
I was very busy yesterday but since I bugged you about these issues I should try to respond to this:
When we accuse another human of not being rational, we are essentially saying that they are not being logical.
No. Or, rather, not always. The main idea is what I talked about before, which is about being reason-responsive. When we say that a human being who is capable of being rational is not being rational, we are saying that their conclusions aren’t sensible given the existing facts. This could be because their logic is flawed, and so they aren’t coming to a proper conclusion. This could also be because they are ignoring facts that would change their conclusion. So it isn’t just that their logic is wrong, but that the combination of their reasons for the conclusion and the reasoning to it has gone wrong somewhere. And, in general, we don’t call people not rational for simply making mistakes, or not knowing things that they couldn’t have known.
But that’s the colloquial as applied to beings that are clearly capable of rationality. For not rational in general, we mean that the response isn’t reason-aware; in short, the response or conclusion is done without any appeal to facts and reasons at all. Which then leads to your definition:
If we parse this down, simply put, someone can’t believe something if they use logic to arrive at the belief and these logical processes result from nonrational things.
This doesn’t work because it boils it down to logical processes, but logical processes aren’t the key here. It is, as I commented, that the processes are semantically aware that is the key. As per your computer example:
A computer is rational and adheres strictly (much better than humans) to logic and rationality, if the word is described in logical terms. Computers use AND/OR gates and all sorts of logical mechanisms and processes.
AND/OR gates aren’t rational, and actually even barely count as logic. In computer terms, they actually map more to BITWISE AND/OR than LOGICAL AND/OR. The reason is that all they do is read two potential inputs and if the right combination of inputs is received they produce an output. Like the neural nets I talked about in my last comment, these have no actual relation to any semantic — read: meaning — content at all. The one input could be symbolizing “Sharks graze mightily” and the other “I hear blue” and as long as both inputs are received you are guaranteed to get an output. If you’re taking that as “true” in this case, you’re going to be taking something as true that is in actuality nonsensical. So these things clearly aren’t doing anything like reasoning.
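[To make that gate point concrete, here is a minimal sketch in Python; the proposition labels are VS’s nonsense examples, and the code is purely illustrative rather than anything he wrote:]

```python
# A minimal, purely illustrative sketch: an AND gate fires whenever both
# input lines are high, with no access to what, if anything, those inputs
# are supposed to mean.

def and_gate(a: int, b: int) -> int:
    """Bitwise-style AND: output 1 only when both inputs are 1."""
    return a & b

# Label the input lines with nonsense propositions; the gate behaves identically.
line_a = 1   # stands in for "Sharks graze mightily" being asserted
line_b = 1   # stands in for "I hear blue" being asserted

print(and_gate(line_a, line_b))  # 1 -- an output, but nothing here tracks meaning or truth
```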
This also holds for the rest of your computer examples. Computers do not, in general, reason. They do lots of things that don’t involve reasoning at all. If I set up my computer to copy some files, and if it keeps doing that even after I’ve left, that doesn’t mean that it is reasoning without me because copying files is not reasoning. So it’s only when computers are doing things that are like reasoning that we are willing to say that maybe the computer is reasoning. And even then, that’s not safe. For example, we are inclined to think that a computer program that plays chess is reasoning. But if that computer simply has a huge database of board positions and on every move searches them for the one that an external chess grandmaster has given the highest points value to, we’d be right to suspect that it isn’t really doing reasoning at all, and perhaps even that it doesn’t actually understand how to play chess. If it is assessing the moves itself, then we might be willing to extend the idea of reasoning to it.
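[A rough sketch of that chess contrast, again in Python, with hypothetical positions and scores rather than anything resembling a real engine:]

```python
# Illustrative contrast: a pure lookup "player" versus one that applies
# its own evaluation to candidate moves.

# Lookup player: simply retrieves whichever move an external grandmaster
# scored highest for the current position.
grandmaster_table = {
    "position_A": {"e2e4": 9.1, "d2d4": 8.7},   # hypothetical scores
}

def lookup_player(position: str) -> str:
    scores = grandmaster_table[position]
    return max(scores, key=scores.get)          # no assessment of its own

# Assessing player: scores the candidate moves itself, however crudely.
def evaluate(position: str, move: str) -> float:
    return len(move) * 0.1                      # stand-in for a real evaluation function

def assessing_player(position: str, candidate_moves: list[str]) -> str:
    return max(candidate_moves, key=lambda m: evaluate(position, m))

print(lookup_player("position_A"))                        # the grandmaster's pick
print(assessing_player("position_A", ["e2e4", "d2d4"]))   # whichever its own evaluation prefers
```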
Which leads to an issue with you missing something from Lewis’ definition, by dropping “causes”. The idea, I think, is really this: if a purportedly rational response is fully and properly caused by a nonrational process so that introducing any other causal element would lead to overdetermination, then it can’t be a rational response. So if the atomic level exhausts all the required causal interactions — as naturalism asserts, and is crucially important for materialism about mind — and atoms in and of themselves don’t change their behaviour based on reasons and facts and meaning, then our supposedly rational responses aren’t. Materialists need to be able to say how the causal chain changes based on the meanings of the things being considered, but there is no room at the atomic level for that.
That was my point about deciding to go back to university because I like cheeseburgers. If it can be the case that our supposedly rational decisions could be caused by facts and meanings that are totally unrelated to that decision, then we have a serious problem and would have to toss out all of our decisions, at least as per them really being rational ones.
That’s ultimately why deliberation is the gold standard for reasoning, because deliberation, at heart, is reason-responsive, as what it is by definition is considering the reasons for doing something using logic and direct relationships. God might not have to deliberate, but that’s only because He would know what the rational response is, the one that should be taken if deliberation occurred. Ultimately, if it is possible for our purportedly rational decisions to come about for reasons and meanings that don’t match what a reasoned deliberation would consider, then the processes aren’t rational … even if they always HAPPEN to align with that.
(And yes, without an understanding of the process of God’s omniscience, it is possible to claim that God wouldn’t be rational either, but that’s an entirely different kettle of philosophical fish).
I replied:
…this talks about intentionality and meaning, as looked at in the Chinese Room.
Also, you are talking about a simple computer, but we can imagine a computer that is immensely powerful and that has learnt as human minds do (given new jumps in machine learning), so that it has full knowledge of language and so on. Imagine some quantum computer that could replicate and simulate human neuronal processes.
What would you say is categorically different?
And this is where we get back to Carrier’s point [in the last piece]. This is no longer the Argument from Reason but one of consciousness or some other aspect of philosophy.
VS responded:
Yep, as I’ve noted in my general comments and overall discussion of rationality: reason is not just logic. Reason is facts and meanings organized using logic to produce true and justified conclusions.
I agree that a system that can learn is something that can possibly be considered rational, as again my comment on the other post mentioned wrt inference engines vs neural nets. Neural nets aren’t rational in that sense because they aren’t content or meaning aware (or, at least, that’s what I argue). This also addresses the quantum computer example because if all it’s doing is simulating neurons it also wouldn’t be producing its outcomes on the basis of the content or meaning of the propositions, which as I argued isn’t rational. Tying it back to Lewis, if the cause of the output isn’t meaning, content or reason aware, then the output isn’t rational even if it happens to be correct.
Ultimately, that’s the key to Lewis and the EAAN: their arguments are attempts to show that naturalism undercuts the JUSTIFICATION for our beliefs, including that of naturalism: the beliefs may be correct, but because the underlying processes don’t select on the basis of the meaning of the statements or their truth value, if they are correct it’s because they HAPPEN to be correct, not because they are justified in being correct by a rational and reason and content aware process. Which ties back to the cheeseburger example. It may be true that I should attend university, but if the underlying belief that triggers that is “I like cheeseburgers” my decision would just happen to be correct, and not due to any justification I actually made or have.
I said:
But justification can be given using computer-type logic and associated processes. After all, my own thinking is computer-like in this respect: you are justified in believing X if you hold to reasons A and B (evidential), etc.
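[As a toy illustration of what I mean by computer-type justification, with placeholder propositions rather than any serious epistemology engine:]

```python
# A toy sketch: a conclusion counts as justified when the evidential
# reasons it requires are actually held.

beliefs_held = {"A", "B"}        # the evidential reasons currently held

justification_rules = {
    "X": {"A", "B"},             # believing X is justified given reasons A and B
}

def justified(belief: str) -> bool:
    required = justification_rules.get(belief)
    return required is not None and required <= beliefs_held

print(justified("X"))  # True -- the reasons required for X are all held
```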
He responded: