The Full Richard Carrier Interview

Over a 24-hour period from June 14-15, I conducted 18 three-hour written interviews with various interesting people on a wide range of topics. Almost all of the interviews were conducted simultaneously with one, two, or even three others. (Here is a full list of the interviews I conducted, with links, in case you missed any of them.)

I regret that in my extreme haste to get each interview published, a couple of times there were glaring errors that needed to be fixed, wherein a portion of an interview would wind up missing from the published post. Zinnia Jones caught one such mistake and I fixed it quickly. In the case of Richard Carrier’s interview, the entire first question and his long reply to it were missing, and so was the second question (though not his answer, at least). So, to rectify this disastrous error, I have not only fixed the original post but am hereby posting the entire interview in this new post today. I want to thank Richard not only for taking the time to answer my questions but for being so gracious about my error.

Daniel Fincke: You and I share a philosophical position I don’t think all our colleagues at Freethought Blogs do. We both agree that many of them run philosophy blogs. What is philosophy, and how could it possibly be that “non-philosophers” are doing it while outright denying it (in some cases)?

Richard Carrier: Philosophy most simply means “love of wisdom,” and thus the pursuit and embrace of wisdom. Which leaves us to answer what wisdom is. Which is what Aristotle first sorted out, establishing what we mean by philosophy as a field of thought and study. As he rightly argued (in my opinion), and as history has confirmed (in every century since), philosophy means a worldview and how one thinks about their (or anyone else’s) worldview, which means one’s understanding of science, metaphysics, epistemology, ethics, aesthetics, and politics and how these six categories are all interconnected, both logically (analyzing the meaning of terms and their implications) and factually (discovering and analyzing what really exists and has really happened, and what its implications are for all six branches of your worldview).

Put this way, everyone is a philosopher. The only question left is whether they are good philosophers or bad ones, doing philosophy well and carefully or poorly and carelessly, and thinking about how to do it better, or not. I think many here at FtB are actually pretty good philosophers, at least in the areas they specialize in (Greta Christina on feminism and sexuality, for example, or Natalie Reed on gender and culture). They just don’t do “big philosophy,” which means talking about the six fundamental parts of a worldview and how they are interrelated. Which inevitably gets you into deep abstractions about reality, which is what most people associate with philosophy, even though that’s really just a part of it.

I think that’s one reason some people want to deny being a philosopher. They equate it with big abstract issues, formal logic, and other tools and stepping stones of philosophy; they see that that isn’t what they are into or what they do, so surely they can’t be philosophers. The same thinking can also lead to a common belief that philosophy is bullshit, that the big questions can’t be answered or it’s all just opinions anyway. And so philosophy becomes a bad word people want to avoid. Part of the problem here is that most philosophy is bullshit, because it’s done badly. But just as pseudoscience doesn’t discredit real science, bad philosophy shouldn’t be allowed to discredit good philosophy.

I worry there may be another reason: to admit you are doing philosophy is to admit you should work at studying how to do it well. Once you admit you’re a philosopher, the skills of formal logic start to look like a job requirement, and learning that is work, and some people want to avoid that. They shouldn’t feel that way, but I think some people do. Likewise, the need to read up on and contemplate other philosophy, and foundational philosophy, is also work, and leads to the same feeling. And in part there is justification in this, since we can’t all be experts at everything, and specialization and division of labor are required. But that’s where admitting you are part of a philosophical community can be helpful.

For example, Reed does pretty well with reporting on and analyzing the semantics, morality, and science of gender. In doing that, though, she rests on certain epistemological assumptions that happen to be correct (e.g. how to tell good science from bad and why good science conveys reliable knowledge), and she might be even better at this if she studied the epistemology that verifies those assumptions. Because that study can also qualify or refine those assumptions, and give you a better understanding of why they are correct, and when they might go wrong. And of course, if someone attacks her epistemology, she may have to punt to an epistemologist, handing the baton off, as it were, to a specialist who backs her assumptions. Which ultimately we all have to do eventually, precisely because we can’t be experts at everything. That’s how we make use of science (not being scientists ourselves; and even scientists have to rely on other scientists, as when biologists must defer to physicists or sociologists).

But it helps to acknowledge that your philosophy is built on top of the foundation laid by other philosophers, who have the job of defending that foundation. And that requires admitting that you are a philosopher, and you are doing that. If we see ourselves as a community of philosophers working together and relying on each other to establish and develop our parts of the worldview we all share, I think we could kick a lot more ass than we already do.

Daniel Fincke: You talk a lot about formal logic. But how can logic itself bridge the gap to reality? For example, I always see people calling fallacy on this or that, jumping up like lawyers with an objection in a trial, and it often strikes me as rather simplistic. Because essentially, to me, every species of proper reasoning done wrongly becomes a fallacy. Knowledge is rooted so much in induction, but there’s a fallacy there. Sometimes, in the right moral argument used the right way, it is appropriate to appeal to emotions, but, uh oh!, someone’s going to call “ad misericordiam” on me. You can go down the line on this. So, to me, just saying “fallacy!” often commits the fallacy of begging the question. The question of judgment involved in the boundary between a good use of a reasoning technique and its illogical abuse is very hard and not settled by formal logic classes, or surely Alvin Plantinga and his acolytes would be atheists by now.

This is of course different from just calling out flat-out contradictions. But even contradictions require clear definitions. And the philosophical issues at hand again quite often hinge on definitional disputes in the first place.

Finally, many religious people develop extremely internally logical systems of thought. Some of the nuttiest beliefs are the result of the most steely-eyed willingness to be logical within one’s starting premises.

So do you have any insights into how logic gets from being purely formal to actually deciding any of the essential questions in a way that cannot be evaded?

Richard Carrier: There’s a difference, of course, between “can be evaded” in the sense of “there is an irrational way to avoid admitting the truth” and “can be evaded” in the sense of “there is a legitimate way to reject the conclusion.”

The former is not a matter of reason, fact, or logic, but of how to manage lunatics. I’m being slightly facetious. But the hard truth is, avoiding the truth by irrational means is delusional behavior, and that is a form of insanity, even if in any particular case it may be mild insanity (I discuss the different degrees of madness based on the different degrees of delusionality we all suffer in my Skepticon talk on Christianity as a Delusion). How to manage crazy people is an art and science of its own. But by and large, we have to leave them alone, and simply publicize the evidence of their irrationality so the sane know to steer clear of their nonsense and stop taking them seriously. Such people (and they include prominent intellectuals) need to be marginalized as what they are: irrational advocates of patent delusions.

But once we set aside the irrational people and ask how we can employ formal logic to answer essential questions in ways that cannot be evaded by rational people, the question is easily answered: that’s what the science of formal logic was developed for. And that’s one big reason why we should study it and know it well and be able to explain or deploy it in ways any sensible person can understand. The other big reason is that we need to make sure we ourselves are reasoning soundly, and formal logic is a big help with that, too. Hence it doesn’t just prevent other rational people from evading the truth, it prevents you from evading the truth. But we need to actually use it, consistently. Just knowing logic doesn’t make you logical.

You said the “boundary between a good use of a reasoning technique and its illogical abuse is very hard and not settled by formal logic classes,” but I disagree. Or at least, I don’t believe that is truly said of formal logic as a subject field. Whether the way it is taught in college sucks, I don’t know. Perhaps indeed logic classes are failing to teach effective ways to demarcate the use of logic from its abuse. If so, then that is an issue of pedagogy, not philosophy. It would require a call for educational reform in the field of philosophy, which in my opinion is much needed anyway for many other reasons, and if your concern here is also true, then that only adds one more reason to call out academic philosophy as a failing venture (Susan Haack makes a similar case in Manifesto of a Passionate Moderate, as did Mario Bunge in Philosophy in Crisis).

I think the question of when a fallacy has actually occurred has a clear and definite answer. Abusing the accusation “fallacy!” does not change that fact. For example, some people will dismiss all statistical arguments as bogus because “you can say anything with statistics.” But that’s only true if you abuse the science of statistics. If you stay honest and logical, you can’t say “anything” with statistics. You can only say what the numbers actually show. There is a huge difference between a bogus use of statistics and a valid one. And that difference demarcates good science from bad. So, too, with fallacies. You can’t go around saying “fallacy!” at every peer-reviewed science paper and thus dismiss all science as bunk just because it’s “possible” the authors manipulated the numbers to lie about what happened (this would be the fallacy I call possibiliter ergo probabiliter, which I describe and define in Proving History, pp. 26-29; and in a way the whole book is a demonstration of the same point).

As in science, so in philosophy. If someone misuses the accusation of “fallacy,” then they are simply making a factually false claim. You can do that outside of formal logic, too. So it’s not an abuse of logic; it’s just an abuse of the truth. And no truth can be deduced from a factually false premise. The Fallacy Files analysis of the argument from authority, for example, demarcates valid appeals to authority from invalid ones, and that accusation of fallacy is indeed often abused. What they say there colloquially can be reduced formally to a deductive Bayesian argument about the probability of an expert being wrong in a given case. When that probability is not all that low, appealing to them is a fallacy; but when it is indeed very low, it is not. The only thing they don’t include there is the case when what an expert says is plainly contradicted by observable facts (e.g. a weatherman saying it’s raining out when you can see it’s not), which would be an example of evidence that greatly reduces the probability of that expert being right in that case (thus rendering an appeal to them fallacious). And repeated instances of these kinds of failures would begin to reduce the probability of that expert ever being reliable. You are then generating a case for their being a quack.
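
Sketched in Bayesian terms, that argument looks something like this (a simplified illustration of the general idea; the labels h and e are assumed here for the sake of the example, not drawn from Carrier or the Fallacy Files):

\[
% h = "the expert's claim is true"; e = "the expert asserts the claim" (illustrative labels)
P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e \mid h)\,P(h) + P(e \mid \lnot h)\,P(\lnot h)}
\]

When experts of that kind rarely assert false claims, P(e | ¬h) is very low and the posterior P(h | e) comes out high, so the appeal to authority is good evidence rather than a fallacy; when the expert’s track record is poor, P(e | ¬h) is not low and the posterior stays modest, making the appeal fallacious. Direct observations contradicting the claim (as in the weatherman example) enter as additional evidence that drives the posterior down further.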

As to connecting logic to reality, that’s of course achieved through the premises, which are fact-claims, which are ultimately arrived at via logical reasoning from basic facts of our own direct experience (as I discussed in Epistemological End Game). Internally coherent systems are not plausible if they rely on demonstrably false premises. Nutty religious worldviews that are internally coherent always rest on demonstrably false premises. But I think that’s moot, since in my experience, nutty religious worldviews that are internally coherent are extremely rare. Most are blatantly irrational. They have to be, if they want to be able to maintain their beliefs when constantly confronted by facts that contradict them. That simply requires fallacious reasoning. And religious nutters are masters at it. But they are fond of false premises, too. So you’ll find plenty of both in any religious worldview.

Indeed, in my experience, the closer a religious worldview is to atheistic naturalism, the fewer fallacies and false facts it has to incorporate. Which should tell you something right there.

Daniel Fincke: I recently saw that you had an exchange of some sort with Alonzo Fyfe in which you argued against desirism. Can you give a concise argument for why we should believe in objective naturalistic goods which are independent of our desires? What are they? How do we know they are there? How do we discern particular instances of them? And how do we most efficiently and logically settle disputes over what the good is?

Richard Carrier: I actually argue that the correct moral theory (what I call goal theory) is a subset of desirism (aka desire utilitarianism), and is what you end up with when you make desirism fully coherent with itself and the facts. Which means I actually agree with Fyfe that all moral facts are derived from our desires, and not independent of them. I just don’t think he has brought that analysis to its full and correct conclusion. (See Goal Theory Update for details and links.)

On my own theory, objective naturalistic goods are goods that best satisfy our most fundamental desires. They are naturalistic because desires are naturalistic. And they are objective in two ways.

In the most direct sense, they are objective because they are objective facts about what will and what won’t fulfill our fundamental desires, and those facts remain true even if we do not know what they are or are mistaken about what they are. Even if we “desire” something else, that is a superficial desire the satisfaction of which would actually undermine the satisfaction of our own higher-priority desires. So what is moral is not merely what we desire, because we can be wrong about what we really desire, as when we think money will make us happy so we desire money and then let that desire cause us to do things that ultimately destroy our own happiness. It was really happiness that we desired, not money per se. Whether and to what extent (and in what ways) money can be used as a tool to get the thing we really want (a satisfying and meaningful life) is an objective fact of the world that we have to empirically discover like anything else. It is not a matter of opinion.

In a broader sense, objective naturalistic goods are objective because they are universally true. But I think it’s a mistake to call those things “objective”; they should instead be called “universal.” There are goods that are not universal and thus not “objective goods” in the sense you might mean. For example, cheesecake is in one sense of the term “good” for some people and not for others (like me: I cannot digest dairy and have a powerful olfactory aversion to it). That it is delicious to an individual is an objective fact about them (about which they can even be wrong, for example if they have never tried it and erroneously believe they won’t like it), and one I can even confirm by objective means (by watching their brain under a functional MRI as they eat it). But it is more correctly called a subjective good, because it is good only in their subjective experience.

But a subjective good everyone shares is a universal good. Theists often conflate “objective” with “universal” and thus make arguments to the effect that if something is “subjective” it is not “universally true” or not in any sense “true” (as if it were not “true” that you liked cheesecake when in fact you do, which is nonsense). Subjective truths can be universal truths (e.g. human color experience), and they are certainly still true.

Take monogamy, for example. That would be meaningless to an asexual species or a species that reproduces by pollination or that requires more than two genders for reproduction. So there cannot be any “objective” sense in which monogamy is a universal good. It can only be a universal good (if it is a good at all) for a species that is constructed a certain way, and even then only if it promotes the subjective goods of that species. And if that is the case (and it is), then we have to inquire whether monogamy is a good even for us, since it is not automatically a given that it is. Perhaps our construction entails something else would work better and bring us all more satisfying and fulfilling lives. Answering that question requires examining the facts and what they entail. The answer is therefore in that sense an objective fact, about us and the natural world.

What objective moral values are, then, are those values which universally serve our most fundamental subjective desires. In other words, the values that do this for every one of us, and do this as a matter of objective fact (and not as a matter of opinion or wishful thinking). All values (universal or not) are just persistent, ingrained desires. And desires are neural circuits that cause feelings that in turn cause behaviors. Moral values thus exist as neural circuits, and moral facts exist as facts about the relation between the behaviors those neural circuits will cause in an organism and the consequences of those behaviors to that organism. (Which I explain in detail in Moral Ontology.)

All of this can be confirmed empirically in the cognitive sciences (psychology, neurophysics, etc.). That’s how we know they are there. And the relation between those values and their consequences we observe empirically in the social sciences (sociology, history, etc.). That’s how we know they are true. By which I mean that those desires really are best for us (as opposed to our mistakenly thinking they are). But we need an even more developed and focused science of this, to ascertain more clearly the connections between certain values or behaviors and their consequences (personally and socially, physically and psychologically). I discuss this in The End of Christianity (pp. 333-64), with more practical discussion of how such a science could construct its research program in Sense and Goodness without God (sec. V.2.2.5, pp. 334-35).

The question of how to “most efficiently and logically settle disputes over what the good is” is not one of philosophy but of communications. There is a difference between how you prove a conclusion is true and how you convince someone that you have done that. For example, how scientists prove their conclusions in journals is very different from how they communicate their results to the public and how they convince the public that their conclusions are most likely true. This is difficult when the public is irrational and set against conclusions it doesn’t like (like that the earth is four and a half billion years old, or that they are the descendants of fish and biologically related to bananas, or that abortion improves the lives of children, or that legalization of drugs would reduce both crime and taxes, or that gay marriage is good for society, or that prostitution is not immoral nor any more harmful than various modes of employment in the food industry, and so on).

This brings us full circle to your earlier question: how do we persuade irrational people? There are methods of doing that (which exploit their own irrationality, such as framing, appeals to emotion, etc.), but that’s not doing philosophy, however appropriate doing it may be. As long as what is being communicated has first been openly demonstrated to be true in the proper way, using non-rational methods to persuade non-rational people of it is fine. It becomes an evil only when these techniques are used to sell a lie, or a conclusion not honestly known to be true, as down that path lies a dysfunctional society, which none of us would prefer to live in.

But we must first demonstrate our conclusions to be true. And that requires reasoning logically (in the proper and not fallacious way) and doing or drawing on the science necessary to establish which premises are true. For example, what really are the consequences of lying, and how do these consequences differ for different kinds of lies, or lies told in different contexts? That is an empirical question of objective fact. And this is a question about a system, and therefore the answer can change if that system is changed. That’s why marriage as an institution has repeatedly changed throughout history, as have systems of government: systems can adjust as a whole, not just piecemeal. But there are still objective facts about that.

For example, if everyone believed lying was moral, everyone’s behavior would change to accommodate the fact that anyone can and will at any time be lying. The consequences of this are predictably undesirable: society would become incredibly lonely and inefficient, as enormous resources would be spent fact-checking (and otherwise defending against deception), a great deal of knowledge would be rejected as too suspect to trust, and friendship would cease to exist as a social institution (along with social relationships of any kind, loving or collegial). No one would want to live in that society. People want to live in a society where trust has social utility (because only out of trust can we build all the incredibly useful social institutions we depend upon, from schools and libraries to friendship and marriage), and such a society can only exist to the extent that we ourselves are trustworthy. It’s like littering: you can’t expect to live in a litter-free neighborhood if you yourself are littering it, and you can’t expect others not to litter if you are doing it. You have to comply, and police others who don’t. And the more who comply, the better your neighborhood is, which benefits everyone.

Thus society is a cooperative enterprise, no less than rowing a boat or building a bridge. And that is an objective fact, empirically discernible.

——————————————————————–

Your Thoughts?

  • sjufyw

    Good interview.

  • skepticalmath (http://skepticalmath.wordpress.com)

    In the discussion of *formal* logic and fallacies, you both seem to be talking only about *informal* fallacies (like authority, emotion, etc.), not *formal* fallacies.

    Am I missing something here?

    • Daniel Fincke (http://freethoughtblogs.com/camelswithhammers)

      Well in part the problems I tried to raise also relate to deductive fallacies because of the problems with interpreting premises in ways that can make them subject to direct deductive connections.