September 9, 2021

In a sense, this is about the most important piece I can write because out of it comes my whole philosophy. What you are about to read pertains to any abstract object: morality, meaning (of life or words), maths, purpose, and so on.

I am just about to release a new book called Why I Am Atheist and Not a Theist in which I build my whole worldview up from the bottom. It’s a touch more philosophical than some of my other religious treatments in establishing my ontological framework, which then informs my epistemology, my natural theology and, finally, my morality. The problem is, most people do it backwards, or start in the middle. I often talk about bottom-up construction of worldviews rather than top-down.

I don’t want this to be a screed against certain people, but here is an example of ignorance of this topic from a regular commenter. In this case, Catholic Jim Dailey appears not to have read my previous 100 pieces on the topic. Except I’m pretty sure he has – or at least he has commented on them – because his comment shows a willful ignorance of my repeated claims.

Jim Dailey stopped reading my piece on pro-life being anti-choice by his own admission:

Stopped reading at
“The Constitution and the Bill of Rights are just old bits of paper. You can amend them, get rid of them, supersede them – whatever.”
I will preserve this quote from you, and bring it out on every single piece you ever write on American politics ever again. It shows you have no understanding of the social compact between government and the governed.

Thanks for this.

This really annoys me for a couple of reasons: he is willfully not understanding me (he just doesn’t want to get it) and he is quote-mining me. Let’s present it all:

I’ve told you this a million times before. It’s how all abstract objects work, including morality and rights. We construct them so they exist conceptually, and in our minds, and then we codify them into law. This only becomes meaningful if the law is enacted and robustly defended.

The Constitution and the Bill of Rights are just old bits of paper. You can amend them, get rid of them, supersede them – whatever. They only become meaningful in a pragmatic sense when codified, enacted and enforced.

This is, of course, trivially true. The Constitution literally is an old piece of paper (written by a bunch of long-dead old [white] guys). It literally includes amendments and changes. It literally is encoded into law, enacted and enforced, thus giving it practical importance and implication. To add to this, the Supreme Court routinely tests and modifies interpretations of the statements in both documents.

It’s the ideas that are important, silly.

But, as stated, ideas are only practically meaningful when they have ramifications or influence: in other words, when they are codified and acted upon.

This whole debate is about where these ideas exist. Someone like JD, here, thinks there is either…

  1. some Platonic realm where there is the right to bear arms (that in this realm, there is some absolute moral law or diktat that sits there, as an abstract idea, naturally),
  2. or, as a Catholic, that perhaps abstracts exist in the mind or existence of God.

The broad point for me is that unless you set out your ontology of abstract ideas, you simply cannot do epistemology (how you know stuff, or go about seeking truth and what this means, since “truth” is an abstract concept), you cannot then do natural theology (whether gods exist, and how to make them coherent) and you cannot do morality (the ultimate of abstract objects).

All 2nd Amendment advocates, anti-choicers, religious zealots, and other such people do is argue from God outwards, or morality downwards; but they make a hash of abstracts, and their whole framework falls apart.

Not just those people. I wrote a post on moral skepticism the other week and plenty of humanists and atheists didn’t like it. They’re wrong too (in my humblest of opinions, sorry!). They are arguing from a realist ontology of morality – an assertion that there are indubitable moral facts that exist irrespective of human/sentient minds, or some such thing. This may be the case, but they present no cogent argument for how this is so. Thus it is merely wishful thinking.

I don’t want to set my case out in full again. But I will say this – and hats off to Luke Breuer because I’m sure he understands this and it is why he is using my ontological framework (conceptual nominalism – that abstracts only exist in the minds of the conceivers) as target practice. I will admit, if I am wrong at the bottom, everything else falls apart and I will have to rebuild my foundations to see where it leads me (don’t start with the conclusion and work back!). The only thing that has come close – though it might be a paradigm shift rather than a complete deconstruction – is idealism.

Theists are wrong at the bottom.

And so they are wrong all the way up.

Hopefully, you will see it in my book. It started off as just a project to compile old essays into an anthology so as not to waste them. It has become probably my most important book. My opinion is that every philosopher and theologian should be dealing with the ontology of abstracts before beginning to learn about anything else. It is that important.

Instead of laying out my case again (read the related posts below), I am going to produce an infographic or three to show that, even if the theist can establish either a Platonic realm where (even if it has no spatio-temporal location…) such ideas immutably exist, or a God in whose mind such ideas exist, they are still no closer to solving the problem, due to the ever-present epistemic barrier. And that’s before we consider that when I invent the idea of a grashextiquet (the refraction of early morning sunshine through a droplet of dew on a honeysuckle plant into the eyes and perception of a badger), it must somehow then pop into the Platonic realm or exist in God’s mind, and this must still take into account people’s disagreement with this definition, perhaps a changing thereof, maybe over time with the development of language and its application by a wide range of people. So on and so forth. See dictionaries and encyclopedias of everything, including philosophy. We construct and create frameworks of ideas – changing and adapting them to suit different causes – and use these to navigate our way through reality.

But we don’t dig into the sands of an ethereal dimension like abstract explorers to discover already-existent individual ideas or frameworks of abstract ideas. We are builders, not discoverers.

Because we cannot access either God or the Platonic realm (PR), there is an epistemic barrier, first formulated by Descartes: all we can know indubitably is cogito ergo sum – “I”, as a thinking or experiencing entity, exist. Past that, anything could be happening: the Matrix, Descartes’ Evil Demon, a simulation, idealism, and so on. So we build up our epistemologies, but these are often measured on their success in pragmatic terms. We don’t know that our thoughts and beliefs have a 1-to-1 correspondence to an ultimate, foundational reality.

We all live in our worlds of our minds and hope that other minds think like ours. Often, we change our minds, and often we seek to change others in the endless journey towards the ideal of cognitive alignment.

If God exists, or a PR, we simply can’t know that our idea of morality is correct; 42,000 denominations of Christianity will show you this. Revelations depend on epistemologies, and epistemologies depend on abstracts and how we see the fundamental and ontological nature of reality.

This is how I see things:

[Infographic: This is JP. He is a conceptual nominalist. He thinks more people should be like him.]

We build up a map of reality, constructing it from the earliest of moments to our present one.

This is a potential reality, too:

Here, the agent believes that there is another abstract reality outside of the undoubted abstract reality of their mind. For instance, the theist believes that their idea of God and their idea of morality are ideally in 1-to-1 correspondence with what is in the pink box. The problem is, all religious people believe this (or are trying to achieve this, involving some progressive revelation), but they mostly believe different things – different gods, different moralities, different revelations or understandings thereof (theologies).

But this is really what is going on with them – even if their god or PR actually exists!

Even if they do happen to hit the jackpot by “getting their god right” or mapping a moral rule accurately, they cannot know this! As a result, they still have to map everything out using their own conceptual mechanisms and frameworks!

This is one of the well-known problem areas for Divine Command Theory – the epistemic criticisms.

Christian: Goodness comes from God’s nature. What he commands is by definition good.

Skeptic: God commanded rape in the OT.

C: No he didn’t. At worst, that’s a human mistake in the Bible.

S: But how do you know God wouldn’t countenance rape?

C: Because God is good and rape is bad.

S: But in order to tell me that God wouldn’t command rape, you are using secular moral reasoning to tell me rape is bad such that it wouldn’t be part of God’s nature since it seems he did command it in the Bible!

This and many other connected issues show that we do morality and apply it to gods; we don’t derive morality from gods. Divine revelations throughout the world have been wildly at odds with each other over time and place. There is simply no way of knowing you are accessing God’s moral dimension or that of a PR.

But this doesn’t have to be about God; it is also about, say, the 2nd Amendment and the right to bear arms. Where does that “inalienable right” exist? How does it compete with my asserted right to not be surrounded by people with guns? How do the rights of a blastocyst trump the rights of a fully-grown, perhaps voting and civically engaged (or not) woman? Who gets to be the arbiter of the accuracy of any correspondence?

Of course, they never follow through with the argument. I have asked several times, and have been met with obfuscation, assertion, and bait and switch, but never with an established ontology of abstracts (rights).


Back to God: personal revelation won’t cut the mustard because every religion in the world, in all places and times, has adherents who believe they have received it. The Christian doesn’t believe the Muslim’s personal revelation, and vice versa.

The atheist believes neither, and doesn’t welcome special pleading and double standards.

So I just jettison all that bother and do it myself, and in doing so, try to make the world a better place. There is no heaven bribing me to act in a certain way, colouring every moral action so that it is, in effect, entirely self-centred. Likewise, there is no threat of hell, beating me with the worst stick in human conception, making me act a certain way.

Instead, I develop moral frameworks that are coherent, pragmatic and useful, and philosophically legible.

I suggest you do the same.

Stay in touch! Like A Tippling Philosopher on Facebook:

A Tippling Philosopher

Please support by sharing or donating, or both! You can buy me a cuppa. Or buy some of my awesome ATP merchandise! Please… It justifies my continuing to do this!



July 9, 2021

Yesterday, I posted a piece arguing that no text of the Bible’s era could accurately contain as much speech as the Bible does. I was challenged on this in a few comments, and I want to clarify some things because I think my claims are being misunderstood. I have never said that oral transmission cannot be an accurate way of transmitting content X 1,000 years down the line. This may or may not be true. I am saying that content X is not accurate in the first place, and there is no way it will be. I am also saying that content X is usually a different genre and type of content than actual historical speech.

Let me remind you of the Thucydides quote from The History of the Peloponnesian War.

In this history I have made use of set speeches some of which were delivered just before and others during the war. I have found it difficult to remember the precise words used in the speeches which I listened to myself and my various informants have experienced the same difficulty; so my method has been, while keeping as closely as possible to the general sense of the words that were actually used, to make the speakers say what, in my opinion, was called for by each situation.

Translation: When recording speeches, Thucydides made things up that he felt fit the overall picture.

Here is a quote from eric:

Small quibble; I would expect the storytellers in an oral storytelling culture to be spectacular at remembering exact speech. The modern analog would be a professional actor remembering many entire plays of lines (…or us normal people remembering hundreds if not thousands of lines of song and commercial lyrics). The oral storytellers of the day probably had The Odyssey and other epic poems down nearly verbatim, and likely would’ve been good at listening to a local speech and remembering it. I would also have expected that non-literate people in general would be better at memorizing speech than us literate folks. That’s no different than saying people who do a lot of math in their head (vs. excel, calculators) are going to be better at retaining numbers and equations in their head.

However AFAIK Thucydides was not an oral storyteller, and neither were any of the 13 disciples of Jesus who, presumably, passed on their conversations with Jesus to others. And I’m not claiming perfection here; just better than us. So as I said, quibble.

To which Luke Breuer added [my bold emphasis]:

It would be helpful if people were to consult actual scholarship on oral traditions—which seems to match what you’re saying here, eric. From N.T. Wright:

Bailey has argued effectively for a position midway between the extremes represented by Bultmann and Gerhardsson. Bultmann proposed that the oral traditions about Jesus were informal and uncontrolled.[23] The community was not interested in preserving or controlling the tradition; it was free to change this way and that, to develop and grow. Gerhardsson and Riesenfeld, by contrast, suggested that Jesus taught his disciples fixed forms of teachings which functioned as formal and controlled.[24] From his wide and prolonged firsthand study of middle-eastern peasant culture, undertaken while working as a theological teacher in various countries in that part of the world, Bailey allows that there are such things as informal and uncontrolled traditions: they occur when rumours, for instance of atrocities, spread like wildfire and become grossly enlarged and reshaped in the process. There is also, to this day, a middle-eastern tradition of formal and controlled tradition, as when Muslims learn the entire Koran by heart, or when Syriac-speaking monks can recite all the hymns of St Ephrem. In between the two, however, Bailey identifies informal and controlled oral traditions. They are informal in that they have no set teacher and students. Anyone can join in—provided they have been part of the community for long enough to qualify. They are controlled in that the whole community knows the traditions well enough to check whether serious innovation is being smuggled in, and to object if it is.[25]

Bailey divides the traditions that are preserved, in this informal yet controlled way, into five categories. There are proverbs, thousands of them (in comparison with the average modern westerner’s knowledge of maybe a few dozen). There are narrative riddles, in which a wise hero solves a problem. There is poetry, both classical and contemporary. There is the parable or story. Finally, there are accounts of important figures in the history of the village or community. The control in each case is exercised by the community. Again Bailey has categorized this into its different patterns. Poems and proverbs allow no flexibility. Some flexibility is allowed within parables, and recollections concerning historical people: ‘the central threads of the story cannot be changed, but flexibility in detail is allowed’.[26] More complete flexibility is allowed when ‘ the material is irrelevant to the identity of the community, and is not judged wise or valuable’.[27] (Jesus and the Victory of God, Chapter Four)

Jonathan’s disbelief that anything like the above could happen may be connected to his conceptual nominalism and “discontinuous ‘I'”, which he later blogged about: The “I”, personhood and abstract objects. It would, perhaps, be philosophically disastrous for the amount of continuity-of-identity you are suggesting, to be possible. And perhaps, it would be disastrous to consider that maybe a people’s identity could be built on facts, rather than fictions.

So, literally none of this is what I am talking about, apart from the last bold category – accounts of important figures in history (itself a problematic claim, since this is what is contested). Also, though I previously mentioned Acts, I am really focusing on the Hebrew Bible. I am not disputing any of what both commenters said here necessarily, it’s just that we are talking past each other about different things.

My issue is about speech. Does it accurately reflect what was said – even if we concede that these figures actually existed?

I have recently been writing on this very topic for my book project, and I could furnish you with stuff by Gunkel, Noth, and others, but I will try not to put in too many quotes. I’ll probably fail. Let’s start with a few, though.

This is how Baruch Schwartz sees it in The Oxford Handbook of the Pentateuch [Schwartz in Baden & Stackert (2021), 19.57 (Epub).]:

[I]t repeatedly becomes clear that independent narrative texts—not oral traditions, but complete written documents—have intentionally and ingeniously been woven together. This discovery is the key to understanding how the Torah was compiled, for pursuant to these findings it emerges that the same narrative threads are present over the course of the entire Torah; that is, the threads that may be detected within a given passage are in fact the continuations of threads that are already intertwined prior to it.

The most prominent source, admittedly, of transmission across time was oral tradition. At least, we have no autographs and no writing at all from any of the Pentateuchal period (the time of the content) – and our earliest extant fragments of the Torah come from 250 BCE.

There are arguments for and against oral transmission as a form of telephone game (Chinese whispers) over prolonged periods of time. But the accuracy of transmission from person to person says little about the truth of what a community started with.

Joel Baden and Jeffrey Stackert discuss this in their opening chapter of The Oxford Handbook of the Pentateuch:[1]

The Neo-Documentarian theory posits four independent sources that contain significant overlaps in content, on the level of the narrative macrostructure and that of the individual episode. Yet because of the general lack of precise linguistic correspondences among them, the theory also holds that the J, E, and P sources were essentially unaware of each other. The explanation for overlaps among the sources thus falls on the existence of a substantial oral tradition standing in the background of the literary texts. Parallel narratives, such as that of Moses getting water from a rock—from J in Exodus 17 and from P in Numbers 20—are attributed not to one source’s knowledge of the other, but to a common oral tradition (one that in this case appears also in poetic form in Deut 33:8). This extends at times even to traditional phrasing, such as…“land of milk and honey,” which appears in the wilderness portions of all four sources. This phrase is understood to have been a long-standing element of the wilderness tradition underlying all the sources equally. The recollections of the plagues in Egypt in Pss 78 and 105 similarly suggest the existence of traditions held in common with pentateuchal sources, without necessarily requiring direct literary relationships between texts.

And yet…[2]

Though the transmission-historical approach owes an enormous debt to Noth’s work, it has largely jettisoned any significant role for oral tradition in the development of the pentateuchal text. Where parallels exist, whether episodic or stylistic, they are assumed to be the product of strictly literary development: one author or redactor writing in awareness and response to another (Ska 2009). Part of the rationale for this view is that oral traditions are fundamentally unrecoverable; it is thus impossible to base a theory on evidence that cannot be confirmed or even accessed. Where parallels exist, it is more reasonable, according to the transmission-historical model, to assume that they are the result of conscious literary reuse. For example, while Noth identified separate Jacob–Esau and Jacob–Bethel traditions, he set them on the preliterary level, and understood them to have been combined before they were taken up and rendered in written form by the authors of the pentateuchal sources. Blum (1984), however, identified the same traditions within the precise wording of the biblical text itself, positing literary development and combination.

We really can’t even begin to tease out with any robust certainty what the original oral segments and traditions were, anyway.

Form criticism is the term dedicated to trying to identify, analyse and explain forms such as oral tradition. We should, however, be wary of claiming that a written output always presupposes an oral tradition that formed its foundation. This view is no longer held:[3]

The notion that genres are pure and the notion that written literature was preceded by an oral phase of tradition have both been roundly debunked, and the evolutionary trajectory imposes an order on history that it will never fit.

The Pentateuch, or Torah, comprises a wide range of different genres. A core literary type that features heavily is law and so one of its main objectives is to instruct.  Genesis is the only book in the Torah that is not built around a body of law. These law collections involve universal and absolute commands, prohibitions, and codes, as well as ritual instruction (that can also function as royal propaganda and organised collections of knowledge).[4] Is this accurate history? Recorded accounts of what was said? Blended in with these legal sections are narratives, though the distinction can be fuzzy, especially since any narratives concern ways in which characters receive legal instruction or break the laws themselves.

One of the main genres, though, is myth.

Myth. Such an evocative word, and one that conservative, literalist scholars simply do not like. Angela Roskop Erisman, Hebrew Bible scholar, elucidates:[5]

Myth tends to be defined in terms of its main subject matter—deities and their exploits—or its function to explain and legitimize ideas about the human condition or social institutions. Biblical scholars long tended to downplay the role of myth in the Pentateuch because it was viewed as a primitive mode of thought characteristic only of polytheistic peoples (e.g., Childs 1960), but we now understand mythmaking to be a “learned and literary act” (Fishbane 2003, 20) and can see that narratives like creation (Genesis 1), the flood story (Genesis 6–9), and the sea crossing (Exodus 14) engage myth and other genres—including law—in profoundly creative and productive ways and constitute some of the most sophisticated and beautiful literature in the Hebrew Bible (Erisman 2014a, 2014b).

Don’t forget that the Bible contains what would otherwise, in other contexts, be understood as magic, not least in the Exodus account, as we have seen earlier. There are a number of biblical stories that, read in a different context, you would have absolutely no compunction in calling myth. And nor would the conservative literalist. Such an approach has not a little hint of double standards, with a side-serving of special pleading.

Form critics like Noth noticed that the repetition of stories across the Torah spoke of multiple sources, and originally multiple oral traditions. So, again, we are not so interested in the accuracy of getting X from A to B in time, but the accuracy of X in the first place. And already we might have 3 varying accounts of the same thing.

What Is My Main Point, Though?

Okay, I’m getting sidetracked on a very interesting detour. Studies in oral transmission are fraught with problems, and there is an awful lot of reason to be dubious of oral transmission being the basis of much of the Torah anyway (I am focusing on the Hebrew Bible right now). But it is also impossible, even if you do think a story or set of verses had an oral tradition underwriting it, to begin to measure its accuracy from A to B. You just can’t.

But this still has very little to do with my actual point.

My point is about the source content right at the beginning of the journey.

Speech.

We have Moses going all over the place. Noah over there. We have Abraham here, there and everywhere. He is almost certainly not literate, given the time and place and context.

Who is writing these conversations down? These are often conversations, not great speeches. Think of every instance of speech in the Bible. For these to be accurate representations of what was said and not just made up by storytellers, authors and redactors, one of two things has to happen:

1) The speech has to be recorded at the time, right away. All of it. In literary form. (Who by? How?) This then gets committed to oral memory (because there are famously no original autographs of any of the Hebrew Bible – none whatsoever) and passed down over hundreds or perhaps thousands of years, representing exactly what was originally said. Each and every conversation.

2) The speech has to be committed to memory then and there. And straight after this, it needs to be committed to oral tradition to retain its original accuracy. But this must be ongoing throughout the lives of all the people involved.

Or some kind of variation of these. Do you not see how absolutely ridiculous these scenarios are?

We are not talking about accurately orally transmitting some epic poem from point F to point G in time. Yes, people may be amazing at that. And the speech in those stories may be transmitted with 100% accuracy from F to G. But this is not what I am saying.

I am talking about the genre we are starting with, and whether the content in that particular genre is history – objective and accurate – and then whether the speech contained in that content is an accurate, historical reflection of what was actually said.

Again, I am struggling to think what I said yesterday.

Look, as far as I am concerned, the Torah probably has elements of cultural memory that contain a distant kernel of historical truth. Could there have been a few Canaanites (since Israelites are arguably a subset of Canaanites – see Canaanites by Jonathan N. Tubb, my current read) coming out of slavery in Egypt? We have some evidence of this (on a small scale – letters detailing one or two Canaanites escaping from Egypt past border fortresses). Could this be the kernel of the Exodus story, built into an epic during the exile of the Judahites into the Babylonian Empire? Very plausibly. Did the Exodus as depicted in the Torah happen? Absolutely not.

Is the speech in the Bible historically accurate?

Absolutely not and, as I am attempting to argue, nor could it plausibly be in any case.

I could go on and have put in a lot more detail but I am in a major rush today – apologies.

NOTES:

[1] Their chapter “Introduction: Convergences and Divergences in Contemporary Pentateuchal Research” in Baden & Stackert (2021), quote from 8:36 (Epub).

[2] Ibid., 8:38 (Epub).

[3] Erisman in Baden & Stackert (2021), 29.23 (Epub).

[4] See Angela Roskop Erisman’s chapter “The Genres of the Pentateuch and Their Social Settings” in Gertz et al (2016).

[5] Erisman in Baden & Stackert (2021), 29.7 (Epub).





June 6, 2021

I have so far listened to the first speaker in this debate, but I will continue to watch the whole thing. In the meantime, I just thought I’d share a couple of early thoughts. First, this is why debates about conceptual nominalism and realism are important: once you understand the distinction, you will see how important one’s understanding of ontology can be. In terms of this debate, what I would like to have heard at the beginning was a definition of gender and what it is used for. This is a word that has uses. How do we use it? This will quite possibly define who is correct.

And, sorry to bore you, but this comes back to the nominalism vs realism debate. There is only the mind-dependent concept of gender. It is, therefore, about how we use it and how we agree it should be used.

Second, the first speaker made a really good point about how many people we have walked past or acknowledged in our lives who were transgender but we didn’t realise, and therefore saw them as the gender that they thought they were. It really reminded me of this quote from that trans person who identifies as a woman but who looks very much like a traditional understanding of a man (beard etc.). Alex Drummond is a trans woman who is “widening the bandwidth of how to be a woman”:

If all you ever see is trans women who completely pass and are completely convincing as natal females, then those of us who just don’t have that kind of luck won’t have the confidence to come out.

For the people who don’t pass I can say “don’t be afraid” because what I’ve discovered is you don’t need to pass, what you need is to act authentically. And if a child sees me and thinks, “Bloody hell, so it’s not as simple as pink or blue or football or ballet – there must be 101 possibilities in between,” then maybe I can serve the greater good.

Lots to think about there. Essentially, though (pun intended, you Thomists – especially since one of the debaters is a Thomist philosopher), this is a debate about ascribing properties to terms. This is a subjective procedure. We are only “right” when we agree by consensus on what a term is and how it should be used. And even then, we can be wrong and end up changing the term. It’s how dictionaries and encyclopedias evolve, and how laws, statutes and constitutions change.

The debate:

(And just continuing: now listening to Bronwyn Winter, she appears to be a feminist who holds that gender is a social construct and not biological in basis, and so takes the position of some feminists in seeing a problem with trans people adopting a gendered label through naturally feeling it, when such labels are socially constructed. I will continue.)

 





May 31, 2021

Here is a guest post from author Gunther Laird, who takes aim at Edward Feser and his Thomistic philosophy. Please grab Laird’s excellent book that takes Feser head on. In the meantime:

The fact that so many intellectuals don’t share their peculiar ideas has been a thorn in the side of Thomists ever since Catholicism fell out of vogue amongst the intelligentsia—so probably around the time of the Protestant Reformation, at the very least. This poses such a problem for them because of one of their more peculiar ideas—that man is created in the image of God by virtue of his intelligence, or “rational nature”, above all else—his ability to understand and consciously act on Forms and Final Causes (re-read some of the previous entries I’ve written for ATP, particularly “Actuality, Abortion, and the SCOTUS”, to understand what those things are and why they’re significant). Feser himself neatly lays out the nature of this paradox in a recent blog entry of his, “Intellectuals in Hell”:

You might at first think, then, that intellectuals in the modern sense of the term would be the most Godlike of human beings, and the most likely to achieve salvation.  Not so.  Indeed, if anything, scripture (with which, naturally, Aquinas agrees) implies the opposite…“I will destroy the wisdom of the wise, and the cleverness of the clever I will thwart” (1 Corinthians 1:19, 27)

Does this indicate that the Bible, and by extension Aquinas, might possibly have been wrong? Not at all, says Feser, and he explains to us benighted masses how people who apparently possess the most Godly attribute (intelligence) so often fall away from God. It’s all in the Will, you see:

…like everything else in nature, the intellect and will have final causes that determine what makes for a good or bad specimen of the kind.  The intellect is naturally directed toward knowledge of truth, and the will is naturally directed toward pursuit of what is good.  But truths and goods are hierarchical, with some more important than others.  A good intellect is one focused on the highest of truths and a good will is one set on the highest of goods.  Naturally, then, an intellect is deficient to the extent that it is in error or its attention is distracted by lesser truths, and a will is deficient to the extent that it fails to pursue what is good or aims at what is in fact evil…to the extent that an intellect or will is directed away from the true or the good, and especially to the extent that they are directed away from God, they are corrupt and directed away from salvation.

But how could that happen in an intellect that is powerful?  How could it fall into such grave error?  The fundamental way it can happen is if the will is misdirected.  For when that occurs, an intellect is less likely to arrive at truth, and likely instead to seek out rationalizations for the evil it has fallen in love with.  This is one reason why intellectuals may actually be more likely to be damned than less intelligent people.  Having more powerful intellects, they are better able to spin clever sophistries by which they can blind themselves to the truth. [Emphasis added]

Demonstrating things like God’s existence or the natural law foundations of traditional morality is not, after all, that hard.

This is all well and good at first glance, though if I dare say so myself, my own book has shown it’s far from easy to demonstrate God’s existence even if you accept Feser’s Platonic premises. But there’s a bit of trouble when you look closely at two of his assertions, which I’ll lay out as I understand them: A, that the intellect is or ought to be focused on the “higher” goods, with the highest being God, and B, the implication that Thomists (and Platonists generally) like Feser are somehow less inclined to spin self-serving sophistries.

Pages 235-236 of my book, The Unnecessary Science [UK], provide a good critical analysis of Feser’s assertion A. I’ll just repeat what I said there:

…if one accepts that the body parts/behaviors/etc. of organisms have functions, it’s not hard to tell what those are. The fangs of a spider have the purpose, or telos, of piercing prey and injecting venom. The sharp teeth of squirrels, on the other hand, have the telos of cracking open nuts and acorns. This is so the organism survives; I think Feser would agree with this.

So when we look at human beings, we notice something interesting. Humans don’t have many natural weapons of our own. Our teeth are relatively harmless; they can’t inject poison like a spider’s or crack open tough matter like a squirrel’s. We don’t have any other weapons, like fangs or claws or anything else that wolves or lions or whatever possess. The only thing we do have is our minds, or more specifically, our capacity to comprehend Forms (again, assuming Feser’s theory is correct). Understanding the Form of Fire helped us cook food, understanding the Form of Triangularity helped us make better arrowheads to hunt prey, and so on, and so forth. So it seems our rationality is equivalent to a spider’s fangs or a squirrel’s incisors–the weapon we use for our survival.

But that implies the telos of our rationality is to help us survive, not to seek truth for its own sake. A spider’s fangs are made for envenomating, but envenomation isn’t a goal in and of itself; the process of poisoning prey is to help the spider eat and survive. A squirrel’s incisors are made for cracking open nuts, but it doesn’t crack open nuts and acorns for the sake of it; that’s not a good in and of itself. It’s that cracking open nuts helps the squirrel eat and thus survive. So, by the same token, the human rational faculties have the telos of gaining truth, but only to help us survive. There’s nothing beyond that; there’s nothing inherently “better” about “understanding deeper truths,” just as there’s nothing inherently “better” about a spider producing stronger poison or a squirrel biting through harder material–all that matters is whether or not it helps those organisms survive.

Good for us—but you’ll notice I haven’t mentioned God once. It seems, taking an evolutionary view of our “Nature” as “rational animals,” that rationality itself is, first and foremost, a tool we used to survive. Perhaps it’s a good thing that we can comprehend God exists, but if so, that would be merely an “accidental characteristic”—the actual end, or “final cause” of our rationality is to help us survive, reproduce, and keep our children fed. And we arrive at this distinctly un-Feserian conclusion by relying entirely on his own metaphysical methodology. Again, a spider’s fangs have the function of killing prey—that is its immediate final cause. But its true final cause, the sake for which it exists at all, is to kill prey so it can be digested and help the spider survive and reproduce. A squirrel buries nuts—that is the immediate final cause of its instincts. But the true final cause of its behavior is to store food so it can survive winter. And human beings are the same way. Our minds are capable of grasping truth because we lack fangs or claws and needed something else to help us survive, and an intellect that could grasp, say, triangularity or predictable animal behavior would allow us to create pointy arrows and spears and hunt dangerous animals better. From this perspective, then, the final cause of our intellect is not necessarily to know God, but to know just enough to ensure our survival and propagation.

In other words, even if we accept the mind is “directed towards” truth (because otherwise there’s no way we could assume anything we perceive or believe is actually true), nothing about that entails it’s “directed towards” specifically “the highest of truths.” Even if we assume God exists, that wouldn’t entail He wants us to know about Him or that we’re under any obligation to know about Him—Feser admits as much in Five Proofs, where he states “Aristotle famously thought that the divine Unmoved Mover of the world contemplated himself eternally, but took no cognizance of us.” (300). And under this light, the tendency of the moderns to disregard “higher truths” in favor of empiricism actually fulfills their “telos” even if they themselves might deny teleology. In The Last Superstition, page 175, Feser laments that “the moderns…sought to reorient intellectual endeavor to improving man’s lot in this life, and to defusing post-Reformation religious tensions.” If, as I argued above, the telos of our intellect is to find truth that helps us survive and reproduce rather than find truth for the sake of truth, then modern intellectuals fulfill that telos better than teleological Thomists or Platonists, who waste more time contemplating “higher” rather than practical truths. Now, Feser might argue that modern intellectuals don’t help much when it comes to reproduction, unlike Catholics with tons and tons of children (page 174 of TLS), but remember that survival counts for something as well. A “degenerate” modern intellectual with 1 or 2 kids, both of whom survive to adulthood, is not much worse off than a “traditional” peasant with 6 kids, 4 or 5 of whom have died of smallpox and/or getting caught up in something like, say, the Thirty Years War.

(As an aside, a critical reader, Verbose Stoic, has responded to this point with the claim that an organism’s individual faculties can be “co-opted” by evolution, and that those faculties still have their own ends regardless of survival, meaning the purpose of our minds is to seek truth and it just happens that this helps survival. My response is that even in this loose sense, Verbose Stoic is still equivocating between ‘truth’ in general and ‘higher truths’ in particular. He and Feser would have to prove that the human mind is specifically directed towards the latter rather than a different subset of the former, like ‘useful truths’ or ‘practical knowledge,’ and so far as I know neither has done that.)

So much for Feser’s assertion A. What about B, the claim that intellectuals don’t reject God because atheism itself is a mark of intelligence, but rather that intelligent people are better able to construct elaborate, convincing, but ultimately hollow rationalizations for their corrupted will?

One wonders if a similar accusation could be leveled against philosophers of Feser’s bent, which he labels “ur-Platonists.” He says in his blog post,

“Powerful intellects are much less likely to become corrupted to the extent that their view of things falls within what is sometimes called the Ur-Platonist family of philosophical positions.  And they are very likely to become corrupted to the extent that they depart from this family – say, by being drawn to materialism, or mechanism, or nominalism, or relativism, or skepticism.”

In the other linked post, he explains why “Ur-Platonism” is supposedly so great. Now, I’m not an expert on Plato, so I can’t say whether or not Feser is misrepresenting the man here. But his defense of what he claims to be the Platonic Tradition against (what he also claims to be) its modern opponents is this:

“What is difficult is cutting through the enormous tangle of sophistries by which modern thinkers have obscured our view of what was clear enough to intellects like those of Plato, Aristotle, Plotinus and the like.  You have to deny vast swaths of common sense (concerning matters like causality and teleology, for example) in order to get skepticism about these things off the ground.  Less intelligent and well-educated people find it harder to do that, which is why they are less likely to reject traditional morality and religion.  It takes a high degree of intelligence to develop narratives and theories that not only defy common sense, but make it seem reasonable to do so.”

Ah, the old chestnut of common sense—isn’t it curious that it always seems to be rarest among the opponents of whoever’s claiming to be its champion? But even if modern thinkers are sophists cooking up rationalizations for their supposed sins, does that mean Platonists—going all the way back from Feser through Aquinas to Aristotle and Plato himself—are necessarily honest purveyors of wisdom? Not at all. In fact, I very much wonder whether their supposed adherence to “common sense” just made it easier for them to fool common people.

Feser might be right when he says a strong intellect makes it easier to break with tradition. But a strong intellect also makes it easier to fool “traditional” people as well. Imagine a small, peaceful village that has lived the same way for thousands of years. One day a loony philosopher arrives spouting crazy theories that go against everything they’ve always believed, and insisting they all have to serve him because he’s so smart. Naturally, the villagers, not smart enough to understand his arguments, just toss him out and that’s that.

The next day, a different philosopher arrives, this time reassuring the somewhat slow-witted populace that everything they already believe is absolutely true. Things really do have Final Causes and Forms, and the benevolent Gods are watching over them. This philosopher speaks well, the people are pleased to hear their beliefs validated by a “wise man,” and they are very open to hearing more of his views. Now, this fellow claims that being such an intelligent and perceptive scholar of Common Sense, he has the right to rule the entire village as, say, a sort of chief or king—why, one might even call him a Philosopher King! The people agree, and soon enough the second philosopher becomes the absolute dictator of the little village, living in its best house and eating all of its finest food (he deserves it, as he “raised common sense to a new level”), and executing anyone who complains or even finds such a state of affairs suspicious—such people must have corrupted wills, after all.

A skeptical outsider, looking at these developments, might—just might—suspect that the second philosopher is every bit as venal and malicious as the first—it’s just that by appealing to “common sense” and established tradition rather than flouting them, it was easier for him to fleece the unfortunate commoners. And a perceptive reader might—just might—suspect that the second philosopher in this scenario sounds a lot like an “Ur-Platonist,” at least in Feser’s telling. And thus can we see the problem with Feser’s assertion B—the fact that Platonism seems at first blush to be more “common-sensical” than modern philosophy doesn’t mean it’s true, it just means it’s more appealing, and that flaws in Platonist reasoning are often harder to spot because of it.

As we have seen, the two important assertions Feser makes in his blog post are, under closer examination, controversial (in the case of the first) or can just as easily apply to his own side (in the case of the second). This might lead one to suspect that modern intellectuals just might have better reasons to reject Feser’s “traditional morality” and/or religion than he lets on. If you find yourself harboring that suspicion, then my book, The Unnecessary Science, just might prove it well-founded. I’ve already run long in this entry, but I describe the many significant problems in Feser’s arguments for God and “traditional morality” in that monograph, and if you enjoyed this entry, you’ll love the book! Check it out here.





February 13, 2021

This was the question I sought to answer in my book of the same name, with the subtitle Countering William Lane Craig’s Kalam Cosmological Argument [UK]. I often think this is my best book. It’s tight, successfully pitched at just the right level and audience, has fantastic reviews to attest to it, and achieves the intentions I had for it. I was also very happy for Jeffery Jay Lowder to provide the foreword.

Here are some of the arguments contained within:

Please help out by grabbing a copy.

I forgot that people review on Goodreads too, so here are some to share with you:

A solid philosophical refutation of the nonsense that is William Lane Craig’s “Kalam Cosmological Argument” (a rehash of the classic cosmological argument with extra-Craigian holes). JMS Pearce is a new author to me, and the look of the book had me worried (looks like something they’d have for sale at revival meetings). But, as I find WLC’s K.C.Argument to be worthy of a good debunking, I had to check this out. And I am happy I did. Pearce does a really good job of considering the philosophical and physics-related issues that together tear up and toss away WLC’s cheap attempt to sound like an academic philosopher. The book is intense enough that it would help if you actually had a couple of undergrad philos. classes to follow the reasoning completely. But, I think the arguments are clear enough even if you don’t. And if you do happen to be a born-againer, you should read Pearce’s discussion of the topic just to see if you have a grasp of the complexities involved in what WLC tends to present as simple, obvious, and logically convincing (mostly what WLC presents is “simple” in another sense of the word).

And (this one is on Amazon, too):

Did God Create the Universe from Nothing? Countering William Craig’s Kalam Cosmological Argument by Jonathan MS Pearce

“Did God Create the Universe from Nothing?” is a very good intermediate-level book that critically addresses the Kalam Cosmological Argument (KCA) to satisfaction. Jonathan M.S. Pearce provides the readers with a user’s guide on how to counter William Craig’s defense of the KCA. This useful 143-page book includes the following seven parts: 1. The Background, 2. The Argument, 3. Premise 1, 4. Premise 2, 5. The Syllogism’s Conclusion, 6. Potential Objections, and 7. Conclusion.

Positives:
1. A well-written, well-researched book.
2. Pearce tackles William Lane Craig’s favorite argument for the existence of a “God”, the Kalam Cosmological Argument (KCA).
3. Great use of reason and a strong philosophical background to dismantle the KCA.
4. Provides the history behind the KCA.
5. If you are going to debunk the KCA, might as well dissect it from its most prominent defender’s point of view, William Lane Craig.
6. Goes over the three steps of the KCA in detail, the syllogism.
7. Goes over key concepts that will help the reader understand the premises.
8. Goes over the many problems of the KCA. “We have no experience of the origin of worlds to tell us that worlds don’t come into existence like that.” “Craig is relying on mere human intuition, and this argument is at best, then, an intuitive argument (albeit one that he uses observation to support).”
9. A look at causality. “So the causality of things happening now is that initial singularity or creation event. As I will show later, nothing has begun to exist, and no cause has begun to exist, other than that first cause—the Big Bang singularity.”
10. The circularity of the first argument. “Firstly, the only thing, it can be argued, that “has begun to exist” is the universe itself (i.e. all the matter and energy that constitute the universe and everything in it). Thus the first premise and the conclusion are synonymous—the argument is entirely circular.” “Nothing comes into existence but is transformed from already existing matter or energy.”
11. The fallacy of equivocation. “This amounts, then, to a fallacy of equivocation whereby the author is using two distinct meanings of the same term in a syllogism. This makes the argument logically invalid or fallacious.”
12. Explains why the KCA and libertarian free will are incompatible.
13. A look at Hilbert’s Hotel and why its use is flawed as it relates to the KCA. “The story of Hilbert’s Hotel simply highlights another such property that distinguishes actual infinite collections from finite ones: just knowing that an infinite subcollection has been removed from an infinite collection of objects does not allow one to determine how many objects remain. But this property itself does not entail that actual infinite collections are impossible.”
14. Profound statements. “Therefore, if creation out of nothing (ex nihilo) is beyond human understanding, then the hypothesis that it occurred cannot explain anything.”
15. A list of scientific theories that the second premise must debunk in order to be upheld. “That an oscillating universe is impossible.” “However, it is worth noting that Sean Carroll, in his aforementioned debate with Craig, stated that there were over a dozen plausible models for the universe, and this included some eternal ones!”
16. One of the strongest arguments of this book and worth sharing, “In case after case, without exception, the trend has been to find that purely natural causes underlie any phenomena. Not once has the cause of anything turned out to really be God’s wrath or intelligent meddling, or demonic mischief, or anything supernatural at all. The collective weight of these observations is enormous: supernaturalism has been tested at least a million times and has always lost; naturalism has been tested at least a million times and has always won.”
17. Provocative questions. “If God was and still is perfect, what need, or why intend the creation of the world?”
18. Why science makes it worse for the KCA. “When it comes to physics, one physicist has told me they know of no working physicist who holds to Craig’s Neo-Lorentzian interpretation.”
19. Provides a chapter where Pearce sets out to what he thinks an apologist like Craig might claim as counter-arguments and proceeds to defend his arguments from such objections.
20. A satisfactory conclusion. “Indeed, the real aim of this book was not to disprove that God created the universe but to show that the KCA cannot prove that God did, using those premises and the resulting conclusion.”
21. Notes and formal bibliography provided.

Negatives:
1. I fear that this book will have a limited audience because philosophy is not everybody’s cup of tea. Furthermore, this book’s focus is solely on the KCA.
2. Despite being at worst an intermediate-level book, some concepts are still hard to follow.
3. Lack of visual supplementary material that may have helped the layperson better understand the concepts presented.

In summary, this is a solid effort from Professor Pearce. I like that he goes after William Lane Craig’s best arguments in defense of the KCA and has the integrity to state what we currently know and what we don’t know. In some respects, Craig makes use of the God of the Gaps fallacy to insert “God” where at best we must acknowledge our common agnosticism. Not the easiest topic to follow but those interested in it will find this book to be worth the read. I recommend it!

Further suggestions: “The Problem with “God”: Classical Theism under the Spotlight” by the same author, “God’s Gravedigger: Why no Deity Exists” by Raymond Bradley, “Unapologetic: Why Philosophy of Religion Must End” and “Christianity In the Light of Science” by John Loftus, “The Portable Atheist” by Christopher Hitchens, “Godless: How an Evangelical Preacher Became One of America’s Leading Atheists” by Dan Barker, “A Manual for Creating Atheists” by Peter Boghossian, “Jesus Interrupted” by Bart D. Ehrman, “Why I Am Not a Christian” by Richard Carrier, “The Soul Fallacy” by Julien Musolino, “The Big Picture” by Sean Carroll, “The Illusion of God’s Presence” by John C. Wathey, “The Not-So-Intelligent Designer” by Abby Hafer, and “The Universe” by John Brockman.

Finally, this balanced review:

This reviews the main modern cosmological argument, Craig’s Kalam. The argument is
1) Whatever begins to exist has a cause
2) The universe began to exist
3) Therefore, the universe has a cause

There are four main sections in the book: 1) premise 1 (causality), 2) premise 2 (beginning of the universe), 3) conclusion of the argument, and 4) potential objections. I thought the first two sections were very poor and the last two sections were very good and made the whole book absolutely worth reading for anyone interested in the argument.

Section 1 was a weak discussion on causation (both efficient and simultaneous causation) and a mostly pointless discussion on abstract objects (which Craig has written multiple books on). As expected, quantum mechanics and free will were invoked (incorrectly, in my view) as points of contention. Section two was a discussion that was not representative of the cosmology of origins, including some quote mining of physicists and leaving out important theorems and information. He included the worthless Quantum Eternity Theorem by Carroll and yet neglected the Generalized Second Law by Aron Wall and various proofs on finitude of supposedly eternal models.

The next two sections were extremely good and challenging. The third section’s discussion of causality and temporality was extremely helpful, and brought out why characterizing “cause” as “efficient cause” in premise 1 could be problematic for Craig. The most helpful part of this section was the discussion of theories of time. This brought out the serious issues that the Kalam faces by being dependent on an A-theory of time. Relativity (special and general) strongly implicates B-theory, and Craig himself identifies the argument as depending entirely on A-theory. Some philosophers seem to be trying to work on a Kalam on B-theory, but Craig and some atheists are convinced that this is a fruitless endeavor because, in effect, creation is meaningless on B-theory. This is something I will need to look into, as I did not realize how out-there Craig’s neo-Lorentzian interpretation of special relativity was.

The final section was great for its discussion on mereological nihilism and making a parody argument with material causation to disprove creation ex nihilo.

All in all, the book is unquestionably worth reading.

 





January 3, 2021

The following is a quote by Verbose Stoic, a regular commenter here with whom I often disagree but who substantiates his claims with detail and, well, substance. The sort of commenter I like – he probably does this because he has a blog of his own, and sometimes takes me to task there. I only wish I had more time to discuss matters with him on the threads here.

His comment (not pasted here in its entirety) seeks to take Richard Carrier, with whom I have recently been debating free will, to task. I am not so interested in that, but I am interested in what it says about me and confirmation bias. I will put my quotes in italics and bold-emphasise his points to discuss:

In fact, the more I read that paragraph, the angrier I get with Carrier for some disingenuous tactics (or sloppy work). And I like Carrier a lot.

The only reason I can think of for why you haven’t noticed his disingenuous tactics and sloppy work in the past is because he in general agrees with you. On my blog I’ve taken on some of his posts on topics that I care not one whit about — polyamory and some mythicism (although from reading Carrier and others around the topics I have come to care about mythicism far less and come to the conclusion that polyamory is actually morally inferior to monoamory despite not caring at all about it going in) — precisely to show that his work is indeed sloppy and that he constantly misrepresents those he argues with while constantly and often viciously insulting them.

A prime example of that here is him constantly claiming that hard determinists are basing their conclusions on things determined from “the ivory tower”, while ignoring that a great many hard determinists rely on empirical results like the Libet experiments instead….

The whole comment is worth a read.

I rate Richard Carrier a lot. I love so much of his work, such as what he has written on the Nativity and Resurrection of Jesus particularly, to the point that I reference an awful lot of his work in my book on the Nativity and my upcoming book on the Resurrection. I am convinced his work is good and thorough. His work on Luke in Not the Impossible Faith is superb.

But then there is the issue that we all suffer from confirmation bias: do I think that his work is so good because I broadly agree with his conclusions? Perhaps. We are not perfect.

On the flip side, I am not a mythicist, though I am fairly ambivalent and don’t think the mythicist position differs meaningfully from my own historicist position as detailed here. This could indicate that whilst I rate him, I don’t always agree. But, to counter this, perhaps I am happy to disagree only where it doesn’t really matter – my disagreement is rather toothless.

We disagree quite publicly on free will as documented in my recent ongoing series. However, much of this disagreement is actually semantic and without huge ramification.

The problem is, to check on, say, Carrier’s work (though this could apply to any scholar or source with whom you agree), one would effectively have to do the work oneself, all over again, to verify the claims that he is making. Which is completely impractical and defeats the object of short-cutting to an expert in the first place.

I suppose it is an inductive thing: if I have found a given expert to be useful and reliable on previous cases, then I am epistemically warranted to continue relying on them. But, is it that I ignore or give little value to counter-cases because it is just too much hard work to deal with and would undercut my reliance on that person in the past? It’s so hard to tell. Can someone be really good in a majority of cases but get it wrong occasionally here and there? Especially if they write on a vast landscape of topics? In other words, I shouldn’t throw the baby out with the bathwater. Perhaps.
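The inductive point above – that a good track record warrants continued reliance, and that a stray counter-case shouldn’t by itself undercut it – can be made precise with a toy Bayesian sketch. To be clear, this snippet is entirely my own illustration: the function, the numbers and the model choice are assumptions for the sake of argument, not anything drawn from the discussion itself.

```python
# Toy model: trust in an expert as a Beta posterior over the probability
# that their next claim checks out, given past verified/failed checks.

def trust(successes: int, failures: int) -> float:
    """Posterior mean of Beta(successes + 1, failures + 1),
    starting from a uniform prior over the expert's reliability."""
    return (successes + 1) / (successes + failures + 2)

# An expert verified on 9 of 10 past checks:
print(round(trust(9, 1), 2))    # 0.83

# A single counter-case barely dents a long, good track record:
print(round(trust(90, 10), 2))  # 0.89
```

On this sketch, discounting the odd miss from a reliably good source is not necessarily confirmation bias; it is roughly what a rational updater would do. The bias question is whether we count the misses honestly in the first place.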

Being skeptical and being rigorous is a tonne of work. Being cognizant of the problems is at least a healthy first step. But doubting everything one reads and potentially relies on leads to a Pyrrhonian Skepticism that can be paralysing.

We should always steelman a position and we should always check our sources. But how far should this go? To what level of verification do we work? What are the best tools to arrive at the most robust and accurate conclusions? How do we mitigate confirmation bias without having to do all of the work again ourselves?

Questions, questions, questions. I’m leaving the answers up to you.

 

December 9, 2020

I was recently involved in a discussion/argument that started off with a Christian stating:

We want freedom from stupid govt. lockdowns for a non-pandemic.

[I will edit these quick Facebook comments only for grammar/punctuation.] Which then distilled down to a conversation about abortion. Because that’s what Christians do, right? I remained true to form:

If you think it’s decent not to have abortions, why is it that your God has literally designed into the system and allowed, for hundreds of thousands of years, literally billions of spontaneous, natural abortions? See my article God Loves Abortion.

It’s the classic comeback to which the Christian only ever has one answer, and, in this case, they obliged:

God gives life and has the right to take it away naturally at any time He wants. We are His creatures and are only allowed to take lives when authorized by Him to do so, such as self defense.

You can predict my next move, if you know me well enough:

Next, you will be using that to justify killing your children. I would like you to establish what that right is, what its ontology is and how it works. I’m serious. Literally, what is a right made of? Is it just a magic way to allow God to be a complete bastard?

The reply did not answer the question:

Scripture is His self-revelation about His own nature, our nature and limits and the history of redemption.
The Book Of Job says “the Lord gives and the Lord takes away, blessed be the name of the Lord.”
Short answer: God has that right because the Bible says He does.

My reply, in calling out his might-makes-right approach:

Are you serious? How does that answer any of the questions? What is the ontology of a right? Why do you think God not only doesn’t have to follow his own rules, but he literally bakes the opposite into the system – designs and creates, knowingly, a system where up to 75% of fertilised eggs naturally die? Because he can? Because he has a right to be a “baby murderer”?

That’s the world’s worst answer.

He was still not having it:

Killing someone ISN’T murder if you have the right to do so. That’s why killing in self-defense and capital punishment are not murder. Those are circumstances where we have the right to take someone’s life.

Since God always has the right to take someone’s life whenever He pleases, his killing of someone is by definition never murder, since I get my principles and definitions on this from Scripture.

It’s not a matter of “not following His own rules”. Saying He has to “follow the same rules” presupposes He’s on the same level metaphysically as man. Those rules apply to us just as much as to Him, right?… but that is precisely the issue in dispute: whether He exists as a transcendent Creator or not.

As far as baking into the system: there are all sorts of ways that Creation works that we may not like. That doesn’t mean God doesn’t exist.

The person who owned the thread, an atheist philosopher, added his own claims about intrinsic vs. extrinsic rights, since the Christian was arguing that we had intrinsic rights; the atheist said:

No, you clearly do not understand the word “intrinsic.” If God “gives us” our right to life, then we merely have an “extrinsic” right to life. Unless we have an INTRINSIC right to life, then you are simply a nihilist. You don’t actually believe in morality. You believe might makes right.

To say this is “There is no God and I hate Him” only further confirms what I said about you earlier. You’re a sophist pushing polemics from a sense of conviction rather than reason. You aren’t actually doing philosophy. You’re playing at it as a bad faith actor.

I mean, if you want to use that terminology, then fine, we have an extrinsic right to life. But then you’re just playing word games.

It isn’t “might makes right”. The might that God has is Holy Righteous might, not arbitrary might.

I’m simply saying what your objection *amounts* to. I’m not saying you literally believe that.

For those unread on this: an intrinsic right to continued existence implies a CATEGORICAL right to life, not merely a CONDITIONAL right to life, i.e., we have a right to life IF AND ONLY IF God wants us to have one. No, a categorical right to life implies a right that is not conditioned upon facts about any subject.

The reason I am setting all this out is that it lays the groundwork for a discussion I then had with my fellow atheist, one that shows atheists adhere to vastly differing views of moral philosophy; I will set out something of his view tomorrow.

The Christian came back with:

Then we only have a conditional right to life, and it’s conditioned by God.

And my fellow atheist pointed out:

Then that’s nihilism.

That’s might makes right.

That’s arbitrary.

Welcome to moral philosophy.

Reason > authority.

Reason is, by definition, not arbitrary. Authority is, by implication of not being identical with reason, arbitrary [among other reasons].

The Christian opined to my fellow atheist:

I use reason but have an ultimate authority.

So do you. But your ultimate authority is different than mine. Your ultimate authority is your own human autonomy. My authority is God.

Quit characterizing this as Reason Vs Authority because we both have both of those.

The atheist replied:

If what matters morally depends on the arbitrary (i.e., morally unjustified) will of God or any other subject, then we would have no intrinsic right to life because God could justifiably take our lives at will. Since we do have an intrinsic right to life, this view could not be true.

My “ultimate authority” is the set of considerations that count in favor of believing and acting in certain ways. Your “ultimate authority” is obeying the commands of a commander. This is absolutely a disagreement about reason and authority. You are flailing under the weight of the arbitrariness objection because you are insisting on the authority of God rather than reason-implying morality.

This is the classic problem with Divine Command Theory and the Euthyphro Dilemma (see my 16 Problems with Divine Command Theory). I suggest reading those arguments!

I chimed in with my usual:

I don’t believe rights “exist” – they are, like all abstract ideas, conceptual constructions with no ontic reality. I am a conceptual nominalist.

This should explain it all: “Human Rights Don’t Exist until We Construct and Codify Them”.

Perfectly encapsulated by the 2A debate. I don’t have that right, as a Brit. I don’t want that right, either. Indeed, I want the right not to be surrounded by people with guns.

That right is particular to the US because they conceived it, codified it into the Constitution, and enacted it into law.

Finally, for the purposes of this piece, this is where we are at, with me replying to the Christian:

“If rights don’t exist it’s senseless to act on them.”

They do exist, just not in an ontic sense; they exist conceptually, in our minds, and we argue about them. When we agree on them, we write down our agreement (in your case, the Constitution, etc.), and this then gets codified into law by your government. This, then, is only meaningful if enacted and enforced by the authorities.

This is DEMONSTRABLY what happens over time and geography; it is why you have the “right” to carry a gun and I don’t; it’s why some countries outlawed slavery and others didn’t; it’s why you have certain rules and rights in one state and can then cross an arbitrary line into another where they no longer hold.

That’s what we do: argue, agree by consensus, pass laws by majority, enforce the laws; rinse and repeat and hopefully refine. But sometimes we go backwards.

Your position is this: rights exist in the mind of God. We have to guess what they are and then enact them. But even Christians disagree on this, so we have the problem of a lack of clarity and divine hiddenness.

If you are arguing they exist not-God-subjectively (outside of his mind), then we have the Euthyphro Dilemma.

This is what Benjamin [the atheist] is partly getting at. It is might makes right, unless God can defer to moral reasoning. If he can, then we don’t need God for the rights and reasoning – we have moral reasoning, objective and separate from God.

So, to return to the beginning, what are rights made of, ontologically, and how do they work?



October 22, 2020

Here is a guest piece from author Gunther Laird, who recently wrote The Unnecessary Science: A Critical Analysis of Natural Law Theory (UK), which I edited and consulted on. Please grab a copy! This piece is particularly pertinent given what is going on in the Supreme Court right now. 

Actuality and Abortion

Gunther Laird

As readers of A Tippling Philosopher are well aware by now (having read the previous entry I wrote for this blog, “The Problems of Pure Act”), Edward Feser is one of the most popular and prolific defenders of the Catholic religion writing today. Those same readers will, of course, be aware that I contest his efforts directly in the new book I have recently published, The Unnecessary Science: A Critical Analysis of Natural Law Theory. While Feser has written mostly on metaphysics, he has addressed matters of morality as well, and, as you can expect, my book attacks his worldview on those grounds as well. Of particular interest to contemporary readers, given the current fracas over Roe v. Wade, are Feser’s arguments that abortion is immoral. These arguments—the sort that other Catholics such as Amy Barrett prefer—might seem quite different from most other pro-life claims because they are actually based on pre-Christian philosophy, specifically the Greek thinker Aristotle’s theories revolving around essentialism, actuality and potentiality. This entry is an excerpt from chapter 3 of my book that refutes Feser’s pro-life position based on his own Aristotelian reasoning.[1]

For new readers who haven’t read my previous entry, “Problems of Pure Act,” I must first provide a brief overview of these Aristotelian concepts. Rest assured that I go into much more depth on all of these topics in the actual text of the book, but I must be as concise as possible for the purposes of a single blog entry.

According to Aristotle (and his teacher, Plato, and their intellectual descendants, such as Feser), there is no way to make sense of the world around us without accepting the reality of Essences, which can also be called Forms. An Essence or Form is what defines a thing and distinguishes it from everything else. The specific wavelengths of light associated with the color green, for instance, define it and distinguish it from other colors, and the equidistance of all points on its surface from its center defines a sphere and distinguishes it from other shapes.[2] Now, it’s not just colors and shapes that have Forms. According to natural law adherents, everything has a Form. Concepts (there’s something that defines justice and distinguishes it from tyranny), artifacts (there’s something that defines a PlayStation 5 and distinguishes it from a Game Boy), and even living things (there’s something that defines a human being and distinguishes her from a squirrel or jellyfish).[3]

For the purposes of argument and saving space, let’s skip over the arguments Feser presents for essentialism (and against its alternatives, such as nominalism) and concede that he’s right. This, by the way, is one of the strengths of my book—in The Unnecessary Science, I concede many of Feser’s starting premises but use them to refute the arguments he makes with them; by generously agreeing to fight on his own ground and on his own terms, I show my own position to be that much stronger. Anyway, it’s one thing to say that we can distinguish between colors by referring to wavelengths of light, or shapes by referring to their mathematical properties, but how can we distinguish between living things? What, precisely, makes a human different from a squirrel or jellyfish? According to Feser, the answer lies in actuality and potentiality (sometimes called act and potency).

Act and potency are more Aristotelian terms that were initially created to explain how change could occur. In Feser’s view, they also prove the existence of God, but I’ve critiqued that specific argument in “The Problems of Pure Act,” so I’ll ignore that now—let’s focus entirely on ethics for the moment.

To put it as concisely as possible, “act” or “actuality” is how a given thing is or behaves right now, and “potency” or “potentiality” is how it will (or might) be or behave in the future, and both actuality and potentiality are determined by its Form or Essence. For instance, the Form of a rubber ball entails it is shaped like a sphere and made out of rubber, which defines and distinguishes it from, say, erasers (which are made of rubber but shaped differently), ball bearings (which are spherical but made of metal), or plastic triangles (which are both made of different materials and shaped differently entirely). Now, “being made of rubber” and “being spherical” are the ball’s actualities—they are what it is right now. But being a rubber ball—that is to say, having the Form or Essence of a rubber ball—also entails it has certain potentialities, or things it might possibly do in the future. For instance, if our rubber ball is sitting motionless on the floor right now, it is potentially rolling across the room or potentially flying through the air, if someone in the future were to give it a push or pick it up and throw it. Note as well, however, that there are some potentialities it does not have, because it will never display those sorts of behaviors or have those sorts of effects. For instance, a rubber ball will never produce nuclear power like a rod of uranium might, nor will it ever float off to the moon or chase someone around a room all by itself. It simply doesn’t have those potentialities.[4]

Now, according to Feser, the Form or Essence of a human being, that which objectively defines us and distinguishes us from everything else, is that of a “rational animal.” In the Aristotelian view, humans are the only creatures on Earth that have ever existed with the capacity for rational thought. Obviously, that’s debatable, but again for the purposes of argument let’s accept it. Now, review the previous paragraph: A thing’s Form or Essence entails what it is right now and what it might possibly be in the future (its potentialities). One corollary of this is that you can tell something’s Essence by what potentialities it possesses, because actuality (what something is right now) always grounds potentialities. So, Feser asks us to look at the human fetus under this metaphysical schema. According to him, a human is a rational animal, which means that anything which is either thinking right now or can think in the future is a rational animal (and thus deserves the rights attending to rational animals, or in other words, human rights, most notably the right to live). At first glance, it might seem like a zygote, embryo, or fetus definitely isn’t a rational animal because such tiny, underdeveloped organisms can’t think like adults or even children can. But according to Feser, they have the “potential” to reason in a way nothing else does. In “the natural course of things” a fetus will be born, grow up, and start to reason. That differentiates it from, say, hair or skin cells, which will never grow into thinking beings (barring the invention of some sci-fi cloning technology), or individual sperm and egg cells, which contain only half of the chromosomes, or blueprint, for an actual human being. In other words, a fetus can reason in the future, even if it’s not reasoning right now. And if future behavior is one of a thing’s potentialities, and you can tell a thing’s Essence from its potentialities, then it follows, in metaphysical terms (even if it’s not obvious), that a tiny little fetus is indeed an actual human being. This is because human beings are (by dint of our Essence) the only creatures who have the potential to think. And therefore, killing a fetus is killing an innocent human being, which is absolutely forbidden in all circumstances.

There are just a few clarifications to be made before advancing to my critique of Feser’s position. First, he believes in the death penalty, which also involves killing a human being. Isn’t that an inconsistency? Not according to Feser, because of the key word I mentioned above—innocent. Assuming a fetus is human, it has committed no crime, because it hasn’t intentionally done anything wrong (you could argue absorbing nutrients from its mother is sort of a theft, but the fetus had no choice in the matter, as it can’t stop doing that even if it wanted to). Thus, a fetus is innocent, whereas grown criminals presumably aren’t, and thus, since they are not innocent, it is morally licit to kill them. Again, we don’t have time to get into that branch of ethics here, so let’s concede it to Feser for the purposes of this entry. Secondly, even taking that into account, some philosophers, particularly “consequentialists”, might argue it is permissible to kill innocent humans in some circumstances—in this case, a mother’s right to bodily autonomy outweighs the right of an innocent fetus to life, which it gains only after it’s born (and no longer needs someone else’s body to live). Feser considers consequentialism to be an abominable ethical system; in his preferred “natural law” theory, certain actions (killing an innocent rational animal, in this case) are absolutely condemned under all circumstances no matter what.

Yet again, there’s not enough room here to closely compare consequentialism and natural law theory, so we can concede even this point to Feser as well. In this blog entry, and in The Unnecessary Science itself, I will argue that a thorough Aristotelian analysis implies that zygotes and fetuses, before birth and the severing of the umbilical cord, do not actually possess the potential to reason and thus do not possess the Essence of “rational animality.”

First let’s take a closer look at Feser’s anti-abortion argument in The Last Superstition. He states,

…the features essential to human beings…being able to take in nutrients…to think, and so forth—are not fully developed until well after conception. But that doesn’t mean that they aren’t there…Rationality, locomotion, nutrition, and the like are present even at conception…as inherent potentialities…[this] doesn’t even mean “potential” in the sense in which a rubber ball might potentially be melted down and made into something else, e.g. an eraser. It means ‘potential’ in the sense of a capacity that an entity already has within it by virtue of its nature or essence, as a rubber ball qua rubber ball has the potential to roll down a hill even when it is locked in a cabinet somewhere. And in this sense a zygote has the potentiality for or “directedness toward” the actual exercise of reasoning…that a rubber ball doesn’t have, that a sperm or egg considered by themselves don’t have.[5]

The most obvious problem with Feser’s argument, in my view, comes with the example he uses right at the very end of that paragraph. How, precisely, could a zygote be much more “directed toward” becoming a rational animal than an individual egg or sperm cell could? This might sound strange to you and me, dear reader, but it shouldn’t sound strange to Dr. Feser. As he describes elsewhere in The Last Superstition (which I critique very heavily and at much more length in the sections on gay marriage in The Unnecessary Science), one of his major hobbyhorses is the idea that our sexual faculties “are directed towards” the production of more human beings. If that really were the case, then every egg cell, at least, would be an actual human being. Feser himself would say that the only reason eggs exist is to create new human beings; if we didn’t have sex (say we reproduced by budding or parthenogenesis), we wouldn’t have those egg cells. That means the egg’s final cause (another bit of Aristotelian jargon, explained at length in “The Problems of Pure Act”) is to become a human being, which also means it is “directed towards” human rational activity, it just “hasn’t yet fully realized that inherent potentiality.” The only thing the little egg needs to realize that potential is a little help from a little sperm, followed by nine months in mommy’s tummy. What makes the egg’s situation metaphysically different from the zygote’s? The only meaningful distinction between an unfertilized and fertilized egg, in terms of the potentialities towards which they are directed, seems to be the split second when the sperm hits the egg. Unless Feser can provide some account of why that exact moment represents a tectonic shift in the “directedness” of the egg cell, he would be forced to concede that a mere unfertilized egg is an “actual human, just one waiting to actualize its potentials” in the same way a zygote is.

Might that tectonic shift be a matter of chromosomes? One of the arguments for a zygote having its own distinct Form (that is to say, being a unique human being instead of a mere part of one, like some skin cells) is that it has its own distinct set of genes, different from those of its mother. The egg carries an X chromosome from the mother, and the sperm carries either the father’s X or Y chromosome; when they come together, the resulting girl (if an X-sperm created an XX zygote) or boy (for an XY zygote) would have a distinct genetic blueprint that differs from both of the parents. But on closer thought, this is not the whole story. Both eggs and sperm have distinct “blueprints” by themselves. There are always slight variations in the chromosomes of the sperm and eggs created by the father and mother—these gametes are never just clones or identical templates, so to speak, the way cells of other body parts are. Even without a background in biology, this can be easily understood by thinking about siblings. If every egg cell and every sperm were exactly alike, every child of a single couple would be identical to his or her brothers or sisters, because the exact same chromosomes would be creating them. In reality, of course, that’s not the case—except for identical twins (which are two people made from a single egg), there are always little differences among fraternal siblings. This is proof that each individual egg and sperm has a slightly different set of genes, which means they really do possess genetically distinct Forms, in the sense of being distinguished and individuated from others of their general type. Given their “directedness towards” becoming human beings, they would therefore seem to be actual human beings in the same sense a zygote is. But even Feser would admit this would be absurd.

Similar problems arise with Feser’s conflation of a certain substance being “intrinsically directed towards” a certain thing (in this case, a rational animal, or more specifically, reasoning at some point in the future) and actually being that thing. To again riff off of one of his favorite examples, imagine a glass of water sitting at room temperature. That water is “intrinsically directed” towards becoming ice at cold temperatures. There is something inherent to water that gives it the “potential” to be cold and solid—if it were to remain a liquid below 0 degrees Celsius, or turn into violets, or explode or anything like that, it would not really have the Form of water and therefore would not actually be a sample of water. But the fact that a glass of water has “iciness” as a potentiality does not mean it actually is a block of ice until the temperature has lowered and it has actually frozen.

Imagine how silly it would be if you asked a waiter at a restaurant for some ice in your lemonade and he instead brought you a glass of water along with it, his excuse being, “well, this water is an actual block of ice, just one that hasn’t fully realized its inherent potentials.” I somehow suspect even Edward Feser would have a tough time tipping the guy extra for being an astute Aristotelian. Unhappily for the pro-lifers, the same reasoning applies to zygotes. There may be something intrinsic, a potentiality or blueprint “directed towards” rationality in a zygote, but only in the same sense that there is something intrinsic, a potentiality or blueprint “directed towards” ice in water. Until that potential is actually realized, it seems as silly to treat a zygote as an actual human being as it would be for a waiter to treat a glass of lukewarm water as an actual block of ice.

Aristotle’s doctrines of “primary actuality” and “secondary actuality” are of little help to Feser here. Earlier in The Last Superstition, Feser describes the distinction as such:

Since you are a human being, you are a rational animal; because you are a rational animal, you have the power or faculty of speech; and because you have this power, you sometimes exercise it and speak. Your actually having the power of speech flows from your actually being a rational animal; it is a ‘secondary actuality’ relative to your being a rational animal, which is a ‘primary actuality.’

What this means is that “the zygote, given its nature or form, has rationality as a ‘primary actuality’ even if not yet as a ‘secondary actuality.’”[6]

The key phrase there is “given its form.” We can agree that a zygote has a “primary actuality” of rationality only if we agree it is an actually rational animal in the first place. But as the examples given above should hopefully show, it is far from obvious that a precursor to a rational animal actually is a rational animal itself. That being the case, if the zygote possesses a different Form, even if it has the potential to become a rational animal, it does not in and of itself have rationality as a primary actuality. The example of the block of ice comes to mind again: Ice has the “primary actuality” of being cold and solid, and also has the secondary actuality—that is to say, an ability that flows from its primary actuality but is not necessarily always expressed—to cool a drink. Any block of ice will have this power even if it isn’t in a glass of lemonade. But it must be frozen into a proper block first—a glass of lukewarm water does not have the primary actuality of iciness and therefore no secondary actuality of a capacity to cool. Only when the water has been given the Form of ice through freezing, and only when a zygote has been given the Form of a rational animal (a soul) through gestation and birth, can either be said to actually be icy or rational.

Admittedly, there may be a disanalogy here: The zygote is a living thing, whereas a glass of water would be an inanimate object. Doesn’t the zygote have some inherent principle of growth and operation that makes it different from water, which only has a principle of operation and no inherent tendencies of growth or autonomous behavior? This is most likely the argument that the Thomists John Haldane and Patrick Lee would use, and Feser relies quite heavily on their analysis to buttress the ones he gives in Aquinas and The Last Superstition. On closer inspection, however, a sharp-eyed reader can see that Haldane and Lee’s arguments are not entirely airtight either.

The pair tells us that “the case of foetal development involves an intrinsic principle of natural change in a single substance. This change involves the internally directed growth toward a more mature stage of a human organism, and so the cause of this change, the embryo itself, is already human.” According to the authors, an embryo can be said to be “internally directed” thanks to its “epigenetic primordium.” The term derives from two words: “the ‘primordium’ [is] ‘the beginning or first discernible indication of an organ or structure’, while ‘epigenetic’ is used to mean ‘being developed out of without being preformed.’”[7] Since this primordium—the first discernible indications of organs which will gradually develop as part of a final cause—is present only after the sperm hits the egg, that moment of conception can be considered the moment at which a new human being is formed (or Formed, or ensouled, whichever you like).

But at the same time, Lee and Haldane also mention, “In mammals, even in the unfertilized ovum, there is already an ‘animal’ pole (from which the nervous system and eyes develop) and a ‘vegetal’ pole (from which the future ‘lower’ organs and the gut develop).”[8] This would seem to fulfill the criteria of an epigenetic primordium: The first discernible indications of organs, which are not pre-formed but will develop naturally after they have made contact with a sperm cell. Since one of these blueprints, so to speak, is of the nervous system, the individual egg could be said to be “directed towards” the rationality associated with that nervous system. That would mean an unfertilized egg would be an actual human in almost the same way Feser, Haldane, and Lee say a zygote is an actual human. But, again, this seems absurd.

Absurd, they might say, because an unfertilized egg contains no inherent principle of growth. An egg without a sperm attached to it will just sit there until it’s eventually flushed out (a process, I hear, that causes quite some inconvenience every month). An egg combined with a sperm, however, has its own unique genetic blueprint and something that makes it start to divide and grow in size. Since

…there is no extrinsic agent responsible for the regular, complex development, then the obvious conclusion is that the cause of the process is…the embryo itself. But in that case the process is not an extrinsic formation, but is an instance of growth or maturation, i.e., the active self-development of a whole, though immature organism which is already a member of the species, the mature stage of which it is developing toward.[9]

This would be convincing…if Haldane and Lee hadn’t forgotten about a very important extrinsic actress indeed: The zygote’s mother. A zygote is not really like an adult cat or dog or squirrel or other animal Feser uses as examples of natural substances or animal souls.[10] A grown, independent animal is capable of taking in nutrients, reproducing, and carrying out all its other behaviors (barking, meowing, burying nuts, whatever) of its own volition and does not necessarily rely on any other entity to do these things for it. In other words, these animals operate entirely according to their intrinsic principles, though bad fortune (such as predators or local famine) can frustrate these principles. A zygote, on the other hand, relies entirely on its mother’s body to carry out its distinctive operations. It first must attach itself to the uterus before it will grow, and none of the “epigenetic primordia” it contains will ever actually become the organs (much less the rationality) they “point towards” unless the mother’s body provides it nutrients and proper direction 24/7 for nine months.

In a meaningful sense, while a zygote may be “directed towards” growing in that it possesses a certain genetic blueprint conducive to that end, the little thing is not actually growing itself. Rather, the mother’s body is actively stuffing nutrients into it and moving the process along. It does not seem to be an intrinsic principle of the zygote that spurs its growth, but the extrinsic action of the body in which it is found. If zygotes really did possess some intrinsic principle as Haldane, Lee, and undoubtedly Feser hold, they would be able to nourish themselves and grow into little rational animals entirely on their own. But as we all know, this is impossible—a zygote separated from its uterus in some way will quickly wither and die. Even if there were some way to keep it alive—an artificial womb from Brave New World, for instance—that womb would still be providing nutrients and direction for the organism’s growth. There would still be an extrinsic, external agent responsible for the changes the zygote experiences, it would just be an artificial, science-fiction agent rather than a natural mother.

Neither is it any good to say the “blueprint”, the full set of chromosomes (with two being XX or XY) contained in a zygote, constitutes the “intrinsic principle”, at least if merely having an “intrinsic principle” that is not yet fully realized makes an entity of the same type as a fully-grown example. As mentioned earlier, an unfertilized egg contains a sort of blueprint for the nervous system all on its own; it merely needs a handsome, dashing sperm to complete the blueprint and begin the next step of the process towards which it is “directed.” If the Thomist Trio wishes to say a zygote is an actual rational animal that is just waiting to realize its potentials after nine months and with the aid of many nutrients, we can say an unfertilized egg is an actual rational animal that is just waiting to realize its potential with the aid of a single sperm, nine months, and many nutrients. As Feser might say, an incomplete or damaged blueprint is still a blueprint, and the half-chromosomal-load of a human egg certainly counts as an incomplete blueprint.

Equally problematic is the word “blueprint,” as a blueprint itself contains no intrinsic principle that can be actualized without the aid of an external actor. Imagine you give a builder the blueprints for a house. It would be silly for either you or him to act as if the blueprint itself were an actual (if incomplete) house, because the blueprint is merely providing a set of instructions. The builder must provide the materials and do the work of building a house, even if the blueprints are directing him in a sense. By the same token, the zygote’s distinct chromosomes serve as a blueprint for a unique human being, but that human being does not exist yet. Only when enough time has passed and the mother’s body has provided enough nutrients (she is the builder in this case) can we really say a new human has come into being.

Under Feser’s own lights, then, a consistently Aristotelian outlook makes abortion more, not less, justifiable. When we accept three important Aristotelian views (Realism [that things in the world actually have mind-independent Essences or Forms], the idea that a thing’s potentialities tell us what Form it instantiates, and the idea that substantial Form is determined by an inherent principle of growth), we find that since a zygote lacks inherent (as opposed to externally-powered) growth, it does not truly possess the potentialities associated with the Form of a human, and thus is not truly a human. Consequently, it does not possess the right to life that all humans do. I would say that’s a hefty metaphysical argument pro-choicers could add to their arsenal.

We are then left with one more problem: Where, precisely do we draw the line between a merely proto-human zygote and a fully human child? It is a very important question, at least to guys like Feser: If zygotes really aren’t rational animals, then it would be acceptable to destroy them, but since children really are rational animals (just immature ones), we can’t simply kill them. Aquinas thought that growing proto-humans took on the full Form of Humanity (that is to say, their souls) at about forty days into development, but this was due to the primitive knowledge of embryology available to him at the time. Given that the Form of Man is being a rational animal, an ethics based on Forms seems to entail that any human-seeming organism would only be truly human once it began to demonstrate rational activity. But as we all know, babies aren’t very rational, so this would imply the absurd conclusion that infants and toddlers weren’t really human (and that abusing or killing them would be less morally severe).[11]

In order to avoid this conclusion, Feser, Lee, and Haldane had to resort to the concept of “epigenetic primordia” and the assertion that a zygote containing a blueprint directed towards being a rational animal (eventually) counted as having an intrinsic principle—making it an example of an actual rational animal, merely an immature one. But, fortunately, even under my own riff on an Aristotelian framework, where zygotes are not rational animals, it is possible for me to maintain that newborns are fully human and deserving of rights.

The key lies in the intrinsic principle of growth and behavior mentioned earlier in relation to animals. We have established that zygotes do not possess this principle because their growth and development is dictated entirely by an external actor (the mother’s body). However, when a baby leaves the womb, loses the umbilical cord, and takes the first breath out in the world, he or she gains that intrinsic principle. Yes, it is true that babies and toddlers are just about completely helpless, and that they need to be fed and cleaned by external actors to avoid starving to death (which obviously entails they are entirely un-rational). But even though babies are helpless, they are not as helpless as a zygote, embryo, or fetus. Babies are capable of manifesting behaviors all on their own and exerting some control over their environment, even if only in a very thin sense of crying loudly to get someone to notice them. Their independent actions evince a sort of intrinsic principle influencing the world around them, analogous to the way a dog barking or a cat meowing for food evinces an intrinsic, independently-operating behavior influencing the world, which tells us those things are dogs or cats. A proto-human, however, cannot influence anything in that way. Even a developed fetus, no matter how much it kicks or rolls around in the womb, cannot change the chemicals of the uterus surrounding it, nor how many nutrients the uterus provides it. We can say the fetus’s principle of growth is extrinsic, located in the mother’s body, while the newborn’s principle of growth is intrinsic, rooted in its own behaviors (even if they only serve to get others to feed it). Since the Thomists require a “blueprint” pointing towards rationality (which babies certainly have, given they’ll grow to be at least somewhat rational in a few years), and an intrinsic principle propelling growth towards that goal, babies fulfill both conditions, while zygotes have only the former. 
So it is demonstrated that we can justify abortion on Aristotelian-Thomistic grounds without necessarily condoning infanticide.

As always, I hope you’ve enjoyed this piece – and if you did, I hope you’ll consider buying The Unnecessary Science, where you’ll find this argument expanded on, as well as many others that will prove massively useful to anyone interested in refuting “natural law theory,” which has taken on a great deal of contemporary importance thanks to the preponderance of right-wing Catholics such as Clarence Thomas and, soon, Amy Barrett on the United States Supreme Court. You can buy a physical copy here.

If you’d prefer an ebook, don’t worry, the ebook version will be out before the end of the month—please look forward to it! You’ll have everything you need to send Thomists packing at the touch of a button on your computer or even a few swipes of your smartphone if you have Kindle, Kobo or Nook!

NOTES

[1] As is the case in the text of The Unnecessary Science, I reference Feser’s work with acronyms since I cite them so much. Here, The Last Superstition is TLS and Aquinas: A Beginner’s Guide is AQ.

[2]TLS, 31-35, AQ, 16-24. Essence, Form, and Nature each connote slightly different things in the most technical usage, but the distinction isn’t important in this context, so here we will use the terms interchangeably. I should note here that I am capitalizing all these terms in my own text, though leaving them uncapitalized when directly quoting from other authors, to differentiate the specifically philosophical terms from the common verbs and adjectives which denote different things.

[3] Ibid.

[4] TLS, 50-55.

[5] TLS, 129.

[6] TLS, 56.

[7] John Haldane and Patrick Lee, “Rational Souls and the Beginning of Life (A Reply to Robert Pasnau),” Philosophy 78, no. 4 (2003), 537.

[8] Ibid.

[9] John Haldane and Patrick Lee, “Aquinas on Human Ensoulment, Abortion and the Value of Life,” Philosophy 78, no. 2 (2003), 271.

[10] TLS, 121. The specific term Feser uses is “sensory soul,” but that’s not relevant to the discussion at hand.

[11] AQ, 141.

 

 


Stay in touch! Like A Tippling Philosopher on Facebook:

A Tippling Philosopher

You can also buy me a cuppa. Or buy some of my awesome ATP merchandise! Please… It justifies me continuing to do this!



September 4, 2020

Having posted the Philpapers survey results – the biggest ever survey of philosophers, conducted in 2009 – I found that several readers were not aware of it (the reason for re-communicating it) and were unsure as to what some of the questions meant. I offered to do a series on them, so here it is – Philosophy 101 (Philpapers induced). I will go down the questions in order, explaining the terms and the question whilst also giving some context within the discipline of philosophy of religion.

This is the twelfth post, after…

#1 – a priori

#2 – Abstract objects – Platonism or nominalism?

#3 – Aesthetic value: objective or subjective

#4 – Analytic-Synthetic Distinction

#5 – Epistemic justification: internalism or externalism?

#6 – External world: idealism, skepticism, or non-skeptical realism?

#7 – Free will: compatibilism, libertarianism, or no free will?

#8 – Belief in God: theism or atheism?

#9 – Knowledge claims: contextualism, relativism, or invariantism?

#10 – Knowledge: Empiricism or Rationalism?

#11 – Laws of nature: Humean or non-Humean?

The question for this post is: logic: classical or non-classical? Here are the results:

Logic: classical or non-classical?

Accept or lean toward: classical 480 / 931 (51.6%)
Other 308 / 931 (33.1%)
Accept or lean toward: non-classical 143 / 931 (15.4%)

I won’t be explaining all the different subsets of logic here as that would entail multiple book-length volumes. We should start this piece by defining what logic is. Logic is a branch of both mathematics and philosophy and is, broadly, the appraisal of arguments in the context of a formal or informal language taken together with a deductive system and/or model-theoretic semantics. There is no settled definition of logic – hence, in part, this piece. Wikipedia sums it up in simple terms:

Logic (from Greek: λογική, logikḗ, ‘possessed of reason, intellectual, dialectical, argumentative’)[1][2] is the systematic study of valid rules of inference, i.e. the relations that lead to the acceptance of one proposition (the conclusion) on the basis of a set of other propositions (premises). More broadly, logic is the analysis and appraisal of arguments.[3]

There is no universal agreement as to the exact definition and boundaries of logic, hence the issue still remains one of the main subjects of research and debates in the field of philosophy of logic (see § Rival conceptions).[4][5][6] However, it has traditionally included the classification of arguments; the systematic exposition of the logical forms; the validity and soundness of deductive reasoning; the strength of inductive reasoning; the study of formal proofs and inference (including paradoxes and fallacies); and the study of syntax and semantics.

A good argument not only possesses validity and soundness (or strength, in induction), but it also avoids circular dependencies, is clearly stated, relevant, and consistent; otherwise it is useless for reasoning and persuasion, and is classified as a fallacy.[7]

There is a wide range of logics – a plurality of them. In philosophy, language and semantics are very closely related to logic, and classical logic looks to codify this relationship such that it is truth-functional. It’s all about deductive validity.

Classical Logic

Classical Logic (or Standard Logic) is not actually derived from the classical period but came out of the 19th and 20th centuries (starting with Frege, but including an awful lot of other logicians).

Essentially, inferences have premises and conclusions, but language is a bit irregular, so we can get into a spot of bother.

There are different views on what natural language (as in, what people speak, as opposed to formal languages designed for a functional purpose) is and how it is underwritten. Natural languages are very irregular. The question here is about the relationship between natural and formal languages (such as the propositional calculus, which seeks to set out propositions and conclusions in an almost mathematical manner using connectives). Some might say that natural languages have underlying logical forms that can be displayed in formal languages. One might say that a good declarative sentence in natural language contains propositions that can in turn be reflected in formal language. Others maintain that natural language’s vagueness means it should be replaced with formal languages in a very regimented way. In this way, there can be no doubt as to the truth of certain propositions in a context of bivalence (things are either true or false).

This is where such logic shares much in common with mathematics.

Formal languages look to evade ambiguities (amphibolies), such as:

John is married, and Mary is single, or Joe is crazy.

This could be formalised as ((A&B)∨C) or (A&(B∨C)), for example.
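To see that the two readings genuinely come apart, we can brute-force the truth tables (a quick sketch in Python; the variable names are mine, following the sentence above):

```python
from itertools import product

# A = "John is married", B = "Mary is single", C = "Joe is crazy"
reading1 = lambda a, b, c: (a and b) or c   # ((A&B) v C)
reading2 = lambda a, b, c: a and (b or c)   # (A & (B v C))

# Find every valuation on which the two disambiguations disagree.
disagreements = [(a, b, c) for a, b, c in product([True, False], repeat=3)
                 if reading1(a, b, c) != reading2(a, b, c)]
print(disagreements)   # the readings differ exactly when A is false and C is true
```

On two of the eight valuations the sentence comes out true on one reading and false on the other – precisely the ambiguity formal languages are built to evade.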

Modal logic is a form of classical logic where “modal” denotes possibility or necessity, and formal symbols are often utilised to give it structure. However, there are some who maintain that modal logic stretches into non-classical territory, or at least extends classical theories.

The end result of lots of discussion that could be had over language, syntax and semantics is that:

We now introduce a deductive system, D, for our languages. As above, we define an argument to be a non-empty collection of sentences in the formal language, one of which is designated to be the conclusion. If there are any other sentences in the argument, they are its premises.[1]
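The quoted idea – an argument is a set of premises plus a designated conclusion, and it is (semantically) valid when every valuation making the premises true also makes the conclusion true – can be sketched in a few lines of Python. This is a toy model-theoretic check, not the deductive system D itself, and the function name is mine:

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Semantically valid iff no valuation makes every premise
    true while making the conclusion false."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False   # counterexample found
    return True

# Modus ponens over (A, B): from A and A->B, infer B -- valid.
mp = valid([lambda a, b: a, lambda a, b: (not a) or b],
           lambda a, b: b, 2)
# Affirming the consequent: from B and A->B, infer A -- invalid.
ac = valid([lambda a, b: b, lambda a, b: (not a) or b],
           lambda a, b: a, 2)
print(mp, ac)   # True False
```

The invalid argument fails because the valuation A=False, B=True makes both premises true and the conclusion false.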

Classical logic is bound by certain rules:

  1. Law of excluded middle and double negation elimination

  2. Law of noncontradiction, and the principle of explosion

  3. Monotonicity of entailment and idempotency of entailment

  4. Commutativity of conjunction

  5. De Morgan duality: every logical operator is dual to another

While not entailed by the preceding conditions, contemporary discussions of classical logic normally only include propositional and first-order logics.[4][5] In other words, the overwhelming majority of time spent studying classical logic has been spent studying specifically propositional and first-order logic, as opposed to the other forms of classical logic.
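These laws can be checked exhaustively over the two classical truth values – a sketch in Python covering excluded middle, double negation, noncontradiction, commutativity of conjunction, and De Morgan duality:

```python
from itertools import product

vals = [True, False]
pairs = list(product(vals, repeat=2))

# 1. Law of excluded middle and double negation elimination
assert all(p or not p for p in vals)
assert all((not not p) == p for p in vals)

# 2. Law of noncontradiction: ~(P & ~P) holds everywhere
assert all(not (p and not p) for p in vals)

# 4. Commutativity of conjunction
assert all((p and q) == (q and p) for p, q in pairs)

# 5. De Morgan duality: conjunction and disjunction are duals under negation
assert all((not (p and q)) == ((not p) or (not q)) for p, q in pairs)
assert all((not (p or q)) == ((not p) and (not q)) for p, q in pairs)

print("classical laws verified")
```

Because classical propositional logic is bivalent and truth-functional, checking every valuation like this settles each law conclusively.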

As the SEP concludes:

Logic and reasoning go hand in hand. We say that someone has reasoned poorly about something if they have not reasoned logically, or that an argument is bad because it is not logically valid. To date, research has been devoted to exactly just what types of logical systems are appropriate for guiding our reasoning. Traditionally, classical logic has been the logic suggested as the ideal for guiding reasoning (for example, see Quine [1986], Resnik [1996] or Rumfitt [2015]). For this reason, classical logic has often been called “the one right logic”. See Priest [2006a] for a description of how being the best reasoning-guiding logic could make a logic the one right logic.

That classical logic has been given as the answer to which logic ought to guide reasoning is not unexpected. It has rules which are more or less intuitive, and is surprisingly simple for how strong it is. Plus, it is both sound and complete, which is an added bonus. There are some issues, though. As indicated in Section 5, there are certain expressive limitations to classical logic. Thus, much literature has been written challenging this status quo. This literature in general stems from three positions.

Which neatly gets us onto non-classical logic(s).

Non-classical logic

Non-classical logics (or alternative logics) differ from propositional or predicate logic in several ways – often through extensions, deviations, and variations – which then has an impact on how we interpret logical truth and consequence.

Some logics admit more than two truth values (rejecting the bivalence of True and False). In Ł3 logic, for example, you can have T, F and #, where # is an indeterminate proposition: either both true and false, or neither true nor false. This affects how truth tables might look.

“I am bald” can be indeterminate for someone who is perhaps not completely bald, i.e. has a few hairs here and there. We could assign a truth value of # to this. Here, we might return to the vagueness or ambiguity of natural languages.
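For illustration, here is a sketch of how Ł3’s connectives can be modelled in Python, with # represented as 0.5 (the standard Łukasiewicz definitions); on this borderline “bald” case the law of the excluded middle no longer delivers plain truth:

```python
T, I, F = 1.0, 0.5, 0.0   # true, indeterminate (#), false

def neg(p):     return 1 - p
def conj(p, q): return min(p, q)
def disj(p, q): return max(p, q)
def impl(p, q): return min(1.0, 1 - p + q)   # Lukasiewicz conditional

bald = I                       # "I am bald", a borderline case
lem = disj(bald, neg(bald))    # P v ~P
print(lem)                     # 0.5 -- indeterminate, not true
print(disj(T, neg(T)), disj(F, neg(F)))   # 1.0 1.0 -- classical inputs behave classically
```

On determinate inputs the connectives agree with the classical truth tables; only the third value produces the deviant behaviour.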

Non-classical logics include, among others, intuitionistic logic, many-valued logics, fuzzy logic, paraconsistent logic and quantum logic (for a fuller list, see da Costa, Newton (1994), “Schrödinger logics”, Studia Logica 53 (4): 533).

It depends on how you define things, and so some people include extensions of classical logic (such as modal logic) in with properly “deviant” logics.

For these logics, some of the aforementioned constraints do not hold, such as the law of the excluded middle for intuitionistic logic.

Discussion

When I first saw this question, I thought it was a false dichotomy, that you could use different logics for different ends and that one does not ultimately have to be true to the exclusion of the other. Indeed, in looking at the Philpapers survey discussion, the creators had this same problem in mind:

Various respondents wondered how to interpret this question, given that it’s not obvious that there has to be a fact of the matter about whether classical or nonclassical logic is correct. Still, plenty of people think there is a fact of the matter, and those that don’t (including ourselves) seem to have by and large chosen an “other” option.

As the SEP concluded in its article on classical logic:

Thus, much literature has been written challenging this status quo. This literature in general stems from three positions. The first is that classical logic is not reason-guiding because some other single logic is. Examples of this type of argument can be found in Brouwer [1949], Heyting [1956] and Dummett [2000], who argue that intuitionistic logic is correct, and Anderson and Belnap [1975], who argue relevance logic is correct, among many others. Further, some people propose an extension of classical logic which can express the notion of “denumerably infinite” (see Shapiro [1991]). The second objection to the claim that classical logic is the one right logic comes from a different perspective: logical pluralists claim that classical logic is not the (single) one right logic, because more than one logic is right. See Beall and Restall [2006] and Shapiro [2014] for examples of this type of view (see also the entry on logical pluralism). Finally, the last objection to the claim that classical logic is the one right logic is that logic(s) is not reasoning-guiding, and so there is no one right logic.

Suffice it to say that, though classical logic has traditionally been thought of as “the one right logic”, this is not accepted by everyone. An interesting feature of these debates, though, is that they demonstrate clearly the strengths and weaknesses of various logics (including classical logic) when it comes to capturing reasoning.

I think the way this question was answered probably reflects what most philosophers are comfortable with (i.e., what they know) and the fact that there isn’t necessarily a bivalence in the question itself!

 



August 16, 2020

It seems that internet friends and atheist YouTube sensations CosmicSkeptic and Rationality Rules have either read my book at some point or are just hitting a rich vein of Kalam Cosmological Argument (KCA) criticism by mutual awesomeness.

I recently posted about my thoughts concerning CosmicSkeptic’s debate with William Lane Craig, and you can see the three videos I did on the subject here.

Rationality Rules (RR) recently did a couple of videos showing how the KCA and libertarian free will are incompatible, something that features as a strong claim in my book Did God Create the Universe from Nothing? Countering William Lane Craig’s Kalam Cosmological Argument (UK). See my posts here:

Rationality Rules (aka Stephen Woodford) just responded to an apologist defending the coherence of the KCA and libertarian free will in his video here:

I was particularly intrigued by this video, not just because I was conceited enough to think that RR had read my book, but more importantly due to the T-shirt he was wearing: a Fallen Acorn brewery one. Fallen Acorn is my local brewery in Gosport, so named as a metaphor for the fact that it was the resurrection of the previous brewery, the Oakleaf Brewery, which went under. I love the name and what they are doing with their beers. His wearing the T-shirt made me think he was local to me, so I shot him a message asking whether he had read my book and whether he was local, vis-à-vis the T-shirt; he replied immediately. To my disdain, he had not heard of my book (though he promised to grab a copy), but to my joy he is not only local to me but actually designed the brewery’s logo!

The long and the short of it is that, after a brief discussion, we may well do a video together discussing the Kalam and drinking ale, united by a love of good beer and a contempt for rubbish apologetics.

This is a win-win. Tippling AND philosophising. What’s not to like?



