2016-04-16T19:49:42+01:00

I had been thinking, some time back (yes, I know, it hurts). In doing the philpapers-inspired Philosophy 101 series (found here, so far), touching on the questions asked in the largest ever survey of philosophers, I thought I would give some nice, basic factfiles explaining what some of the key philosophers have brought to the philosophical table. We hear so much about Aristotle, Plato, Hume and Descartes, but who the hell are they and what did they think (in a really short, easy-to-digest manner)?

I thought I would go back and start at the beginning. Though Thales is often thought of as the first philosopher, I am going to start with Socrates. Let me know what you think (as ever!).

 

Name: Socrates

Location: Athens

Era: 469-399 BCE

Main area of philosophy: Epistemology (what is knowledge and how do we come by it?)

How do we know: Nothing of his own work survives. We only know about him through his protege, Plato.

Bio:

Son of a stonemason and a midwife, probably followed Dad. Went into the army. Fought a war, did well. Inherited money, retired early to think. Got known around Athens, had a following, got accused of corrupting young minds, sentenced to death by drinking hemlock.

Philosophy stuff:

Socrates is an interesting blokey. He was well-known for asking questions, not necessarily claiming he had a lot of knowledge, but being able to point out that others didn’t by using a dialectical method. This is working things out through discussion, which is kind of what we all do when arguing on the internet, or in person over a pint.

Eg, Socrates might say:

Q Do you think that the gods know everything?

A Yes, they’re gods.

Q Do some gods disagree with others?

A Yes. You know gods, always fighting.

Q So gods disagree about what is right?

A I suppose so.

Q So some gods can sometimes be wrong?

A Er, yeah, I suppose so.

Therefore, sunshine, the gods cannot know everything!

It was with this dialectical method that Socrates became well known for discussing stuff with people. He used this method to examine people and himself. He was famous for believing that “the unexamined life is not worth living”:

1) The only life worth living is the good life

2) I can only live the good life if I know the difference between good and evil

3) These are absolutes, not relative, and can only be discovered through questioning, examining and reasoning

4) Therefore, morality and knowledge are inextricably linked

5) An unquestioning life is one of ignorance without morality

6) An unexamined life is not worth living

A good life, he thought, was achieving peace of mind by doing the right thing, which can only be discovered by examining oneself and others. He saw virtue as the most valued possession – no-one wants to do evil; it makes them feel uncomfortable (we want peace of mind). It all comes down to gaining knowledge. This is a virtuous goal – it is why we exist. The key to this is self-knowledge.

Socrates was interested in love, loyalty, justice, good and evil, amongst other things.

Socrates’ dialectical method, which produced knowledge from a starting point of ignorance – merely questioning – was actually the seed for the inductive method, which became the scientific method. In this way, he set the foundation, not only for Western philosophy, but also for the empirical sciences.

Well done, old chap!

Socrates: “I know nothing except the fact of my ignorance.”

Green Day: “All I know is that I don’t know nuffin’, all I know is that I don’t know nuffin’ now!”

By Dimitris Zarafonitis (Publication) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons
2016-04-03T08:50:22+01:00

I recently posted on the topic of mentalizing deficits and using this as an example of God’s unfairness, and thus arguing that either God does not exist, or he is not all-loving.

I used the term “mentalizing deficit” as it was the term used in the paper to which I was referring. A commenter there, Michael Neville, stated this:

As a high functioning autistic (HFA) I have to ask how “our reality” differs from the reality I see?

I’m not being snarky about this, obviously you believe that autistics perceive reality differently from non-autistics. I know that I have problems with oral communication, emotional expression and recognition, and social interaction that other people don’t have. But are these things connected with reality? For me reality is how the world (assume when I say “world” that I’m including the wider universe) functions, how various parts of the world interact (think ecology writ very large), and how science can interpret the world. Since humans are part of the world, then they (we) are part of reality.

Here, he uses the term “high functioning autistic (HFA)”. Both terms refer to a sort of optimal functioning. One claims a deficit; the other implies that, for there to be high functioning autistics, there must be low functioning autistics, and these, one presumes, are set against neurotypical people.

All this revolves around a norm – a neurotypical person. This, to me, bespeaks a norm by consensus: what your average or typical functioning brain does, or acts like, in most cases. Those that are different, in any clearly discernible way, are therefore not typical.

This doesn’t, at least on face value, mean there is a value judgement involved. There is not, prima facie, a better brain. Better depends on function. I have talked about this in relation to God and perfection. At the moment, these brains are merely different. When we talk of better (and I want to move away from value judgements of autistics for the moment), this requires having a goal. For example, a rugby ball is better for playing rugby than a golf ball. In fact, it was designed for that, or at least evolved into that shape through time.

In the same way, we can say that the human brain evolved as being fit for purpose, over time, to function in the way it does. One could then conclude that certain configurations of the brain are better than others. This would be in terms of survival and sexual reproduction (including sex/mate selection). These are not intrinsic value judgements, but functional value judgements. A rugby ball is not intrinsically better than a golf ball, or vice versa. They are different and can only be assigned value judgements when considering goals.

Let’s apply this to the brain. Any given brain, then, can only be assigned value judgements when considering goals. If the goal is to survive longer, or to produce attractive behaviours, then we could make such judgements (being careful to qualify them only in terms of the goal). In other words, one must really say, to be accurate, that brain A is better than brain B if you want to achieve scenario X.

I am not here to say that autistic brains are better than neurotypical brains or vice versa. Firstly, it probably isn’t helpful. Secondly, it also depends on the goal. Evolutionary pressures and successes are not the only goals open to humanity. We are at a point when such pressures are less obvious, or the old pressures are not necessarily the new ones. We are not fighting the food chain on the Serengeti.

I am sure that in the academic literature and particular disciplines “high functioning” is defined much more precisely than in this arbitrary, generalised sense. Whose reality is better is clearly a subjective question, if senses and feelings are involved. That I feel my life is great in no way enables me to say it is better than that of a comparable (ie socio-demographics, quality of life etc) autistic or, indeed, anyone else (in feeling). My reality is different to an HFA’s, and to anyone else’s. I am my own experience machine, after all.

In order to have meaningful conversations about differences, we do need to be able to label things, and these often sit on a continuum. These are, at a definitional level, arbitrary labels. There are no such things as species in the slowly transitional continua of evolutionary change, but we ascribe to continua labels that split up the line into categorisable chunks. See my writings on the Sorites paradox.

As Michael continued:

Personally I am inspired by music (I’m listening to Ravel’s “Bolero” as I write this). I’ve recently reread The Brothers Karamazov (David McDuff’s translation, though I prefer Constance Garnett’s) and derived considerable pleasure from it. I’m presently reading Joe Abercrombie’s Best Served Cold (the character Friendly is an HFA). While not trained as a scientist I read science books for pleasure and information and to understand the world better. So what parts of reality am I missing?

I cannot, without doing some research, say exactly what Michael is missing, if anything. There will be a difference in certain interpretative qualities. For example, if you watch episode 5 of David Eagleman’s excellent documentary series on the brain

https://www.youtube.com/watch?v=V–a7oHB_T4

you will see the chap with Asperger’s who was unable to read expressions (qua people) and late in life was given some TMS (transcranial magnetic stimulation) as part of an experiment, and then suddenly could. Experimenters are presently unsure as to why this happened, but it did. Having experienced both scenarios, he was able to recognise a lack in his neural and intuitive toolbox. One can claim that this is better for socialisation and interaction, hence why such abilities evolved; but, likewise, it may hinder some in doing certain other jobs and tasks. I remember listening to a BBC Radio 4 programme on autism, and there was a man with AS who was talking about the type of job (I think in nuclear physics, or something) that would have been much harder for a neurotypical person. He staked a claim that many of the great minds who achieved so much in our scientific feats of endeavour were somewhere on the spectrum; that we need to be thankful for the many AS people working hard in certain jobs that the more socially functioning human would be less able to do.

Empathy is often the key area (as it was with the research in my other post) that underlines differences (and this can depend on reading facial cues, etc.). I have taught many children across the spectrum and, with certain children, there has been an obvious lack of empathy compared with other children, with the whole enterprise being a cognitive journey rather than an intuitive one.

On this subject of empathy, Marja Erwin, on the other post, stated:

Lost me at empathy.

Half the time empathy seems to refer to the ability to read minds. Autistic people do have trouble reading allistic people’s minds. Allistic people also have trouble reading autistic people’s minds.

Half the time empathy seems to refer to concern for other people. Autistic people have the same range from compassion to cruelty as allistic people.

Given the double meaning, the suggestion that we lack empathy conflates autism with sociopathy, helps dehumanize us the way our society also dehumanizes sociopaths if they aren’t rich and powerful, helps excuse systemic violence against us, and helps excuse eugenicist hate-groups trying to eliminate us.

That such claims may lead some autistics to claim that they are being dehumanised is a real shame, but it does not take away from the truth of the claim. One theory for the lack of empathetic skills is a dysfunctional mirror neuron set-up. Mirror neurons are those that fire when we see someone else do something, making us feel as though we are doing the action ourselves. This, it is proposed, is the basis of empathy, and allows for the intersubjectivity of putting ourselves in “someone else’s shoes”. The nonconscious brain is doing a lot of work in the background. For certain autistics, this function has to be taken on and learnt cognitively or consciously.

Hopefully, greater understanding of all of our brains won’t lead to dehumanising portions of our society, but will enable better integration and functioning of society as a whole.

To conclude my ramblings, though, the levels of functioning depend upon what the tasks and goals are.

2017-03-30T10:40:23+01:00

Having posted the Philpapers survey results, the biggest ever survey of philosophers conducted in 2009, several readers were not aware of it (the reason for re-communicating it) and were unsure as to what some of the questions meant. I offered to do a series on them, so here it is – Philosophy 101 (Philpapers induced). I will go down the questions in order. I will explain the terms and the question, whilst also giving some context within the discipline of Philosophy of Religion.

This is the ninth post after

#1 – a priori

#2 – Abstract objects – Platonism or nominalism?

#3 – Aesthetic value: objective or subjective

#4 – Analytic-Synthetic Distinction

#5 – Epistemic justification: internalism or externalism?

#6 – External world: idealism, skepticism, or non-skeptical realism?

#7 – Free will: compatibilism, libertarianism, or no free will?

#8 – Belief in God: theism or atheism?

The question for this post is: Knowledge claims: contextualism, relativism, or invariantism? Here are the results:

Accept or lean toward: contextualism 373 / 931 (40.1%)
Accept or lean toward: invariantism 290 / 931 (31.1%)
Other 241 / 931 (25.9%)
Accept or lean toward: relativism 27 / 931 (2.9%)

So, what the hell does this all mean? This might get a little dry, and I will try to parse it all out in normal, understandable English, even if some of the quotes I use will be jargon heavy.

This is, as you might guess, about knowledge claims, and how knowledge can be defined. Let us start with contextualism.

Epistemic Contextualism

Epistemology is the study of knowledge. What it is, how we come by it and so on. Contextualism is defined by the Internet Encyclopedia of Philosophy (IEP) as:

In very general terms, epistemological contextualism maintains that whether one knows is somehow relative to context. Certain features of contexts—features such as the intentions and presuppositions of the members of a conversational context—shape the standards that one must meet in order for one’s beliefs to count as knowledge. This allows for the possibility that different contexts set different epistemic standards, and contextualists invariably maintain that the standards do in fact vary from context to context. In some contexts, the epistemic standards are unusually high, and it is difficult, if not impossible, for our beliefs to count as knowledge in such contexts. In most contexts, however, the epistemic standards are comparatively low, and our beliefs can and often do count as knowledge in these contexts. The primary arguments for epistemological contextualism claim that contextualism best explains our epistemic judgments—it explains why we judge in most contexts that we have knowledge and why we judge in some contexts that we don’t—and that contextualism provides the best solution to puzzles generated by skeptical arguments.

I imagine you just read that several times. What does it mean?

Well, I might make the claim that I know that I have eyes. I see, and my brain gets signals and interprets those as sight, but the impulses come from my eyes.

But hang on, I could be a brain in a vat, or in The Matrix! I don’t know I am not in The Matrix, so I don’t know that I have eyes.

Hmmm.

It seems that the plausibility of my being a brain in a vat, or not, having hands, or not, makes all such claims mutually inconsistent. Contextualists maintain that the meaning of the word know is dependent on its context. One can surmise that the truth of such claims is dependent on the levels that we set, in context. We might call these epistemic standards.

In other words, if I am setting my standards super-high, at 100% indubitable, then all I know is that “I” exist (cogito ergo sum). I cannot prove I am not a brain in a vat, so on these high standards, I do not know I have eyes.

Drop the standards a little (assume that we are not brains in vats, that the world around us exists in some “real” way), and there is some truth to the claim, one could argue, that I have eyes. Usually, our standards are low, and we accept things as being known much more readily. We shelve such high-standards skepticism pragmatically. So in some contexts, a truth claim X is true, whilst in others it is false. It doesn’t have a universal truth.

I could get very technical here, and go into some funky detail, but I will leave that to you. Go check out the IEP or other sources for more detail.

Epistemic/Epistemological Relativism

Harvey Siegel defines such relativism as follows:

Epistemological relativism may be defined as the view that knowledge (and/or truth or justification) is relative – to time, to place, to society, to culture, to historical epoch, to conceptual scheme or framework, or to personal training or conviction – in that what counts as knowledge (or as true or justified) depends upon the value of one or more of these variables. According to the relativist, knowledge is relative in this way because different cultures, societies, epochs, etc. accept different sets of background principles, criteria, and/or standards of evaluation for knowledge-claims, and there is no neutral way of choosing between these alternative sets of standards. So the relativist’s basic thesis is that a claim’s status as knowledge (and/or the truth or rational justifiability of such knowledge-claims) is relative to the standards used in evaluating such claims; and (further) that such alternative standards cannot themselves be neutrally evaluated in terms of some fair, encompassing meta-standard.

This sounds “same same but different” to contextualism. There is similarity, sure, but the context now becomes entire frameworks of variables, such as culture and norms, as opposed to epistemic (skeptical) standards, in simple terms. As Wikipedia states of factual relativism:

Yves Winkin, a Belgian professor of communications, responded to a popular trial in which two witnesses gave contradicting testimony by telling the newspaper Le Soir that “There is no transcendent truth. […] It is not surprising that these two people, representing two very different professional universes, should each set forth a different truth. Having said that, I think that, in this context of public responsibility, the commission can only proceed as it does.”[6]

The basis of it is that there is no way to establish that your set of standards is superior to any other person’s, and that in a pluralist world this presents a problem. Thus the truth of any claim is dependent upon those (culturally derived) frameworks.

This has an effect on scientific claims because, some claim, these are themselves claims based on the social and cultural contexts out of which they emerge. Indeed, Stephen Hawking has recently proposed model-dependent realism, which bears some striking resemblances to such relativism. As Wikipedia states:

Model-dependent realism is a view of scientific inquiry that focuses on the role of scientific models of phenomena.[1] It claims reality should be interpreted based upon these models, and where several models overlap in describing a particular subject, multiple, equally valid, realities exist. It claims that it is meaningless to talk about the “true reality” of a model as we can never be absolutely certain of anything. The only meaningful thing is the usefulness of the model.[2] The term “model-dependent realism” was coined by Stephen Hawking and Leonard Mlodinow in their 2010 book, The Grand Design.[3]

One of the problems, as I see it here, is as follows. Imagine a community, A. A know proposition X. I can assess them, and evaluate their knowledge. A are justified, as a community with their own baggage, in believing X. But hang on: if A really are justified in believing X, and I see and evaluate that justification, then it must follow that I should normatively believe X, if that justification holds. In other words, justifications, if they are based on sound logic, must hold irrespective of community. To me, this presents a problem. If we are to rely on sound logic, then surely this must cut through relativistic contexts?

Epistemic Invariantism

For contextualists, the knowledge claim depends on the utterance of the proposition. In what context was it “spoken”? John MacFarlane puts it this way:

Local Invariantism is rejected by philosophers who take “know” to be an indexical. An indexical is a word whose content (its contribution to the propositions expressed by sentences of which it is a part) is determined in part by features of the context. A paradigm is “today,” which denotes the day on which it is uttered.

This context will determine the outcome of the discussion. As Alexander Dinges states:

Epistemic invariantism, or invariantism for short, is the position that the proposition expressed by knowledge sentences does not vary with the epistemic standard of the context in which these sentences can be used. At least one of the major challenges for invariantism is to explain our intuitions about scenarios such as the so-called bank cases. These cases elicit intuitions to the effect that the truth-value of knowledge sentences varies with the epistemic standard of the context in which these sentences can be used.

There are no easy or particularly interesting explanations about the intricacies of invariantism, and I am not going to get technical here. That is not the purpose of this piece. As the SEP explains:

A utters, “Pretzels are tasty”, and B utters, “Pretzels are not tasty”. While the semantic invariantist (for whom the truth-value of taste predications is in no way context sensitive) will insist that the above exchange constitutes a genuine disagreement about whether pretzels are tasty and that at least one party is wrong, contextualists and truth-relativists have the prima facie advantageous resources to avoid the result that at least one party to the apparent disagreement has made a mistake.

But there are different flavours of the position. In fact, let philpapers themselves sum the whole thing up:

Epistemic contextualism is primarily a semantic thesis about the meaning of the word “knows” and its cognates. Invariantism, which is the more traditional view, holds that the truth or falsity of sentences like “Mary knows that the bank is open on Saturday” does not shift from context to context. Contextualists, however, argue that such a sentence can be true in one context but not another. A typical model here is the case of indexical expressions, like “I” or “here.” My utterance of “I am a president” can be false while Obama’s is true. Some contextualists have argued that this can solve skepticism in a satisfactory way: e.g. “I know I have hands” is true out on the street but false in the philosophy classroom, where the context raises the standards for knowledge.  Attempting to capture some of the same phenomena as contextualism, various forms of invariantism involving “pragmatic encroachment” have been developed.  These include interest-relative invariantism, on which a subject’s interests make a difference to whether they know a proposition, and related forms of subject-sensitive invariantism [SSI].

As mentioned, it gets pretty muddy around here. For example, a typical definition for SSI might be: Subject sensitive invariantism is the view that whether a subject knows depends on what is at stake for that subject: the truth-value of a knowledge-attribution is sensitive to the subject’s practical interests. Word salad. Suffice to say that this version of invariantism is slightly more charitable than a classic invariantism, since there is some movement around the subject, in a sort of contextual manner.

Anyway, that’s enough to get you started. Confused? Yup, you probably should be; bamboozled, even. As the SEP states (where SA refers to a Skeptical Argument – brain in a vat as above, with the eyes):

…if you present a group of subjects with SA, for instance, and ask them whether the conclusion contradicts an ordinary claim to know such a thing, some will say ‘yes’, and some will say ‘no’. If contextualism turns out to be true, then many are blind to that, and so on. So, whoever turns out to be right, the contextualist or the ‘invariantist’, a substantial portion of ordinary speakers are afflicted by “semantic blindness” (Hawthorne 2004, 107). ‘Bamboozlement’ is something we are stuck with either way.

 

2016-02-20T13:36:19+01:00

Due to this post being lost in the handover from SIN to Patheos, I am reposting it for the records, to add to the list of other philpapers ones. This one is shorter than many of the others.

So having posted the Philpapers survey results, the biggest ever survey of philosophers conducted in 2009, several readers were not aware of it (the reason for re-communicating it) and were unsure as to what some of the questions meant. I offered to do a series on them, so here it is – Philosophy 101 (Philpapers induced). I will go down the questions in order. I will explain the terms and the question, whilst also giving some context within the discipline of Philosophy of Religion.

The first question is “a priori knowledge: yes or no?”

a priori

As one can guess from the use of the word for ‘prior’, this is knowledge that is ‘prior to’ evidence (or experience). In other words, it does not require evidence to justify it as knowledge, in effect being self-evident. There are questions as to whether such knowledge can be defeated by empirical evidence, and whether it must be necessary (or knowledge in all possible worlds). Necessary propositions cannot be false, such as ‘all sisters are female’. There is certainly a large element of semantics and definitional language here. This includes tautologies such as ‘all bachelors are unmarried’, which is true by definition (in a slightly different manner to the previous proposition).

This might lead one on to claim that language is a descriptive layer added on to observed phenomena (over time).

There is certainly an element of intuition with a priori knowledge. Rationalists are seen as the set of philosophers who adhere to the coherence of a priori knowledge.

Maths, for example, seems to have elements of a priori justification. That 6+4=10 is intuitively rational. Deductive arguments in logic also assume this. Whether intuition is enough to justify knowledge is, it appears, the nub of this philpapers question. Are logic and maths, for example, descriptive languages applied to reality after observing reality? Or are maths and logic conceptually pre-existing phenomena which we, as humans, merely access?

Rationalists can be seen as opposing empiricists in that they argue for the primacy of thought, ideas, concepts and a priori knowledge, whereas empiricists see knowledge as dependent on observation and evidence.

The Stanford Encyclopedia of Philosophy sums things up well, as ever, in defining the argument between rationalists and empiricists:

Rationalists have typically thought that we can be a priori justified, and even know, things about the world, and empiricists have denied this. Now if the world includes abstract entities like numbers and propositions, then some rationalists, and even some empiricists, will hold that we can know a priori things about the existence and nature of these entities (though the empiricists might have a different view about what it is to be an abstract entity). However, rationalists like BonJour (1998) will insist that we can also know a priori things about the natural world. For instance, we can know a priori that no object can be red and green all over at the same time and in the same respects, that no object can be wholly in two distinct places at the same time, and (perhaps) that backward causation is impossible. They will claim that this is knowledge of the nature of reality and will be true of any object, or event, that exists.

One might grant this claim and at the same time point out that it does not give us knowledge of the existence of things, events, and states of affairs but only knowledge of what they must be like if they exist. We only know that there are objects and events in space and time by experiencing them, even if we can know a priori certain things about the distribution of colors on their surfaces, how many places they can be in at any given time, and whether a later event can cause an earlier one.

It does, obviously, depend on how you define knowledge. In a Cartesian sense, the only thing we can indubitably know is cogito ergo sum, that “I” exist. Everything else has an element of doubt. Perhaps we are brains in vats. Our minds are experiencing machines, and in a sense, all experience of the world is sense data. Does this include logic and the things that are often deemed a priori?

There are many objections concerning these ideas, and counterpoints to those objections (defending the notion that reason, or reflection alone, can justify knowledge); if you want a deeper analysis of the world of a priori, then hit the link above.

It would be rude not to include the other ‘a’s, in terms of knowledge, here.

A posteriori

Simply put, empirical evidence (meaning ‘from the later’). There, that was easy. This is empirical data as well as (or which indeed is) sense experience. When we talk about logical syllogisms, deductive arguments apply more to a priori style propositions, whilst inductive arguments apply a posteriori.

Eg

1) All men are mortal

2) Socrates is a man

3) Therefore, Socrates is mortal

Is deductive a priori, whilst:

1) All swans that I have seen are white

2) Therefore, all swans are white

Or

1) All swans I have seen are white

2) Therefore, in all probability, the next swan I will see will be white

are inductive, a posteriori arguments.

Now, we could confuse matters here by claiming that the first few premises of the deductive, a priori argument, are in fact inductive conclusions such that:

1) All men that I have seen are mortal

2) Socrates is, according to my sense data, a man

3) Therefore, Socrates is mortal

Which opens a whole can of worms. Perhaps this then leads us to conclude that all knowledge is, at base, inductive…?

A fortiori

This term is less well-used. An argumentum a fortiori denotes an argument “from [the] stronger [reason]”, which usually entails the use of probability. When faced with two propositions, arguing for the one which is probabilistically more likely would be such an argument. Using Bayes’s Theorem would be an obvious method to employ in a fortiori arguments. Examples here.
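For what it’s worth, here is that Bayesian machinery in its comparative (odds) form – a minimal sketch, where H1 and H2 stand for the two rival propositions and E for the shared evidence (generic placeholders, nothing specific to any particular argument):

P(H1 | E) / P(H2 | E) = [P(E | H1) / P(E | H2)] × [P(H1) / P(H2)]

If the left-hand ratio comes out greater than 1, then H1 is the more probable proposition given the evidence, and arguing for H1 over H2 would be the a fortiori move.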

Just finally, let me know if this is useful, and spark a debate about whether you think a priori makes sense, or whether our foundations of knowledge are a posteriori.

RELATED POSTS:

#1 – a priori

#2 – Abstract objects – Platonism or nominalism?

#3 – Aesthetic value: objective or subjective

#4 – Analytic-Synthetic Distinction

#5 – Epistemic justification: internalism or externalism?

#6 – External world: idealism, skepticism, or non-skeptical realism?

#7 – Free will: compatibilism, libertarianism, or no free will?

#8 – Belief in God: theism or atheism?

 



2016-02-19T02:10:52+01:00

Patheos will be down for maintenance for most of today, so I will give you the back catalogue of my philpapers philosophical explanations as I am presently working on my new one, which I should have written by the end of tomorrow.

Having posted the Philpapers survey results, the biggest ever survey of philosophers conducted in 2009, several readers were not aware of it (the reason for re-communicating it) and were unsure as to what some of the questions meant. I offered to do a series on them, so here it is – Philosophy 101 (Philpapers induced). I will go down the questions in order. I will explain the terms and the question, whilst also giving some context within the discipline of Philosophy of Religion.

Here are the completed ones so far:

#1 – a priori

#2 – Abstract objects – Platonism or nominalism?

#3 – Aesthetic value: objective or subjective

#4 – Analytic-Synthetic Distinction

#5 – Epistemic justification: internalism or externalism?

#6 – External world: idealism, skepticism, or non-skeptical realism?

#7 – Free will: compatibilism, libertarianism, or no free will?

#8 – Belief in God: theism or atheism?



2016-01-17T17:43:57+01:00

I am soon to release a book on the Kalam Cosmological Argument which has been long in the pipeline, but is finally ready, short of a foreword. As a result of its imminent creation (er, out of something!), let me remind you of a couple of old posts that deal with a major set of points that feature in the book.

I have, over the years, been a keen objector to the Kalam Cosmological Argument, an argument that apologists like William Lane Craig use to posit the existence of a creator god for the universe.

Having looked at the issue of causality in the last post, I would like to continue to analyse the first premise in the KCA. This objection is connected to the last one in its implications for the KCA. To remind people of the KCA: (more…)

2016-01-06T13:56:30+01:00

The Daily Beast recently reported the following:

According to a Pew Research Report released earlier this year, the percentage of the U.S. population that identifies as Christian has dropped from 78.4 percent in 2007 to 70.6 percent in 2014. Evangelical, Catholic, and mainline Protestant affiliations have all declined.

Meanwhile, 30 percent of Americans ages 18-29 list “none” as their religious affiliation (the figure for all ages is about 23 percent). Nearly 40 percent of Americans who have married since 2010 report that they are in “religiously mixed” marriages, which means that many individuals who profess Christianity are in families where not everyone does.

These changes are taking place for a constellation of reasons: greater secular education (college degrees), multiculturalism, shifting social mores, the secular space of consumer capitalism and celebrity culture, the sexual revolution (including feminism and LGBT equality), legal and constitutional changes (like the banning of prayer in public school, and the finding of a constitutional right to same-sex marriage), the breakdown of the nuclear family, the decline of certain forms of family and group identification, and the association of religion in general with nonsensical and outdated dogmas. The Pew report noted Americans are also changing religions more than in the past, and when they do so, they are more likely to move away from Christianity than toward it.

The idea is that the causal responsibility for the loss of religious fervour in the States is complex, multi-faceted, and takes a lot of thought and analysis. Steven Reiss, in the Huffington Post, talks about these from a psychological perspective:

1. Organized religion versus spirituality

2. Tribalism versus humanitarianism

3. Traditional versus nontraditional families

4. Trust versus loss of confidence in institutions

To which he concludes:

I believe these four factors have played a role in making organized religion less adept at meeting people’s basic desires. That doesn’t mean this will always be so. Religion may change and adapt — as it has before — to better meet our basic human needs.

Whether it will remains an open question.

There are many theories, and many levels of variables and, as mentioned, complexities.

How do such complexities play themselves out? Well, as Jay Michaelson claims in the Daily Beast article:

But no one likes a “constellation of reasons” to explain why the world they grew up in, and the values they cherish, seem to be slipping away. Enter the scapegoat: the war on religion, and the persecution of Christianity.

It’s much easier to explain changes by referring to a single, malevolent cause than by having to understand a dozen complex demographic trends. Plus, if Christianity is declining because it’s being attacked, then that decline could be reversed if the attack were successfully repelled. Unlike what is actually happening—a slow, seemingly irrevocable decline in American Christianity—the right’s argument that “religious liberty” is under assault mixes truth and fantasy to provide a simpler, and more palatable, explanation for believers.

Take, as an example, Christmas. The weird idea that there is a “War on Christmas” orchestrated by liberal elites—Starbucks cups in hand—is, on its face, ridiculous, even if it is widely held on the right. Shop clerks saying “Happy Holidays” aren’t causing the de-Christianization of Christmas—they’re effects of it. Roughly half of Americans celebrate Christmas as a cultural, not a religious, holiday: Santa Claus and Christmas trees, not baby Jesus in a manger. So that’s what businesses celebrate. It’s capitalism, not conspiracy.

Unfortunately, even if the war on religion is fictive, the “defense” against it is very real and very harmful. This year alone, 17 states introduced legislation to protect “religious freedom” by exempting not just churches and religious organizations (including bogus ones set up to evade the law) from civil rights laws, domestic violence laws, even the Hippocratic Oath, but also private individuals and for-profit businesses. Already, we’ve seen pediatricians turn children away because their parents are gay, and wife-abusers argue that it’s their religious duty to beat their spouses, and most notoriously that multimillion-dollar corporations like Hobby Lobby can have religious beliefs that permit them to refuse to provide health insurance to their employees on that basis.

I think this is a really pertinent point. Humans love certainty – we know this from psychological research. Multiple variables mean we have this uneasy psychological state of affairs in our brains, being unable simplistically to apportion blame. And we love to assign blame. In fact, this is something I looked at in “Have I killed someone?”, which I will quote from now.

Causality. It is a funny thing. Or not so funny.

A few years back, I took my class, as a teacher, on a trip to the Historic Dockyard in the naval city of Portsmouth, UK. My school is some 45 minutes’ walk and a short ferry ride from there. With the cost of coaches, it is important to be able to walk to such places to keep the costs down for parents.

We pasted it there on the way, and we were running a little behind, so the walk back at the end of the day was quicker still. One of our parents, helping with the trip, was a heavy smoker who had to stop off at strategic times throughout the day for a crafty kids-can’t-see-me smoke. Many of the children were moaning on the way back because they simply were not used to walking any such length of time. This certainly applied to some of the parent helpers too.

Anyway, we made it back for the end of the school day, so good effort.

Except, that night, we heard that the aforementioned parent helper had died. He had had a heart attack.

Ever since that moment, I have felt partly responsible for that outcome, of that man’s death. In a naive, folk understanding sort of way, that is.

In writing my book on free will, and in researching the Kalam Cosmological Argument, I have come to understand that causality is much more complex than one might imagine. A does not cause B which causes C in such a simplistic manner. At best, things are only ever contributory causes (see JL Mackie’s INUS notion of causality [1]); but even then, this assumes one can quantise time, and arbitrarily assign discrete units of existence to both events and entities.

Let’s look at the event of the class trip. Did it start when we arrived at the dockyard, when we got off the ferry, when we left, when I started organising it, or, indeed, were elements of the trip in place when I started planning the unit, given the job, got my teacher’s qualifications etc?

Of course, there is no objective answer to that. These abstract labels are subjectively assigned such that we can all disagree on them. That is, simplistically speaking, an element of conceptual nominalism. Likewise, there were necessary conditions in the parent’s life which contributed to his death: anything from his smoking, to his lack of general health, from deciding to come on the school trip, to deciding to get married and have kids. And so on.

An event happens in time and arbitrarily ascribing a beginning and an end to that event is an abstract pastime, and thus fails to be (imho) objectively and (Platonically) real.

Too simple! By aussiegall ([1]) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

Causality works through people, and harnessing it so that any one individual can claim themselves (morally) responsible for future effects which themselves are caused by effects preceding the individual makes for tricky philosophy. This is the battleground for the free will debate, for sure. Arbitrarily cutting causality up in such a way is problematic.

As I have set out in my analyses of the Kalam Cosmological Argument (KCA), which I hope to turn into a book (based on a university thesis I did on it), causality is not a linear affair which can be sliced and diced. It is a unitary matrix which either derives from a single beginning (like the Big Bang), something I find problematic, or stretches eternally backward, or reaches some commencement of time which could itself be a reboot. Either way, causality cannot be seen, and should not therefore be seen, as a series of discrete units which can be attributed to equally problematic notions of events or unities. We are one big family of causality, this here universe.

So, in answer to the question, no. No, I didn’t kill anyone. Perhaps we could say that the universe did. And whatever notion “I” am, and whatever “I” am represented by, sat on or, better still, was part of the threads which cross and recross intricately and almost infinitely over each other in a mazy web of interconnected causality.

Why mention all of this? Well, this idea of simplistic causal understanding is what underwrites the points from the DB article, such that Christians are claiming persecution and martyrdom from the secular political machine in the sense that “atheists are evil”.

If you ever get the chance to watch “Bitter Lake” by superb documentary maker Adam Curtis, then do so. Here is the first part:

http://www.dailymotion.com/video/x2hdcji

Bitter Lake is an adventurous and epic film that explains why the big stories that politicians tell us have become so simplified that we can’t really see the world any longer. It argues that Western politicians have manufactured a simplified story about militant Islam into a “good” vs. “evil” argument, informed by and a reaction to Western society’s increasing chaos and disorder, which they neither grasp nor understand.

This fits in perfectly with the narrative which supposedly persecuted Christians perpetuate. Their narrative is not true and is dangerous, prompting kneejerk and reactionary lawmaking from right-wing blowhards. Causality is far more complex, and it’s about time that those very same Christians tried to get to grips with the myriad variables that represent the changing seas of religio-political reality.

2016-01-06T16:13:09+01:00

Having posted the Philpapers survey results, the biggest ever survey of philosophers conducted in 2009, several readers were not aware of it (the reason for re-communicating it) and were unsure as to what some of the questions meant. I offered to do a series on them, so here it is – Philosophy 101 (Philpapers induced). I will go down the questions in order. I will explain the terms and the question, whilst also giving some context within the discipline of Philosophy of Religion.

This is the eighth post after

#1 – a priori

#2 – Abstract objects – Platonism or nominalism?

#3 – Aesthetic value: objective or subjective

#4 – Analytic-Synthetic Distinction

#5 – Epistemic justification: internalism or externalism?

#6 – External world: idealism, skepticism, or non-skeptical realism?

#7 – Free will: compatibilism, libertarianism, or no free will?

The question for this post is: Belief in God, theism or atheism? Here are the results:

God: theism or atheism?

Accept or lean toward: atheism 678 / 931 (72.8%)
Accept or lean toward: theism 136 / 931 (14.6%)
Other 117 / 931 (12.6%)

One might be inclined to leave it at that. It’s pretty simple, right? The obvious observation is that most philosophers are atheist. Most people who spend a lot of time deeply thinking about things and stuff and ideas and concepts conclude that God is indeed an idea or a concept in people’s minds. And nothing more. God has no ontic reality. God doesn’t “exist”.

Wikimedia Commons

 

But I don’t want to leave it there, because the devil is in the detail and looking more closely at the figures can lead to some more interesting questions. The subjects of this survey were professional philosophers and graduates around the world. However, as those who know philosophy well will attest, philosophy, being the study of, well, everything, normally requires one to specialise. Thus the subjects of the survey cover the whole gamut of philosophy disciplines: morality and ethics, religion, abstracts, science, maths, epistemology, consciousness and so on. The list is long. The key to this list is the philosophy of religion.

As many Christians have made great effort to emphasise, the philosophers within the philosophy of religion discipline return rather different results. As the figures above show, 72.8% of the 931 target-faculty philosophers who took the PhilPapers survey in 2009 said that they accept or lean towards atheism. Among philosophers of religion, though, 72.3% accept or lean towards theism. That is a big reversal. Here is what the Daily Nous says about this:

Adriano Mannino considers the question in a post at the group blog Crucial Considerations. Of these figures, he writes:

On the face of it, there are two hypotheses which could explain the data, one of them worrying for atheists, the other less so:

Expert Knowledge: Philosophers of religion possess expert knowledge on the arguments for and against God’s existence. The arguments for God’s existence are just overall more convincing and render God’s existence more probable than not.

Selection Bias: People often become philosophers of religion because they are religious, or at least have a high credence in God’s existence. Theists often become philosophers of religion, not the other way around.

He then makes use of the results of the data from the study by Helen De Cruz (VU University Amsterdam) of why philosophers of religion went into that field and how their beliefs concerning theism and atheism changed over time. He ends up concluding that the evidence is best explained by the “selection bias” hypothesis. He says:

The theists to atheists/agnostics ratio is even higher before exposure to philosophy of religion. This confirms the impression we got from considering philosophers’ motivations for doing philosophy of religion: most philosophers of religion were already theists when they started, so there is a strong selection bias at work.

Moreover, there are more philosophers of religion updating their beliefs toward atheism and agnosticism than toward theism, so we can reject the hypothesis that although there is a strong selection bias, expert knowledge favouring theism is still reflected in the fact that philosophers of religion convert more often to theism than to atheism/agnosticism while acquiring expertise in the field. The numbers show that the ratio of theists to atheists/agnostics declines with exposure to philosophy of religion.

From Mannino’s original post, I will take an extended quote which very nicely sums up the state of affairs:

Expert Knowledge or Selection Bias?

To draw further conclusions concerning the two hypotheses we need more empirical data, as the 2009 philsurvey does not contain sufficient information to determine their truth or falsity. Helen De Cruz’s study, however, contains qualitative data on why philosophers started doing philosophy of religion as well as quantitative data on how their beliefs concerning theism and atheism developed over time. This is exactly what we need.

Motivation for doing philosophy of religion: The study brings to light three main reasons for doing philosophy of religion. The most prevalent is described by the author of the study as “faith seeking understanding”: religious people who want to better understand their own belief. The second most frequently cited reason was proselytism and witnessing: many philosophers of religion felt that doing philosophy of religion was part of their calling as religious people. One philosopher for example wrote: “My religious commitment helps to motivate some of the work I do (part of which involves defending and explicating Christian doctrine)”. The third most cited reason was a fascination with religion as a cultural phenomenon. These results give some support to the selection bias hypothesis, but since the study does not contain the numbers for each of these responses it is too early to tell.

Belief-revision: The study contains numbers on how philosophers engaged in belief-revision due to their engagement with philosophy of religion. Of all the respondents (136 of 151 answered the questions on belief-revision), more than 75% underwent some degree of belief-revision on topics in philosophy of religion, and around 67% of all participants observed a change in their beliefs which can be attributed to philosophy:

no change: 24.3%

belief revision to atheism or agnosticism: 11.8%

belief revision to theism: 8.1%

philosophy polarized: 9.6%

philosophy tempered: 25%

other change: 12.9%

change, but not attributed to philosophy: 8.1%

These numbers show that there was an overall shift toward atheism/agnosticism of 3.7% if we compare both directions of belief-revision: the direction of belief-revision was most frequently in the direction of atheism/agnosticism.

This supports the view that the theists to atheists/agnostics ratio is even higher before exposure to philosophy of religion and confirms the impression we got from considering philosophers’ motivations for doing philosophy of religion: most philosophers of religion were already theists when they started, so there is a strong selection bias at work.

Moreover, there are more philosophers of religion updating their beliefs toward atheism and agnosticism than toward theism. This seems to weaken the hypothesis that although there is a strong selection bias, expert knowledge favouring theism is still reflected in the fact that philosophers of religion convert more often to theism than to atheism/agnosticism while acquiring expertise in the field. The numbers show that the ratio of theists to atheists/agnostics declines with exposure to philosophy of religion.

This verdict is confirmed if we look at the percentage of theists who report that exposure to philosophy of religion tempered their beliefs and the percentage of atheist who reported a tempering of their beliefs. Of the theists, 33.7% reported a tempering influence, whereas only 10.3% of the atheists reported a similar influence. In other words: a higher proportion of theists become less sure about beliefs such as taking the Bible to be literally true, accepting the Fall, regarding Catholics as heretics, etc. than atheists who become less sure about aspects of their atheism or more appreciative of theist views.

Despite these statistical conclusions it is still possible that most theists remain theists due to strong arguments for theism, and those atheists/agnostics who convert to theism do so for the same reason, while conversion to atheism/agnosticism happens due to weak arguments. This may be very implausible, but it is an epistemic possibility.

However, it seems that the fact that this epistemic possibility is highly unlikely is already sufficient to undermine appeal to authority arguments in this domain. In most cases it is truth-tracking to reject appeal to authority arguments if strong selection biases are at work in the field and the experts in question are more likely to reject the view in question after acquiring expert knowledge on the topic. Even if it just so happens that this is not the case when it comes to theism, it is still reasonable to reject appeal to authority arguments for theism, because rejecting such arguments is overall more likely to promote true beliefs. Long story short: atheists should not be worried about the theists to atheists ratio in philosophy of religion.
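As a quick sanity check on the net-shift figure quoted above: it is simply the difference between the two directions of belief-revision reported in the study:

11.8% (revised toward atheism/agnosticism) − 8.1% (revised toward theism) = 3.7% net shift away from theism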

I think this is really important information to bear in mind as the “most philosophers of religion are theists” card is very often played in answer to the philpapers’ stats.

I can well imagine that people getting into POR might well be seeking to post hoc rationalise their positions, and it is interesting that there is a net move away from theism within the discipline.

2016-01-06T16:18:41+01:00

This is a topic which I have covered in other ways before, both in the piece “Have I killed someone?” and “A Great Myth about Atheism: Hitler/Stalin/Pol Pot = Atheism = Atrocity – REDUX“. This idea that atheism causes people to do X or Y has reared its ugly head again. Why am I mentioning this now? Anton Lundin-Pettersson went into a school in Sweden with a mask, a helmet and a sword, looking pretty dark, and killed two people. And it looks like he was an atheist.

PZ Myers has written about it. He did not say that Anton Lundin-Pettersson was an atheist and that this caused his killing spree, but that atheists don’t like admitting when one of their own is a bad person, that atheists pull the No True Scotsman fallacy and skew the stats on atheist atrocities.

This may well be true.

But at the same time, people invariably do not commit atrocities on account of their lack of belief in a god, or their belief that there is no god.

So what’s going on here? Well, people have a very naive understanding of causality. People think in terms of billiard balls: one hits another, which hits a third. The universe is not like that. As I wrote before, in asking whether I was responsible for a parent helper dying on one of my school trips (I had made us walk really fast for 40 minutes on the way back, and that night one of the parent helpers had a heart attack and died):

In writing my book on free will, and in researching the Kalam Cosmological Argument, I have come to understand that causality is much more complex than one might imagine. A does not cause B which causes C in such a simplistic manner. At best, things are only ever contributory causes (see JL Mackie’s INUS notion of causality [1]); but even then, this assumes one can quantise time, and arbitrarily assign discrete units of existence to both events and entities.

Let’s look at the event of the class trip. Did it start when we arrived at the dockyard, when we got off the ferry, when we left, when I started organising it, or, indeed, were elements of the trip in place when I started planning the unit, given the job, got my teacher’s qualifications etc?

Of course, there is no objective answer to that. These abstract labels are subjectively assigned such that we can all disagree on them. That is, simplistically speaking, an element of conceptual nominalism. Likewise, there were necessary conditions in the parent’s life which contributed to his death: anything from his smoking, to his lack of general health, from deciding to come on the school trip, to deciding to get married and have kids. And so on.

An event happens in time and arbitrarily ascribing a beginning and an end to that event is an abstract pastime, and thus fails to be (imho) objectively and (Platonically) real.

This is too simplistic and I don’t buy it!

Causality works through people, and harnessing it so that any one individual can claim themselves (morally) responsible for future effects which themselves are caused by effects preceding the individual makes for tricky philosophy. This is the battleground for the free will debate, for sure. Arbitrarily cutting causality up in such a way is problematic.

As I have set out in my analyses of the Kalam Cosmological Argument (KCA), which I hope to turn into a book (based on a university thesis I did on it), causality is not a linear affair which can be sliced and diced. It is a unitary matrix which either derives from a single beginning (like the Big Bang), something I find problematic, or stretches eternally backward, or reaches some commencement of time which could itself be a reboot. Either way, causality cannot be seen, and should not therefore be seen, as a series of discrete units which can be attributed to equally problematic notions of events or unities. We are one big family of causality, this here universe.

So, in answer to the question, no. No, I didn’t kill anyone. Perhaps we could say that the universe did – and that whatever notion “I” am, and whatever “I” am represented by, sat on, or better still was part of, the threads which cross and recross intricately and almost infinitely over each other in a mazy web of interconnected causality.

NOTES

[1] Cause as INUS-condition. The most sophisticated version of the necessary and/or sufficient conditions approach is probably John Mackie’s analysis of causes in terms of so-called INUS conditions. Mackie suggested that a cause of some particular event is “an insufficient but non-redundant part of a condition which is itself unnecessary but sufficient for the result” (Mackie 1974: 62). Mackie called a condition of this kind an INUS condition, after the initial letters of the main words used in the definition. Thus, when experts declare a short-circuit to be the cause of fire, they “are saying in effect that the short-circuit is a condition of this sort, that it occurred, that the other conditions which, conjoined with it, form a sufficient condition were also present, and that no other sufficient condition of the house’s catching fire was present on this occasion” (Mackie [1965] 1993: 34). Thus, Mackie’s view may be expressed roughly in the following definition of ‘cause’: an event A is the cause of an event B if A is a non-redundant part of a complex condition C, which, though sufficient, is not necessary for the effect (B). Source.
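To make Mackie’s definition a little more concrete, here is a rough formal sketch (my own rendering, not Mackie’s notation; the letters follow his house-fire example). Suppose the effect B (the fire) occurs whenever either of two complex conditions obtains:

$$(A \land X) \lor Y \rightarrow B$$

Then A (the short-circuit) is an INUS condition for B if and only if:

$$A \nrightarrow B \quad \text{(Insufficient on its own)}$$

$$X \nrightarrow B \quad \text{(so } A \text{ is a Non-redundant part of } A \land X\text{)}$$

$$Y \rightarrow B \quad \text{(so } A \land X \text{ is Unnecessary)}$$

$$(A \land X) \rightarrow B \quad \text{(the conjunction is Sufficient)}$$

Here X stands for the accompanying conditions (oxygen, flammable material present) and Y for any rival sufficient cause (arson, say) which did not in fact obtain.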

The Swedish man’s atheism is not the cause of the killing. There are myriad causal factors. Moreover, I am an atheist and I don’t go around killing people, so there must be other things at play which lead to this. Atheism says nothing about morality, which is why frameworks such as secular humanism exist: to provide a more complex philosophy than the mere non-existence of God with which to define how to live one’s life.

This works both ways. Atheists have to understand that it is too simplistic to say religion causes X and Y. Often it is one of the major contributing factors, but the universe is very much more complex, and human psychology very much more complex, than the content of a religion simply causing some moral atrocity.

This is also not to invalidate the idea that something causes something else. I think we can still isolate things which are important or major contributing factors such that we can say, “X caused Y” as a shorthand for “X is one of the more major and proximal causes of Y”.


2016-01-06T23:56:16+01:00

Apologist Matthew Flannagan has criticised the points I made in the recent post “Inter-Testamental Moral Relativism”, which can also be expressed as “Covenantal Moral Relativism”, as Justin Schieber has put it. In that post I declared that the moral obligations being different between the Old Testament (OT) and the New Testament (NT) amounted to moral relativism (MR). Here is what Flannagan had to say:

This post seems to misunderstand both the torah, and the new testament, and also to not grasp what relativism is.

Taking the first point, and focuses on the example, of shellfish and mixed cloth and the suggestion that the NT supersedes or suspends these rules.

Actually, if you read the Torah you’ll see it doesn’t present God as commanding all people to refrain from eating shellfish. It states that the Jewish nation were prohibited from eating this food as a condition of a special covenant he had made with Israel; the torah does in fact state that Gentiles have permission to eat meat.

Moreover, when you get to the New Testament, and look at the context of Pauls writings ( and the council of Jerusalem in Acts 15) the issue was actually over whether Gentiles, that is non-Jews, had to convert to Judaism to become part of the church. The answer Paul gives is no, and that’s the context in which he argues that people are free to refrain from certain Mosaic restrictions. So it’s actually not a case where a group of people were previously bound by a set of rules and now aren’t. Gentiles were never required to keep these laws even in the torah. The issue is actually over which group is the covenant people.

Note also that Paul, when he argues that Gentiles don’t have to convert, contends that this answer is actually the answer the Old Testament had always given. He stresses that Abraham had been called as a Gentile prior to the giving of these commands, and that the purpose of the covenant with Abraham was always to “bless the nations” and so the idea that Gentiles are part of Gods people and Gods intention was always to include them.

(This incidentally is much of what is emphasised by the so called new perspective of Paul, proposed by people like Wright, James Dunn, Sanders and so on though Wright notes in fact its implicit in the reformed tradition all the way back to Calvin. But seeing you state you have researched this issue “extensively” you of course know this)

As to the second issue of relativism, you write:

“Or what was a sin in that cultural milieu is no longer a sin? I get it! To me, it looks like what was good and right as a moral truth worked for one set of people in a geographical and historical location, but not for another. The second set of geographical and historically contextualised people (yourself included) appear to have a different set of rights and wrongs. Who’s to say that yours won’t be superseded? This looks like, ya know, moral relativism.”

This simply misunderstands what relativism is, you seem to be arguing that if the moral requirements of one group of people differs from that of another relativism is true. But that’s just mistaken, whether relativism is true or not will depend on why those moral requirements hold, specifically what makes it true that a given action is required and what constitutes the requirement in question.

If the reason different moral requirements apply in the different communities is because those communities recognize different rights and make different demands, and that’s what constitutes these different requirements or makes it the case that they exist, then what you have is relativism. If on the other hand the reason different moral requirements apply is because different factual situations obtain in the two communities, and the fact in question obtain independently of whether the community in question accepts or rejects the requirement in question, then in fact what you have is objectivism, moral requirements are rooted in objective facts of the matter not the subjective preferences of society.

There is lots to address here, particularly the first sentence and the last paragraph. I do not think I have misunderstood moral relativism in the slightest! In fact, it seems rather like Flannagan has taken the tack which another commenter (Ryan M) and I were talking about. Ryan said:

The only way I see the relativism avoided is if such a theistic morality has propositions with truth conditions that depend on particular descriptive facts such as what the culture is like. In that sense, they can say some moral proposition M was true at some t1 but false at some t2. However, maybe that isn’t available to the Christian. I can’t say I want to read the bible to find out though.

To which I replied:

That is a really interesting point. I have thought about this before: like the individual context is part of the propositional statement about a moral truth, such that it becomes part of the absolute truth.

I think this could be an option, though thoroughly unwieldy. Also you would have to ad hoc rationalise that absolutist decrees in the OT/NT have HIDDEN contextual propositional content, which would be stretching things entirely.
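For the formally minded, the move Ryan and I are discussing can be sketched as follows (my own rough formalisation; M and c are illustrative symbols). On the naive reading, the very same moral proposition M flips its truth value over time:

$$\text{True}(M, t_1) \land \text{False}(M, t_2)$$

On the hidden-context reading, the context c is built into the propositional content itself, giving distinct propositions whose truth values never change:

$$\text{True}(M_{c_{OT}}) \land \text{True}(\neg M_{c_{NT}}) \quad \text{at all times}$$

So “eating shellfish is sinful” becomes “eating shellfish, as a Jew under the Mosaic covenant, is sinful” – true eternally, even once the covenant has lapsed. The cost, as above, is reading hidden content into decrees which are worded as absolutes.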

I think this is what Flannagan is referring to in his last paragraph. This problem with the reasoning involved in moral relativism, and whether the system could work at all, can be seen in this section and the next of the Moral Relativism entry in the Stanford Encyclopedia of Philosophy (SEP) – it is well worth reading for a more in-depth analysis than I will provide here.

As far as what moral relativism basically is, here is the SEP:

Most often it is associated with an empirical thesis that there are deep and widespread moral disagreements and a metaethical thesis that the truth or justification of moral judgments is not absolute, but relative to the moral standard of some person or group of persons…

Metaethical Moral Relativism (MMR). The truth or falsity of moral judgments, or their justification, is not absolute or universal, but is relative to the traditions, convictions, or practices of a group of persons.

With respect to truth-value, this means that a moral judgment such as ‘Polygamy is morally wrong’ may be true relative to one society, but false relative to another. It is not true, or false, simply speaking. Likewise, with respect to justification, this judgment may be justified in one society, but not another. Taken in one way, this last point is uncontroversial: The people in one society may have different evidence available to them than the people in the other society. But proponents of MMR usually have something stronger and more provocative in mind: That the standards of justification in the two societies may differ from one another and that there is no rational basis for resolving these differences. This is why the justification of moral judgments is relative rather than absolute.
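Put semi-formally (my sketch of the SEP’s point, not the SEP’s own notation): absolutism says a moral judgment M has a truth value simpliciter,

$$\text{True}(M) \ \text{or} \ \text{False}(M),$$

whereas MMR indexes truth to the moral standard S of a group G, so that

$$\text{True}(M, S_{G_1}) \land \text{False}(M, S_{G_2})$$

can both hold – with, crucially, no standard-independent fact of the matter to arbitrate between the two standards. Take M = “polygamy is morally wrong” and the groups as the SEP’s two societies.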

There are, reflecting moral philosophy in general, three areas of moral relativism: descriptive, normative and meta-ethical. Or: what people do, what they should do, and what right and wrong mean within a relativistic paradigm. We can forget the first, as this just describes the obvious – that there exist moral disagreements between cultures. The second talks about whether we ought to tolerate other moral value systems from other cultures etc. I think we need not worry about this either. The third is the important one; this deals with what defines the goodness and badness of a moral act, and how it is seen as true within different cultural and historical contexts.

I did not go into this in detail in the original short post, and as such there could be equivocation between myself and Flannagan on this account. As far as MR is concerned, Wikipedia states:

Meta-ethical moral relativists believe not only that people disagree about moral issues, but that terms such as “good”, “bad”, “right” and “wrong” do not stand subject to universal truth conditions at all; rather, they are relative to the traditions, convictions, or practices of an individual or a group of people.[3] …

Meta-ethical relativists are, firstly, descriptive relativists: they believe that, given the same set of facts, some societies or individuals will have a fundamental disagreement about what one ought to do (based on societal or individual norms). What’s more, they argue that one cannot adjudicate these disagreements using some independent standard of evaluation—the standard will always be societal or personal.

It seems fairly clear to me that if a group of people believe something is in some sense definitely descriptively and normatively required, morally speaking, but it is not required of another group of people (in this case separated by time and perhaps geography), then this qualifies as moral relativism. It is of course the context which prescribes such. This is the case if you think that the first and second sets of moral obligations held moral truth values.

This is superbly articulated by Flannagan himself:

Actually, if you read the Torah you’ll see it doesn’t present God as commanding all people to refrain from eating shellfish. It states that the Jewish nation were prohibited from eating this food as a condition of a special covenant he had made with Israel; the torah does in fact state that Gentiles have permission to eat meat.

Now, what separated Jews from Gentiles if not geography and culture? Did each individual Jew sign a covenant with God? This looks precisely like moral relativism as simply understood.

So it’s actually not a case where a group of people were previously bound by a set of rules and now aren’t. Gentiles were never required to keep these laws even in the torah. The issue is actually over which group is the covenant people.

OK, so Flannagan could claim that the OT Jews were simply fulfilling divine commands, such that it is divine command theory which defines the moral goodness; but this suffers the same issues as DCT in general, as I exposed in the last post. This means that the moral rectitude of such commands cannot be founded on moral reasoning, otherwise the reasoning provides the foundation for the moral value, as opposed to just God. This renders the actions of the Jews of this period effectively morally arbitrary (or based on commands from God which appear a-rational or morally arbitrary, as they cannot defer to third-party moral reasoning).

But it still means that a certain act has its moral value defined not by the act in and of itself (i.e. committing act A) but by who does it (a Jew or a Gentile). This is cultural moral relativism. Yes, God supposedly had a covenant. But that’s what ALL the cultures of the world claim of their morality: it’s somehow codified by a holy book or decreed by God, or spirits, or whatever. This is a sleight of hand to get the Torah Jews off the hook, but still allow moral relativism for all the other religions of the world to which Flannagan does not adhere. That’s moral relativism over there, but this here isn’t! Take God out of the picture and it is clearly moral relativism. Add God into the picture and, at best, Flannagan is arguing for DCT by special pleading his God, whilst claiming all other divinely decreed, culturally defined morality is moral relativism. He could deny MR altogether and claim that all other morality is DCT, just with the wrong God. It depends from which position you look at it. In some sense, then, Flannagan is almost invalidating moral relativism completely: every example of cultural relativism can be individually special pleaded, from within that culture, as being absolute, with all other cultures being wrong applications of the same sorts of moral frameworks.

From an objective and neutral point of view, this is problematic. In some sense, there is no wonder Flannagan cannot see this as moral relativism because he is judging it from within that paradigm, rather than being an objective neutral (as far as that can exist) and seeing Christianity and Judaism from an anthropological perspective, rendering each system as “just another religio-cultural moral value system”.

This gets me on to another point which I was going to make a post of, but will express here anyway as it is raised in this context. That is the nebulous definition of peoples – namely OT Jews – here. For truth values in moral propositions to be defined by groups of people, those groups of people must actually properly exist in some ontic sense. One might see this as group nominalism. The label “Jew”, in the context of people who have a covenant with God at the time of the Torah, is problematic. There are fuzzy areas around the outside, and the problem of the No True Scotsman (Jew) fallacy rears its ugly head. Who gets to define who is in X, the group of people for whom action A is morally good, and who is in not-X, where A is not morally good? This definition can have large moral consequences. Do certain actions morally invalidate people being called Jews, so that doing another action is now morally fine where it would have been bad? Do they define this themselves? Or the Jewish community around them? Or God?

It appears, then, that a person merely assigning an abstract label to themselves changes the moral evaluation of an action they do. This internalises the moral evaluation and moves it away from an objectivist account to an internalist, subjectivist account.

“I am a Jew. This action A is obligatory.”

“Actually, that was yesterday. I converted to Christianity today. That means that A is no longer obligatory and has a different moral value even though I am a human being committing an identical act.”

This actually looks like moral subjectivism, which the SEP discusses in the context of relativism:

The fact that social groups are defined by different criteria, and that persons commonly belong to more than one social group, might be taken as a reason to move from relativism to a form of subjectivism. That is, instead of saying that the truth or justification of moral judgments is relative to a group, we should say it is relative to each individual (as noted above, relativism is sometimes defined to include both positions). This revision might defuse the issues just discussed, but it would abandon the notion of intersubjectivity with respect to truth or justification—what for many proponents of MMR is a chief advantage of the position. Moreover, a proponent of this subjectivist account would need to explain in what sense, if any, moral values have normative authority for a person as opposed to simply being accepted. The fact that we sometimes think our moral values have been mistaken is often thought to imply that we believe they have some authority that does not consist in the mere fact that we accept them.

Where Flannagan goes on to talk about Paul and defining the orthodoxy of the newly founded Christian sect, we have further issues. Firstly, we can see that this one man, Paul, gets to define what is morally obligatory, whilst also recognising that he was arguing with others who believed the opposite to him at the time. This shows, empirically, that people were confused about the moral obligations, but also that the obligations appear to be contextually derived. What’s more, one can argue that the loosening of moral rules and procedures was very much for pragmatic reasons, since getting Gentiles to sign up to the weird and wonderful requirements of the Jewish orthodoxy was a tough call. Conversions would come much more easily, as Paul no doubt saw, if some of those requirements were dropped (like slicing parts of penises, and what you had to eat).

the issue was actually over whether Gentiles, that is non-Jews, had to convert to Judaism to become part of the church. The answer Paul gives is no, and that’s the context in which he argues that people are free to refrain from certain Mosaic restrictions. So it’s actually not a case where a group of people were previously bound by a set of rules and now aren’t.

Flannagan appears to want to argue that Gentiles were somehow amoral until converting. Because otherwise he is accepting that they adhered to moral relativism, or that they happened, by luck, to be moral beforehand on certain occasions (since they had no access to the Torah). The problem, again, is that such morality would have to be defined using moral reasoning, and not on account of Divine Command Theory, since they would have no access to those commands. Perhaps, though, they were acting luckily in accordance with commands they were not aware of; but then such morality is definitely arbitrary, because they just happened to be lucky enough, through no intention of their own, to accord with the commands of a god of which they had no knowledge. Defining those actions as good on account of X and Y means that the morality of such is defined by secular moral reasoning, or some other non-DCT moral value system.

You see, the Bible does not explicitly explain why action A is good in Torah times and not obligated in NT times. That has to be investigated and interpreted by theologians and people like Paul. This plays into relativistic and anthropological accounts, since there is nothing but command, and then an explaining away of those commands by Paul and others. The simple fact is that action A is good by supposed divine decree (which is actually the writings and oral traditions of people) at time t=1, but not good or obligated by people (yes, still Jews, but ones who eventually come to be called Christians) at time t=2. But Jesus/God actually decreed that those same Laws would be fulfilled in him, every jot and tittle of them (a previous point of mine which Flannagan ignored).

A further issue pertains to whether good and bad are defined not only by the Torah but also by other rabbinic writings, and the problem of divine miscommunication therein.

Flannagan also states:

He stresses that Abraham had been called as a Gentile prior to the giving of these commands, and that the purpose of the covenant with Abraham was always to “bless the nations” and so the idea that Gentiles are part of Gods people and Gods intention was always to include them.

Which rather looks like moral relativism just for Abraham; or, failing that, that he was using intuitive and subjective morality, and subsequent moral reasoning, to define what was right, and then formed a covenant with God to more closely define these moral obligations.

Ryan M went on to comment:

I know at least some will say that. As an example, I have seen apologists defend the notion that forcing a woman to marry her rapist was morally obligatory at some point on account of the culture of the time because the woman would have been ostracized otherwise. So in other words, some have said the culture made it such that a clearly false proposition now would have been true at the time.

A comment in reply to Ryan from Flannagan continued:

No, that just again shows a failure to distinguish between contextualizing and relativism.

Suppose culture A has a moral code which prohibits a person firing dead horses over castle walls in catapults during a siege. The reason they prohibit this is because dead horses are diseased and this tactic is both designed to and does in fact result in the population of the besieged city contracting diseases.

Suppose culture B doesn’t contain any prohibition regarding horses or catapults; instead it has a prohibition prohibiting firing dirty bombs at civilian populations.

Now, on your view this would be relativism, because different rules are being applied by different cultures in different contexts.

However, that’s clearly not relativism, because an objectivist can quite easily accept this claim is true and compatible with his position. He can simply state that there is an objective moral requirement to refrain from using weapons that inflict diseases, cancers, plagues and so forth on the civilian population of the enemy. The reason for the different requirements is due to different objective facts about the context: in society A, the technological level and techniques used in warfare mean that in that culture, people use catapults and dead horses to inflict diseases on civilians.

In society B, they don’t use catapults, they use bombs; however in society B, dirty bombs are used for the same purpose that catapults and horses are in society A.

This is the same kind of thing that occurs when people “contextualise” commands in the torah. In fact it’s the same kind of thing that occurs in normal common law reasoning, or any moral reasoning that reasons analogically from cases. To suggest this is automatically relativism is just erroneous.

This reasoning seems to play exactly into what I stated above, and just goes to show that there is an objectivist morality towards which we morally reason. One would need to ask why such acts were morally good; what makes them good. It seems that Flannagan implicitly accepts the examples above to be consequentialist. In fact, I would actually argue that consequentialism is what underwrites most moral evaluations, and certainly those of God and her followers, as I have set out in my essay “God is a consequentialist“. Again, this appears to play into the notion (if he were to deny consequentialism as deriving moral value in the OT/NT context) that Flannagan cannot ever see the OT/NT as moral relativism, since he is within that paradigm. He sees it as successive divinely commanded morality, and the cultural and historical contexts were true events which defined, and define, the moral truths as divine commands.
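The distinction Flannagan is trading on can be put schematically (my own rough formalisation, with illustrative symbols; this is not his notation). Objectivist contextualism posits one culture-independent principle P applied to varying descriptive facts F(C) about a culture C:

$$\text{Required}_C(a) \iff P(a, F(C))$$

Relativism, by contrast, has the requirement constituted by the culture’s own acceptance:

$$\text{Required}_C(a) \iff \text{Accepted}_C(a)$$

In the siege example, P = “do not inflict disease on enemy civilians”, F(A) = warfare there uses catapults and dead horses, F(B) = warfare there uses bombs. Which schema the OT/NT case instantiates then depends on whether the facts doing the work – here, who counts as the covenant people – obtain independently of the group’s own say-so, which is precisely what I am questioning.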

In other words, what Flannagan in effect does is not criticise my post, but criticise (correctly) meta-ethical moral relativism itself. It cannot be true, because it cannot arbitrate moral disagreements where something is supposedly both true and not true when seen in different contexts.

I am not a moral relativist and don’t think it makes sense as a moral value system, so to say, as I did, that there is Inter-Testamental Moral Relativism between the OT and NT is perhaps unfair, because I don’t believe such a system exists or could exist in the meta-ethical sense.

Of course, I am presenting the case from the Christian’s point of view, so it is useful to show that it does not make sense. If it is not DCT, then it is MMR, or a mixture of the two (as mentioned, descriptively and normatively it certainly appears to be MR). Neither makes sense, so it is a win-win. Thanks to Matt Flannagan for his comments.

