So yesterday I posted a piece that was very critical of Islam as a religious theology (in the context of teaching RE), just as I have spent ten years doing with Christianity. The comments thread exploded and, I have to say, the conversation was really pretty good. There were lots of people who disagreed with each other, but they did so, in the main, fairly respectfully. Of course, there were the odd exceptions, but it was generally quite a decent, robust conversation between opposing thinkers. That’s precisely what I’ve been trying to foster here.
If we disagree respectfully, we have a much better chance of nudging people with opposing beliefs in a given direction. If we butt heads, nothing ever really gets achieved other than a lot of heat and noise.
I was called out by some fellow liberals for my criticism of Islam. Part of the issue lies in the conflation of Islam with Muslims, something I am aware happens quite a bit. Let me be clear: I have been attacking the theology, not the vast range of believers. This is a case of dealing with normative ethics as opposed to descriptive ethics.
I was particularly arguing with Susan Montgomery and I was interested in this comment in particular as I think it goes to the heart of the matter:
I just don’t see the harm in accepting a fiction of a fiction. Unless this is you whining “you’re not the boss of me”, I don’t see how this impacts anyone in any meaningful way.
I said there were, simplistically, two options:
- The fundamentalist Muslim, who “accurately” believes in and interprets the Qur’an, adapts their morality to it.
- The liberal Muslim who, without theological justification, adapts the Qur’an to their morality.
My case is that if a Muslim is to accept the foundation of their belief, in this case the revelations of Mohammed as being accurate in their provenance and claims, then this should lead to option one. Yes, there are plenty of liberal Muslims, and I am not “taking sides” with the fundamentalists. I am merely trying to claim that the fundamentalists in option one align themselves more accurately with the Qur’an.
But you must also remember that I in no way find the foundations plausible. Islam is completely false in any kind of historico-theological sense because the foundations are false.
In one sense, this is merely a philosophical or theological observation, an attempt to get an accurate understanding of the world around us, including Islamic theology. In another sense it is an attempt to understand why people are the way they are. Why fundamentalist Muslims are fundamentalist Muslims. Without understanding this, we have no hope of dealing with any problems that might arise as effectively as possible.
The two options above can be seen as:

1) If A, then X → Y
2) If A, then X → Z

I am saying that X → Z is false and that, if A, then X → Y.
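On my reading, the schema can be made explicit as follows. The glosses on the letters are my own interpretation of the argument, not the original wording:

```latex
% A: the foundational claims of Islam (the provenance and accuracy
%    of the Qur'anic revelations) are true
% X: a believer sincerely accepts A
% Y: the believer adapts their morality to the Qur'an (option 1)
% Z: the believer adapts the Qur'an to their morality (option 2)
\begin{align*}
(1)\quad & A \rightarrow (X \rightarrow Y)
  && \text{the fundamentalist route, which I endorse given } A\\
(2)\quad & A \rightarrow (X \rightarrow Z)
  && \text{the liberal route, which I deny}\\
& \neg A
  && \text{and, in any case, the foundations are false}
\end{align*}
```

So the claim is a conditional one: *if* the foundations held, the fundamentalist inference would follow; but the antecedent itself is false.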
Susan is saying, and forgive me if I misrepresent her: “Who gives a shit if 1) holds? A is false. 2) is better for society, and telling people the truth of 1), if A, is potentially dangerous. Why the hell would you bother spending all this effort telling us that 1) holds better than 2) if A? What do you get out of it?”
If, indeed, this is what she believes, then this is a very good question. Why am I putting all this effort into establishing 1) over 2) when both are, in the end, false, and 1) is potentially dangerous to communicate?
The flipside is that insulating any ideas from being discussed or analysed sets a dangerous precedent. This is a hop, skip and a jump away from blasphemy law, and that sends shivers down my spine. If I can’t analyse ideas in an intellectual manner to better understand the world in a space like this, what does this say about the world we live in?
This becomes, as I discussed in the previous piece, a discussion about consequentialism. But herein there is much to discuss as we can disagree on what the moral currency that underwrites consequentialism really is.
In a sense, we are pitting different moral currencies against each other. This is, in its own way, an incredibly interesting discussion. Some people might say that knowledge is a moral currency in the same way that happiness or well-being is often seen:
Hedonism = the value of the consequences depends only on the pleasures and pains in the consequences (as opposed to other supposed goods, such as freedom, knowledge, life, and so on).
Susan’s question is about what impact my knowledge has in any meaningful sense. In fact, this knowledge, she implies, is dangerous. Let’s unpick this.
In the entry on “Consequentialism” in the Stanford Encyclopedia of Philosophy, mostly taken from the section “What is Good? Hedonistic vs. Pluralistic Consequentialisms”, we have the following (my emphases):
Some moral theorists seek a single simple basic principle because they assume that simplicity is needed in order to decide what is right when less basic principles or reasons conflict. This assumption seems to make hedonism attractive. Unfortunately, however, hedonism is not as simple as they assume, because hedonists count both pleasures and pains. Pleasure is distinct from the absence of pain, and pain is distinct from the absence of pleasure, since sometimes people feel neither pleasure nor pain, and sometimes they feel both at once. Nonetheless, hedonism was adopted partly because it seemed simpler than competing views….
Even if qualitative hedonism is coherent and is a kind of hedonism, it still might not seem plausible. Some critics argue that not all pleasures are valuable, since, for example, there is no value in the pleasures that a sadist gets from whipping a victim or that an addict gets from drugs. Other opponents object that not only pleasures are intrinsically valuable, because other things are valuable independently of whether they lead to pleasure or avoid pain. For example, my love for my wife does not seem to become less valuable when I get less pleasure from her because she contracts some horrible disease. Similarly, freedom seems valuable even when it creates anxiety, and even when it is freedom to do something (such as leave one’s country) that one does not want to do. Again, many people value knowledge of distant galaxies regardless of whether this knowledge will create pleasure or avoid pain….
Many consequentialists deny that all values can be reduced to any single ground, such as pleasure or desire satisfaction, so they instead adopt a pluralistic theory of value. Moore’s ideal utilitarianism, for example, takes into account the values of beauty and truth (or knowledge) in addition to pleasure (Moore 1903, 83–85, 194; 1912). Other consequentialists add the intrinsic values of friendship or love, freedom or ability, justice or fairness, desert, life, virtue, and so on.
If the recognized values all concern individual welfare, then the theory of value can be called welfarist (Sen 1979). When a welfarist theory of value is combined with the other elements of classic utilitarianism, the resulting theory can be called welfarist consequentialism.
One non-welfarist theory of value is perfectionism, which claims that certain states make a person’s life good without necessarily being good for the person in any way that increases that person’s welfare (Hurka 1993, esp. 17). If this theory of value is combined with other elements of classic utilitarianism, the resulting theory can be called perfectionist consequentialism or, in deference to its Aristotelian roots, eudaemonistic consequentialism.
Similarly, some consequentialists hold that an act is right if and only if it maximizes some function of both happiness and capabilities (Sen 1985, Nussbaum 2000). Disabilities are then seen as bad regardless of whether they are accompanied by pain or loss of pleasure.
Or one could hold that an act is right if it maximizes respect for (or minimizes violations of) certain specified moral rights. Such theories are sometimes described as a utilitarianism of rights. This approach could be built into total consequentialism with rights weighed against happiness and other values or, alternatively, the disvalue of rights violations could be lexically ranked prior to any other kind of loss or harm (cf. Rawls 1971, 42). Such a lexical ranking within a consequentialist moral theory would yield the result that nobody is ever justified in violating rights for the sake of happiness or any value other than rights, although it would still allow some rights violations in order to avoid or prevent other rights violations.
When consequentialists incorporate a variety of values, they need to rank or weigh each value against the others. This is often difficult. Some consequentialists even hold that certain values are incommensurable or incomparable in that no comparison of their values is possible (Griffin 1986 and Chang 1997). This position allows consequentialists to recognize the possibility of irresolvable moral dilemmas (Sinnott-Armstrong 1988, 81; Railton 2003, 249–91).
Pluralism about values also enables consequentialists to handle many of the problems that plague hedonistic utilitarianism. For example, opponents often charge that classical utilitarians cannot explain our obligations to keep promises and not to lie when no pain is caused or pleasure is lost. Whether or not hedonists can meet this challenge, pluralists can hold that knowledge is intrinsically good and/or that false belief is intrinsically bad. Then, if deception causes false beliefs, deception is instrumentally bad, and agents ought not to lie without a good reason, even when lying causes no pain or loss of pleasure. Since lying is an attempt to deceive, to lie is to attempt to do what is morally wrong (in the absence of defeating factors). Similarly, if a promise to do an act is an attempt to make an audience believe that the promiser will do the act, then to break a promise is for a promiser to make false a belief that the promiser created or tried to create. Although there is more tale to tell, the disvalue of false belief can be part of a consequentialist story about why it is morally wrong to break promises.
When such pluralist versions of consequentialism are not welfarist, some philosophers would not call them utilitarian. However, this usage is not uniform, since even non-welfarist views are sometimes called utilitarian. Whatever you call them, the important point is that consequentialism and the other elements of classical utilitarianism are compatible with many different theories about which things are good or valuable.
Instead of turning pluralist, some consequentialists foreswear the aggregation of values. Classic utilitarianism added up the values within each part of the consequences to determine which total set of consequences has the most value in it. One could, instead, aggregate goods for each individual but not aggregate goods of separate individuals (Roberts 2002). Or one could give up aggregation altogether and just rank total sets of consequences or total worlds created by acts without breaking those worlds down into valuable parts. One motive for this move is Moore’s principle of organic unity (Moore 1903, 27–36), which claims that the value of a combination or “organic unity” of two or more things cannot be calculated simply by adding the values of the things that are combined or unified. For example, even if punishment of a criminal causes pain, a consequentialist can hold that a world with both the crime and the punishment is better than a world with the crime but not the punishment, perhaps because the former contains more justice. Similarly, a world might seem better when people do not get pleasures that they do not deserve. Cases like these lead some consequentialists to deny that moral rightness is any function of the values of particular effects of acts. Instead, they compare the whole world (or total set of consequences) that results from an action with the whole world that results from not doing that action. If the former is better, then the action is morally right (J.J.C. Smart 1973, 32; Feldman 1997, 17–35). This approach can be called holistic consequentialism or world utilitarianism.
It is worth me reminding the reader that I am a moral skeptic. What this means is that morality does not exist in an ontic sense, out there in the ether. Instead, we construct morality in order to help us function socially, or perhaps out of self-interest, or for whatever reason. Morality is itself, arguably, functional – instrumental. Wow. Meta.
In this way, humans find consequentialism intuitively attractive. The question is, what do we use as moral currency to underwrite it? Generally, we seek things that are non-derivative. What this means is we look for a currency that cannot be derived further when you ask the “why” question.
“You gave that £10 note to that homeless person. Why did you do that? (Why is that good?)”
“So that he could go and buy himself some food.”
“Why do you want him to do that?”
“So that he can eat.”
“Why?” “Why?” “Why?”
Eventually, you will get down to something like “so that he can be happy” or “so that I become happier” or “so that humanity is happier”. Happiness or well-being are often seen as good moral currencies because they are non-derivative. When you give the answer “because it makes me happy” and I then ask you “why do you want to be happy?”, you will rightfully answer “um, because happiness is good”. It is an axiom. You cannot derive it further – happiness is self-evidently good (or well-being, pleasure, lack of pain, etc.).
The question is, can other qualities be input into this position as a moral currency? Is knowledge a good contender? Or is knowledge only instrumental?
“I want to learn this.”

“Why?”

“So that I can have more knowledge.”
Is this answer good enough? Can it act as an axiom? Does it have intrinsic value, or is the value extrinsic? Is knowledge only good because of the further goods it obtains?
“Why do you want more knowledge?”
“To make the world a better, happier place.”
I am somewhat undecided as to whether there is some kind of intuitive pull to knowledge beyond its instrumentality. Do I want to know these conclusions or facts about the theology of Islam for instrumental reasons? Perhaps it is just because I enjoy the process of learning and communicating. Learning and communicating my knowledge, blogging, gives me joy, and this is all part of that process. Does this all serve a purpose in giving me, in a self-centred way, a better understanding of the world so that I obtain some advantage? Or perhaps I am helping to build a body of knowledge that will make the world a better place.
Or is knowledge of these things good in and of itself?
Should we strive for accurate knowledge no matter what the consequences because, as a rule of thumb, that will get us to a better place, even if there are some bumps in the road? Or should we suppress knowledge on a case-by-case basis if it is morally useful to do so? Is it morally useful to do so here?