The Secular Outpost
"In sum, I think that EN faces the same sorts of challenges as any other theory of ethics, and has the resources to address those challenges at least as effectively."
I agree with this wholeheartedly. But a couple of comments:
With regard to a species' telos (and specifically a human telos): a species adapts (or fails to adapt) to its ever-changing environment. Some environments change frequently and rapidly, others occasionally and slowly, with everything in between, so a species' telos will be a moving target, so to speak, if a species can even be said to have a telos.
Whether it's the flourishing of an ecosystem (deep ecologists like Peter Singer), a species (Aristotle, Protagoras, A. Pope), or an individual human (Nietzsche), the common theme is that THERE IS a fact/value distinction and THERE IS NO ultimate meaning or purpose in the universe. Each normative/ethical/moral system must therefore vie for supremacy in the infamous 'marketplace of ideas,' and each evangelist of an idiosyncratic system must present the best reasons, evidence, arguments, and rhetoric he can muster to convince others of the veracity and/or efficacy of his system.
Excellent point! Aristotle, of course, took the world as essentially static and thought that the current economy of nature was permanent. We know now, of course, that it is not. Still, for practical purposes, there is sufficient stability in the nature of the human organism that we, for instance, still live in the same moral universe as Job and Aeschylus. That is why we can continue to read such works for our own immense edification.
Thanks very much for the detailed response. The historical question of how Aristotle argued isn’t that important to me, but I think some clarifications are needed. When I asked about whether the values of a post-Darwin Aristotelian have or need metaphysical grounding, I meant to contrast those values with Aristotle’s, which were so grounded in the notion of a telos. You say we can “employ the vocabulary of contemporary biology to cash out Aristotle’s meaning,” but I don’t think that’s so, the efforts of some naturalistic philosophers, like Ruth Millikan’s, notwithstanding. I grant, of course, the Darwinian’s concept of adaptation (biological function), with the understanding that an adaptation isn’t at all normative. This is very tricky, though, because our artifacts tend to have functions and also to have been designed with some good in mind, and before Darwin biological functions were understood theistically in terms of an analogy between animals and artifacts: God designed creatures with functions because he highly values the creatures and wants them to achieve some goals. Darwin showed how we can keep the notion of a biological function, by understanding it as a natural adaptation, but the cost is that norms are stripped from biology, and the analogy between biological traits and intelligently designed ones is no longer thought to hold except at the most superficial level.
This becomes even trickier when we equivocate on the phrase “doing well.” Thus, you say that a naturally selected adaptation is a characteristic sort of behaviour that an animal is designed to engage in and to engage in “well.” But Aristotle’s normative question of how humans should live is also interpreted as one about how we can “live well.” There are, though, two senses of “well” that need to be distinguished. A mosquito sucks blood well, but not in any normative sense–at least, not on Darwin’s, as opposed to Aristotle’s picture. A mosquito is efficient at sucking blood and has inherited the ability to do so, but the mosquito isn’t actually serving any objective good by performing its adapted behaviour. There is no magnetic Force for Good in the sky, in Darwinian biology, and that’s why naturally selected efficiencies must always be understood ironically, as merely apparent means without objective ends. When a mosquito does well, then, “well” must be put in scare quotes. A happy life, however, for moral realists who think moral questions are objective, is a non-ironic kind of wellness, not just an efficient tailoring of apparent means to a pseudo end, but a fulfillment of an objective, real good.
Now, a telos is not a Darwinian adaptation. In Physics, Aristotle explains that a telos (purpose) isn’t just the possibly coincidental end of a process, but is necessarily a good. Thus, for Aristotle, a telos is already normative, prior to any consideration of how people think about happiness, and he has a complex metaphysical theory of final causes, relying ultimately on the peculiar idea of a magnetic Star Wars Force as the ultimate good. A Darwinian has no such theory, so can the Darwinian help herself to Aristotelian ethics?
You say that “Aristotle does not identify that telos and then deduce the nature of happiness from that. On the contrary, he begins with common notions of well-being or flourishing (eudaimonia) and observes, correctly, that humans do best when they” fulfill their function. Can a Darwinian ethicist proceed in the same way? Aristotle didn’t begin by citing popular notions of happiness to see whether we have any such good in the first place. On the contrary, his teleology implies that we have a normative, metaphysically grounded final cause, and the remaining questions are just whether we have an ultimate one and what in particular that ultimate one might be. He critiques popular notions of happiness to arrive at his own answers to those two questions, but the meta-ethical question of whether we have any objective good in the first place is answered at the metaphysical level, with his teleological arguments.
I’m assuming that a Darwinian ethicist can’t make use of that teleology, so what replaces it? Here are the empirical premises a Darwinian can use: Humans generally want to be happy; we’re naturally selected (adapted) to think and to be social; thinking socially is perhaps the most efficient way of being happy. Now, no normative conclusion follows from those premises. Once again, like mosquitoes, we may be ironically tailored to achieve a good that doesn’t exist. Oh, happiness may exist, but its objective goodness may not. And there’s no argument here backing up the prescription to be happy. Saying that because most people want to be happy, they should be so is to commit the naturalistic fallacy. You can say happiness is an axiomatic good, but the Underground Man shows that this isn’t self-evident; moreover, without teleology, you lose Aristotle’s assumption that of course we have an objective, natural good (such as happiness), because every natural kind has a (normatively understood) telos. It’s true, as Aristotle says, that a highest good isn’t done for any other good, but that doesn’t mean there can be no argument in favour of there being such a good. Aristotle argues for our all having an objective good in the first place, with his teleological concept of a final cause. A Darwinian lacks that argument and must replace it without blatantly committing the naturalistic fallacy. (Alternatively, you can argue there’s no such fallacy, and blur the line between facts and values.) The argument can be indirect, as you say, but it’s got to connect facts with values, without teleology and without ignoring Hume on facts and values. A tall order, and one that Sam Harris, for example, has failed to fill.
So I think a Darwinian Aristotelian has a *unique* set of problems that derives from the attempt to disentangle Aristotle’s ethics from his metaphysics.
Finally, you raise the problem of whether humans should respect other animals. This isn’t quite the problem I meant to raise when I compared humans to viruses. I grant that as a managerial matter, we may have to care for other species to be happy ourselves. And again, as you say, we may be able to manage inevitable conflicts between species, calculating the necessary sacrifices for the greater good. But my point was meant to be deeper. Aristotle looked at every species, saw that it has a function, and identified the endpoint of carrying out the function as a good. Thus, he connected functionalism and teleology, and he interpreted the telos optimistically. Schopenhauer, by contrast, saw the ultimate, value-laden end point, the Will, as a great evil. And so we can distinguish between optimistic and pessimistic teleology. Those are extremes, but they get at my point, which is that an optimistic interpretation of our particular function (social rationality) isn’t obviously warranted. It’s no good saying there are conflicts between all species, because humans are clearly unique in terms of our control over other species. I think a Darwinian Aristotelian needs an argument for her *optimistic* interpretation of our function. It’s one thing to say we’re good at being rational; it’s quite another to say that being rational is good. More specifically, it’s one thing to say that humans are happy when we flourish, carrying out our natural functions. It’s quite another to say that happiness is an objective good, that we ought to be happy because our flourishing serves a natural good.
Thanks much, again, for the very thoughtful response, and I hope I can respond in similar detail again soon. I am a bit busy right now with summer term about to begin. In the meantime, I hope you do not mind if I draw your attention to the terrific book by Larry Arnhart, Darwinian Natural Right (SUNY Press, 1998), where he argues that Darwinian biology supports an Aristotelian ethic. Thanks once again.
I’ll look for your response.
Judging just from the Amazon and Secular Web reviews, it seems that Arnhart identifies the good with the desirable, deriving ethical norms from natural, universal, biological desires. So what would Arnhart’s premises be that license any prescriptive conclusion? There seems to be a pragmatic move: to achieve our desires, we need rules or conventions and so we have hypothetical imperatives. I didn’t get into this in my last response, but I’d argue that hypothetical imperatives are equivalent to descriptions, not to prescriptions. Thus, if a Darwinian or an Aristotelian theory ends up only with hypothetical, pragmatic imperatives about the “best” way of achieving desires, the theorist doesn’t yet have a theory with ethical implications about how we (merely) ought to act.
Does Arnhart say something like, “If you have an instinct for raising a family, then you ought to respect other people’s similar instincts, since following some such rule is the most efficient way of satisfying your instinct”? If so, he’s not yet prescribing the instinct or even the rule. Suppose you have the instinct to raise a family and you agree that the most efficient way of achieving that desire is to respect other people’s similar desires. And suppose you violate the rule and harm someone else’s family. In so far as the rule is part of a hypothetical imperative, the harming violates only norms of instrumental rationality, not ethical norms. The violator would either miscalculate the probabilities of different means of achieving her goal or else ignore her rationality altogether and act out of some nonrational capacity (the brain is, after all, modular). I just don’t see the relevance of such failings to ethics.
Arnhart seems to share Aristotle’s optimism, as though it were self-evident that natural selection produces goods rather than evils or indeed rather than morally neutral traits, like reason or instincts. Suppose we were living in the worst of all natural worlds, like Schopenhauer’s. Natural selection would design organisms that are efficient in achieving evil ends. They’d have natural desires, and those desires would have negative rather than positive ethical value. Why believe we’re living in the optimist’s rather than in the pessimist’s world?
Conway’s Game of Life is a cellular automaton that can serve as an analogy for a mechanistic universe. Now in that automaton there are so-called gliders, i.e. simple configurations that remain stable and move in a straight line. Would it make sense to say that a glider’s telos is to remain stable and move in a straight line? I don’t think so. The concept of telos entails proper function, and proper function cannot be naturalized (or at least, to my knowledge, nobody has found a way to do so).
One could argue that the Game of Life automaton is too simple for us to derive from it any knowledge about reality. But, as it happens, it has been proven that the Game of Life can be used as a universal Turing machine. Assume that our physical universe is deterministic. Then there is a Game of Life initial configuration that is one-to-one equivalent to the history of our universe. If there is telos in our universe then there should be telos in that Game of Life automaton too. But how can that be? Indeed, how can particular regions in that cellular automaton be “good” and others be “bad”?
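For readers who haven't played with the automaton, the glider behaviour mentioned above is easy to verify directly. The following is a minimal sketch (the representation and function names are my own, not anything from this thread): Life's rules are applied to a set of live cells, and after four generations the standard glider pattern reproduces its own shape shifted one cell diagonally — purely as a consequence of the mechanical update rule, with no "proper function" anywhere in the code.

```python
from collections import Counter

def step(alive):
    """One Game of Life generation; alive is a set of (x, y) cells."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in alive)
    }

# The standard glider (y increases downward):
#   .X.
#   ..X
#   XXX
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider is its own shape, shifted by (1, 1).
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # → True
```

The point of the sketch is that "the glider moves" is just a description we impose on a pattern of cell births and deaths; nothing in the update rule singles the glider out as having a function, which is the intuition behind the comment above.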
Keith Parsons wrote: “Ethical naturalists ask: ‘What are humans distinctively adapted to do? Of course, we share many adaptations with other animals since we have to meet many of the same environmental challenges they do, but are there any natural capacities that stand out in humans, and signify that we are particularly well adapted to live in certain distinctive and characteristic ways?’ Conversely, can we identify a certain lifestyle that humans seem particularly, indeed uniquely, well-adapted to live? Aristotle held that we can. Humans are particularly well adapted to live the life of a rational creature in society with other rational creatures, so he identifies this as the human telos…. As for rationality, Aristotle says that this consists of two distinct capacities—the ability to think rationally and the ability to adapt our behavior according to rational rules. We use both of these capacities when we engage in the distinctively human activity of deliberation.”

Comment: Given that humans evolved from non-human animals, aren’t we simply adapted to survive and procreate, to pass on our genes to the next generation of human animals? There is nothing ‘distinctive’ about this goal or telos; it is just the same goal that is imposed on all species of plants and animals.
Although I might agree that the continuation of the human species is a good thing, I don’t see why I should make the goal or telos of evolution into my own personal objective. If the survival of the human species depended on me becoming a ruthless murderer (as in Nazi ideology), might not it be reasonable for me to decline to make the objective of the survival of my species the highest or most ultimate goal of my life? Might I not reasonably choose to be a decent, kind, and considerate person even at the cost of the ultimate survival of my species?
Also, why should we look to what is distinctively human? Why is what is distinctive about us any more important than what we have in common with other species of animals?
I can imagine a different world in which all animals were rational (imagine the famous bar scene from Star Wars or just about any scene from the Narnia movies). In such a world, rationality would NOT be distinctive of the human species. But that means that what Aristotle considered most important about humans would be insignificant if the world had contained different sorts of animals than it currently does. Why should what is most important and essential to morality be grounded in such a random and contingent fact, in a circumstance that might easily have been otherwise?
I suspect that what is distinctive is what Aristotle focused on because doing so would yield the answer that he already had in mind: rationality. If rationality had not been distinctively human, I suspect that Aristotle would have looked for some other way to justify the conclusion that rationality is what is most important for humans.