After a week of talking about transhumanism, brain-hacking, and the persistence of identity, I couldn’t pass up a chance to comment on Bryan Appleyard’s slam on using science to improve people’s moral character.
Moral enhancement cannot be a scientific project because neither term has any measurable meaning that can be universalised. Rather, it is an ideological project which would hand power to an oligarchy of neuropharmacologists who would be permitted to decide that somebody – probably them – had the power to determine our moral status. This embodies the familiar delusion of many powerful and prejudiced people that all history and culture attained some kind of apotheosis at the moment of their birth. The point is that there are as many definitions of morality as there are human societies. Dr Sandberg spoke about making people less violent which sounds fine until you realise that, for example, the Taliban would regard such a drug as immoral, refuse to take it and conduct a gleeful onslaught on the newly pacific remainder of the world’s population.
Two quick notes, before my main objection: first, as I wrote the first time I talked about chemically-induced moral jump discontinuities, I share Appleyard’s worry that the medical community tends to define ‘normal’ within a dangerously narrow spectrum. This is a reason to be cautious when evaluating any engineered boost to our moral character, not a reason to dismiss the possibility outright.
Second, Appleyard seems to be endorsing moral relativism, or, at the very least, strong epistemological modesty on questions of right and wrong. Yet just a little later in his post, Appleyard cites a possible Taliban victory as an obviously bad outcome, and clearly expects his audience to agree. Plenty of moral questions are obscure because we’re confused about the stakes or the facts, but Appleyard’s own rhetoric presupposes that some choices and cultures are superior to others. Now on to my main issue…
I find Appleyard’s argument from pragmatism surprising, since it could be brought to bear against any moral improvement, regardless of whether it was pharmacologically induced. Appleyard seems to see moral growth as a collective action problem — if improvement happens unevenly, those of us who are ‘too good’ will be patsies for the defectors who strategically remained bad enough. Think of it as the Prisoner’s Dilemma writ large.
Appleyard’s argument only makes sense if being as ethical as possible is not an end-in-itself. If you take moral perfection as your telos, it is tautological that nice people finish first. Radical forgiveness or any other kind of extreme moral witness might cause such people physical or emotional pain, but comfort isn’t the metric they’re using to score their lives.
The strongest argument I’ve heard for avoiding moral martyrdom is a stronger call to stewardship. After all, you shouldn’t be so focused on keeping your own hands clean that you retreat from the world, abandoning the possibility of doing good for fear of doing evil. (I’m just going to be self-indulgent and throw in a link back to my Sweeney Todd post, since, in that musical, Sondheim forced his characters to live in a dystopia totally inimical to innocence and goodness and then let the audience see the different ways they broke).
I don’t think most of the brain-hacks available now or in the next few decades will put us at risk of being too good to survive contact with the world. But even if they did, we shouldn’t take it as a foregone conclusion that we ought to choose self-preservation and remain dangerously flawed beings. It shouldn’t surprise us that morality doesn’t optimize for survival; evolution is blind to ethics.