I don’t know how self-driving car technology ranks on a difficulty scale. Perhaps it’s not as difficult as rocket science, but it still must be very hard. Add to that the challenge of programming a self-driving car to make moral decisions.
Take, for example, the MIT Media Lab experiment called “The Moral Machine,” which was “designed to test how we view…moral problems in light of the emergence of self-driving cars.” If a self-driving car were in a ‘moral bind’ in which it had to hit an elderly person, a child, or a pet in order to spare the others, what should it do? What would you do?
Concerning what the self-driving car should do, the Moral Machine collected “over 40 million moral decisions made by millions of individuals in 233 countries.” The findings were somewhat predictable, though there were some interesting discoveries across the world:
In general, we agreed across the world that sparing the lives of humans over animals should take priority; many people should be saved rather than few, and the young should be preserved over the elderly.
However, there were also regional clusters of variation within these broad agreements. For example, individuals in ‘southern’ countries, including the African continent, showed a stronger preference for saving the young and women first, especially in comparison to those in ‘eastern’ countries, such as countries in Asia.
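To see why even the “predictable” part of these findings is hard to hand to a machine, consider a deliberately naive sketch. The code below is purely hypothetical, invented for illustration rather than drawn from any real autonomous-vehicle system: it encodes the survey’s aggregate ordering (humans over animals, many over few, young over old) as a simple lexicographic rule in Python.

```python
# A deliberately naive sketch, NOT a real autonomous-vehicle module.
# It encodes the Moral Machine's reported global preference ordering
# (humans over animals, more lives over fewer, young over old) as a
# lexicographic rule. Every name and number here is hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable collision."""
    humans_spared: int   # people the car avoids hitting
    animals_spared: int  # pets the car avoids hitting
    young_spared: int    # how many of the spared humans are young

def preference_key(outcome: Outcome) -> tuple:
    """Rank outcomes by the aggregate survey ordering, in priority order:
    spare humans first, then as many lives as possible, then the young."""
    return (
        outcome.humans_spared,                           # humans over animals
        outcome.humans_spared + outcome.animals_spared,  # many over few
        outcome.young_spared,                            # young over old
    )

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome the aggregated survey ordering prefers."""
    return max(outcomes, key=preference_key)

# Example: the rule swerves toward the pet rather than the child,
# because sparing two humans outranks sparing one human and one animal.
swerve_to_pet = Outcome(humans_spared=2, animals_spared=0, young_spared=1)
swerve_to_child = Outcome(humans_spared=1, animals_spared=1, young_spared=0)
print(choose([swerve_to_pet, swerve_to_child]))
```

Even this toy version exposes the problem: the moment the ordering is written down, a programmer must defend each tie-breaker and decide how regional differences like the ones above should change the ranking, which is exactly the judgment the researchers admit we do not yet know how to make.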
Ethical considerations will differ across cultures based on what a given society values most. Asian countries place a higher premium on elderly people than many other societies do, so it is understandable that saving the elderly ranked as a higher priority there. I wonder if there are cultures that would place a higher priority on cats—perhaps a place like Portlandia. If so, I don’t look forward to self-driving cars making their way here to the Pacific Northwest.
No matter where one lives, choosing whom to save and whom to hit in a moral bind would be quite difficult, even for a self-driving car. After all, humans are the programmers. One of the researchers in the ethical study, Edmond Awad, summed up the aim and the difficulty: “The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to…We don’t know yet how they should do that.”
Not only do our views of the young, the old, and cats shape artificial intelligence, but so, too, do our attitudes about race. One study reveals that AI is learning our worst racial dynamics. Like programmer, like programmed. What will it take to un-program the racist impulses in ourselves? If we can’t reprogram our own racial wiring, what hope is there for our machines? And how might those racial impulses affect end-of-life care and the termination of life, whether those making the decisions are humans or machines? Are vulnerable populations more susceptible to abuse when it comes to pulling the plug on the machine?
Making ethical determinations requires computations of a far different kind from those that merely make self-driving cars and robots operate. The moral reasoning involved in end-of-life care is not a hard science like robotics; it may actually be much harder, because ethical determinations are far more subjective.
It won’t do for us to try to make self-driving cars and robots determine our ethical choices for us. We program them, though perhaps someday they will reprogram us. Even so, while we can use machines to make calculations, we must not kick the can down the road and leave it to the machine to decide who should be spared (a child, an elderly person, or a cat) and when someone should kick the bucket. Otherwise, we have lost our humanity. To choose ethically is to be human.
Besides, it is not only the making of ethical decisions that matters. It also matters what kind of person is making them: hopefully someone who is even-tempered and measured, neither a tempestuous nor a timid soul. Those who would rather not make ethical decisions cannot plead innocence or virtue, including on end-of-life issues such as the termination of life. Choosing not to decide is as much a decision as deciding what to do, including whether and when to aid or allow someone to die. Never say: the self-driving car or the life-support system made me do it.