There’s no such thing as artificial ethics.
“The information society is like a tree that has been growing its far-reaching branches much more widely, hastily, and chaotically than its conceptual, ethical, and cultural roots.”
—Luciano Floridi, Information: A Very Short Introduction (Oxford, 2010)
We can imagine the growth of artificial intelligence as a huge tree spreading outward in all directions. Its branches are bearing real fruit: AI is continually finding new ways to cut costs, save energy, cure diseases, drive cars, recognize faces, target shoppers, make money, and sway voters.
These applications are the tree’s branches. But what about its roots?
Roots give a tree its food and strength. They keep the tree from falling over. In the case of AI, the risk is that new technologies grow faster than the “conceptual, ethical, and cultural roots” needed to sustain their healthy growth.
AI, and information and communication technologies (ICTs) in general, have become so much a part of daily life that they have begun to shape culture and ideas.
Perhaps the biggest ethical risk these technologies pose comes from false presumptions about AI’s capacity to make good ethical decisions. AI is capable of amazing feats. It can beat the world’s best professional Go player, for example. But there is a big difference between besting a human at the game of Go and understanding the moral values that make us human in the first place.
As a start toward understanding the moral issues associated with AI and ICTs, I would like to point out three “myths,” or misconceptions, that could influence attitudes in popular culture.
Myth #1: you can live two lives.
We spend more and more time online. We make friends online. We build relationships online. We share more and more bits of ourselves online. It almost seems like we can create a new self, a new identity, by carefully designing our online presence.
But no matter how sophisticated social networks and ICTs become, they do not create a new reality. There’s only one you. Human beings are flesh-and-blood, finite creatures. The challenge is to live an integrated life. This means being real with people, both online and in real life. Real persons have consciousness, free will, and moral agency. We cannot delegate these aspects of our humanity to bots or apps, no matter how intelligent they become.
So what does this have to do with ethics? First, the pervasive influence of online relationships begins to shape our ideas about what makes a relationship meaningful. Online relational boundaries can be crossed or violated in ways that would never happen in real life. Facebook’s “emotional contagion” experiment, for example, treated people like nameless, faceless bit streams whose emotional states were merely data open to manipulation. The researchers never would have treated people that way if they had been sitting at a table together.
Myth #2: organisms are algorithms.
This is a false conclusion of the mindset that places technology, and especially AI, at the apex of human evolution. No question, AI is a powerful force that can do much good. But if this force is treated as an ultimate good, in the absence of God or any transcendent purpose to life, it leads to nihilism, and despair of any meaningful existence.
Yuval Harari, in his best-selling book Homo Deus, argues in favor of this atheistic ultra-materialism. He contends that technology will ultimately accomplish the goal of human evolution, which is “to upgrade humans into gods, and turn Homo sapiens into Homo deus.”
The ethical failing here is to deny the existence of any transcendent values in concepts such as virtue, human dignity, and moral law. If AI is all there is, then at the end of the day, there is no such thing as ethics. There is only the relentless advance of technology. We do not have to look far back in history to see where that sort of thinking leads.
Sadly, in my opinion, this sort of thinking, expressed compellingly by Harari, becomes popular when revelatory faith statements are denied as sources of moral authority.
Myth #3: AI can become a source of moral authority.
To see the danger in this myth, we need to recognize the difference between intelligence and wisdom.
AI will get better and better at making smart decisions that employ ethical reasoning. Safety rules programmed into driverless cars are a good example. Our roads will become safer as a result. But this is not the same thing as generalized moral deliberation; it is mere quantitative utilitarianism.
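To make that distinction concrete, here is a minimal, hypothetical sketch of the kind of safety rule a driverless car might encode. The function names, the 1.5-second reaction time, and the fixed braking deceleration are illustrative assumptions, not drawn from any real system:

```python
# Hypothetical sketch: a hard-coded safety rule of the sort a
# driverless car might apply. Note that the "ethics" here is purely
# quantitative -- it compares two numbers; it does not deliberate.

def safe_following_distance(speed_mps: float, reaction_time_s: float = 1.5) -> float:
    """Distance (meters) needed to react and then brake, assuming an
    illustrative fixed deceleration of 7 m/s^2."""
    braking_distance = speed_mps ** 2 / (2 * 7.0)
    return speed_mps * reaction_time_s + braking_distance

def should_brake(gap_m: float, speed_mps: float) -> bool:
    # Brake when the measured gap falls below the computed threshold.
    return gap_m < safe_following_distance(speed_mps)

# At 20 m/s (~72 km/h) the rule demands roughly 58.6 m of headway.
print(should_brake(gap_m=40.0, speed_mps=20.0))   # True: too close, brake
print(should_brake(gap_m=80.0, speed_mps=20.0))   # False: enough room
```

Rules like this will genuinely make roads safer, but notice what they are: arithmetic comparisons over measurable quantities. No amount of such rules amounts to weighing values or exercising wisdom.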
Spiritual growth engages the intellect, but it also seeks to know in deeper ways and to grow in wisdom. This requires reflective, reasonable, responsible, and active belief. “Faith seeking understanding,” as St. Anselm put it.
Here’s a simple example. Write an algorithm for this, please:
For everything there is a season, and a time for every matter under heaven:
a time to be born, and a time to die;
a time to plant, and a time to pluck up what is planted;
a time to kill, and a time to heal;…
a time to embrace, and a time to refrain from embracing;
a time to keep silence, and a time to speak; …
a time to love, and a time to hate;
a time for war, and a time for peace.
That’s the ancient wisdom of Ecclesiastes 3. It’s not reducible to code. It requires a living, breathing, human soul. It’s a faith journey. Which is why you cannot write an algorithm for wisdom.
Wisdom begins with Awe-of-the-Lord.
The lie is to believe we can reduce ethics to code.
The truth is: no soul, no ethics.
In conclusion, we can take a big step in the right direction, toward digital wisdom, by spotting these myths and debunking them.