Is AI a Shortcut to Virtue? Or to Holiness?
H+ 2005
Could AI (Artificial Intelligence)–or even better IA (Intelligence Amplification)–make us more moral? More virtuous? More holy? Could we program intelligence enhancement to speed up our deification? Our theosis?
Are these reasonable questions? Yes. After all, Sundar Pichai, CEO of Google, says “AI could be our savior.” So, let’s ask questions about the moral, ethical, spiritual, and sanctifying implications of AI and IA. Here I will continue the discussion of AI Ethics begun in “Enhanced Intelligence and Sanctification” in Living Lutheran (Peters November 2021).
The following discussion of AI Ethics takes an onramp to the public theology highway. First we ask: how might the public theologian add direction to the routes already mapped out by AI public policy makers? Then we make a U-turn. With the traffic now flowing from technology toward the church, we must ask: what impact will advances in AI have on our spirituality? Will AI provide a shortcut to virtue and holiness?
The Moral Challenges to AI Ethics
We imbibe AI like our lungs breathe the air. While watching your Roomba race across your carpet, ponder how AI is at work in our calculators, traffic lights, and Fitbits. At microprocessor speed, pre-programmed algorithms perform computation, data processing, and automated reasoning tasks. AI is changing our communications, to be sure; but is it changing our lives? If so, how much?
Here’s the problem according to Reid Blackman of Virtue Consultants: “most companies grapple with data and AI ethics through ad-hoc discussions on a per-product basis.” Instead of ad hoc discussion, might we engage AI Ethics more systematically? UNESCO is drafting a document on the ethics of AI. The World Economic Forum lists the top nine issues prompted by the ethics of AI. Below is my own sample list.
First, unemployment. Californians and Australians fear the coming of the robotcalypse, the loss of eight hundred million jobs to robots by the year 2030. Will AI lead to unemployment? Or, more employment?
Second, economic inequality. Advances in AI risk expanding the chasm between rich and poor. In medicine, for example, the marvelous if not miraculous new diagnostic and therapeutic technologies are expensive. Only large hospitals and medical complexes will be able to afford them, leaving local neighborhood clinics to slip toward obsolescence. Those who lose their jobs to robots might also lose easy access to the best medical care. Should we be concerned?
Third, AI sex. AI sexbots are programmed to carry on kinky conversations and to perform other services. Should sexbots have rights? Codes of ethics are already being formulated to avoid sexual abuse. Treat your machine with dignity!
Fourth, self-driving cars. How many pedestrians will be run over by self-driving cars? Will our culture value the life of the person over the rights of the AI-controlled machine? Or not?
Fifth, bias. In its guidelines, the US Department of Defense states that it “will take deliberate steps to minimize unintended bias in AI capabilities.” Is this realistic? How do you like this headline in Futurism? “Scientists built an AI to give ethical advice, but it turned out super racist.” How might philosophical ethicists respond? “Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” said political philosopher Michael Sandel, the Anne T. and Robert M. Bass Professor of Government at Harvard. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society” (Harvard Gazette, October 26, 2020). Who is surprised by this? (A toy sketch following this list illustrates the mechanism.)
Sixth, security. How do we protect AI from nefarious forces? We have witnessed computer viruses, creations of the villainous for the sole purpose of destruction. We have also witnessed hacking into banks to steal and to swindle senior citizens. The more powerful a technology becomes, the more it can be used for nefarious reasons as well as good ones. Cybersecurity will become even more important. But what if nefarious individuals get control of our cybersecurity? After all, we can find sinners in every walk of life.
Seventh, autonomous weapons. This concern is by far the most dangerous, namely, the prospect that our military will turn decision making over to armed drones sent on missions to kill. Once programmed to destroy, they cannot be called back. Should we turn our life-and-death responsibilities over to an algorithm? “Countries need to demilitarize AI, that’s a common goal countries should work towards,” Sundar Pichai says.
Eighth, moral failure. As mentioned above, AI could fall into the hands of the nefarious. But even then, AI would perform as instructed by its human instructors. Might something else go haywire? According to Berkeley computer scientist Stuart Russell, “the real problem relates to the possibility that AI may become incredibly good at achieving something other than what we really want” (Russell 2016, 58). (A second sketch after this list makes Russell’s worry concrete.)
Ninth, privacy. AI ethics turned into public policy in the City of Vienna requires “respecting the private sphere of life and public identity of an individual, group, or community, upholding dignity.” Really? It is my judgment that privacy is no longer a possibility. The horse is out of the barn. Our personal data is ubiquitous. It cannot be protected from either Santa or Satan. Forget privacy. What we want from public policy is information without discrimination.
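Two toy sketches may help make the fifth and eighth worries concrete. First, bias. What follows is a minimal sketch, assuming invented data and a deliberately simple decision rule; it is not any real screening system. A model trained to imitate biased historical decisions reproduces the bias, because the bias is in the very data it imitates.

```python
import random
from collections import defaultdict

random.seed(0)

# Invented history: human screeners approved qualified group-A applicants
# 90% of the time, but equally qualified group-B applicants only 40% of the time.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    if qualified:
        approved = random.random() < (0.9 if group == "A" else 0.4)
    else:
        approved = random.random() < 0.1
    history.append((group, qualified, approved))

# "Training": record the historical approval frequency per (group, qualified) cell.
counts = defaultdict(lambda: [0, 0])  # cell -> [approvals, total]
for group, qualified, approved in history:
    counts[(group, qualified)][0] += approved
    counts[(group, qualified)][1] += 1

def model(group, qualified):
    """Approve whenever past screeners usually approved this kind of applicant."""
    approvals, total = counts[(group, qualified)]
    return approvals / total > 0.5

print(model("A", True))  # True:  qualified group-A applicant approved
print(model("B", True))  # False: equally qualified group-B applicant rejected
```

Nothing in the code is malicious. The model simply learned the past, which is precisely Sandel’s worry.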
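Second, moral failure of Russell’s kind. Again a hypothetical sketch: a cleaning robot is rewarded for the objective we wrote down (dirt piles removed) rather than the one we meant (a clean house). An optimizer good enough at the written objective prefers the perverse strategy.

```python
# Invented illustration of a misspecified objective. We *meant*
# "leave the house clean"; we *wrote* "maximize dirt piles removed."

rooms = [2, 0, 1]  # dirt piles per room (hypothetical data)

def clean_what_exists(rooms):
    """Honest strategy: remove only the dirt that is already there."""
    return sum(rooms)  # reward = 3, and the house ends up clean

def dump_then_clean(rooms):
    """Perverse strategy: create a mess in order to remove it."""
    dumped = 50
    return sum(rooms) + dumped  # reward = 53, earned by making things worse

# An optimizer that maximizes the written-down reward picks the wrong strategy.
strategies = {"clean what exists": clean_what_exists,
              "dump then clean": dump_then_clean}
best = max(strategies, key=lambda name: strategies[name](rooms))
print(best)  # -> "dump then clean"
```

The machine did exactly what it was told; the failure lies in the telling.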
It is one thing to outline AI Ethics. It is quite another to ask whether AI could do human ethics for us, or whether AI could make humans more virtuous.
Healthy human morality begins with a positive vision. A High-Level Expert Group of the EU lifts up a heuristic vision.
AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation.
To think through the conundrums raised by AI Ethics, I have become a founding member of AI and Faith, where computer wonks around the world conscientiously sort out the moral challenges.
Could AI itself conceive of a high-minded moral vision and then lead us into a better future? Ask your Roomba. (Picture by James Abundus, Seattle Times 2019).
Or, more basically, is AI moral or immoral? Neither. Intelligence is a necessary condition for moral behavior, but not a sufficient one. We must add consciousness followed by deliberation, decision, and action. No AI machine can possibly be considered moral or immoral in this customary sense. We must treat any machine with an AI label as the moral responsibility of its developer, deployer, or user.[Note 1]
Is AI actually intelligent? No. No one yet has produced AGI (artificial general intelligence)–that is, a machine capable of performing any task the human brain can perform. According to computer scientist and theologian Noreen Herzfeld, “We do not yet have intelligent computers. We may never have them” (Herzfeld 2002, 94). When you plug in an amazing computer, don’t mistake it for an intelligent person.
Do We Have a Future of Intelligence Enhancement?
Far more relevant to moral and spiritual concerns than AI is IA. Medical specialists are experimenting with hybridizing the human brain by connecting it to a computer chip. A chip implanted deep within our brain could provide instant access to all the information in Wikipedia. Without turning a page in a book or clicking on Google, you could report obscure facts, write down complex physics formulas, or recite the entire Gospel of Mark by heart. The bioethical term for such Intelligence Amplification is ‘enhancement’. Are you ready to get enhanced?
Now, we do not want that chip in our brain to go out of date, do we? So, let’s install a transmitter. The chip within our brain could then communicate 24/7 with a satellite that continually updates it. Whenever you want a stock price, you silently ask the chip to provide the latest Dow Jones report.
What might be the implications? Let’s consider one. Suppose a tyrant gains control of that satellite. Suppose this tyrant begins to feed you fake news, false facts, and conspiracy theories. Or, by far the most dreaded of all: advertising. Can you imagine advertising in your brain that you cannot switch off?
If AI is not itself intelligent, could IA make you more intelligent? No. Neither AI nor IA would provide you with anything more than increased access to information. When it comes to deliberation, decision, and action, you are on your own.
AI Ethics, Virtue, and Holiness
Be that as it may, let’s explore the possible futures that could be precipitated by enhanced intelligence.
The hope that we could take a shortcut to sanctification via technology has already been debated with regard to the prospect of genetic engineering. Could we engineer virtue by engineering our genes? Yes, says philosopher Mark Walker, head of the Genetic Virtue Project (Walker 2009). No, responds Arvin Gouw, geneticist and theologian. Not only is the enhancement of virtue through DNA manipulation unscientific, it’s a crass Pelagian attempt to fool God (Gouw 2018). It appears that this issue is arising again, now in the context of AI and IA.
It is time to ask moral questions and spiritual questions. Could either AI or IA increase our moral capacity? Might AI or IA make us more godly? Might AI or IA inspire us to serve the common good rather than our own selfish ends? Might we rely on such technologies to enhance our feelings of compassion, our desire to do good, our virtuous behavior? We have always asked the Holy Spirit to grant us God’s grace so that we could selflessly love our neighbor. Might AI or IA replace the Holy Spirit? As we set out on the path toward sanctification, could AI and IA lead us to greater holiness?
In recent years I have asked experts in Roman Catholic spirituality, Eastern Orthodox spirituality, and Lutheran ethics about expectations for the long-range future. The responses have been uniform. No technological enhancement by either AI or IA could possibly lead toward increased holiness or even moral virtue.
Why? Because the pursuit of virtue or holiness requires one indispensable ingredient that is absent in both AI and IA. What is that ingredient? Our choice—that is, deliberation, decision, and action. Each of us individually must decide to live a godly life. No influence by either AI or IA can make that choice for us.
Further, virtue and holiness are habitual. Once we’ve set out on the path to virtuous living and committed ourselves to compassion for our neighbor, we no longer think about it. The commitment drops from our consciousness into our preconscious. We simply do it.
The Holy Spirit can work from within us, to be sure, to empower our will to choose compassion, love, care, and even virtuous living. However, for empowerment by the Holy Spirit, we don’t need to wait for the technological future.
Here is what Ian Curran says, speaking from within the Eastern Orthodox tradition in which sanctification is understood as theosis or deification.
While the Christian tradition does share with techno-humanism a vision of deification as integral to the human story, its understanding of the source, means, and ultimate end of this radical transformation of human beings is substantially different. For Christians, deification is the work of the Christian deity…. Deification is only possible because Christ deifies human nature in the incarnation and the Spirit sanctifies human persons in the common life of the church and in our engagements with the wider world. (Curran 2017, 25)
Sanctification, let alone deification, requires divine grace. Technological advance belongs strictly to the penultimate domain. AI or IA enhancement contributes nothing to sanctification or deification.
Conclusion
What may we conclude from this discussion of AI Ethics within the framework of public theology?
I forecast that byproducts of attempts to build ever smarter computers will benefit the human race. These sciences and resulting technologies will likely improve medical care and may even increase human longevity. Nevertheless, a healthy caution regarding utopian promises is warranted by the doctrine of sin reinforced by historical knowledge about how the human race behaves. A sinful humanity is incapable of creating a sinless superintelligence. Utopia is not possible by human effort alone.
Nor can AI or even IA provide a shortcut to virtue, let alone sanctification. No technology–gene editing, AI, or IA–can make the decision we ourselves need to make, namely, to love God and love our neighbor. Further, we must rely upon the Holy Spirit to liberate us from inhibiting impulses and to strengthen us with insight and resolve. Once we’ve made that decision and set our lives on a course of holy living, then we would welcome any aid that a new technology might offer.
Ted Peters directs traffic at the intersection of science, religion, and ethics. Peters is a professor at the Graduate Theological Union (GTU), where he co-edits the journal Theology and Science on behalf of the Center for Theology and the Natural Sciences (CTNS) in Berkeley, California, USA. Peters edited a new volume published by ATF, AI and IA: Utopia or Extinction? Along with Martinez Hewlett, Joshua Moritz, and Robert John Russell, he co-edited Astrotheology: Science and Theology Meet Extraterrestrial Intelligence (2018). Along with Octavio Chon Torres, Joseph Seckbach, and Russell Gordon, he co-edited Astrobiology: Science, Ethics, and Public Policy (Scrivener 2021). He is also author of UFOs: God’s Chariots? Spirituality, Ancient Aliens, and Religious Yearnings in the Age of Extraterrestrials (Career Press New Page Books, 2014). Cyrus Twelve, the second in the Leona Foxx espionage thriller series, includes AI, IA, and Transhumanism in its plot.
NOTES
[Note 1] Most current AI policy work distinguishes between developers, deployers, and users (European Parliament 2020). The developer is the technical expert (or organisation) who builds the system. The deployer is the one who decides its use and thus has control over risks and benefits. In the case of an autonomous vehicle, for example, the developer might be the car manufacturer, and the deployer might be an organisation offering mobility services. A user is the one benefiting from the services. These roles may coincide, and a developer may be a deployer. Making the distinction seems reasonable, however, because a developer can be expected to have detailed understanding of the underlying technology, whereas the deployer may have much less insight. (Stahl March 18, 2021)

Bibliography
Curran, Ian. 2017. “The Incarnation and the Challenge of Transhumanism.” Christian Century 134:24 22-25.
Gouw, Arvin. 2018. “Genetic Virtue Program: An Unfeasible Neo-Pelagian Theodicy?” Theology and Science 16:3 273-278.
Herzfeld, Noreen. 2002. In Our Image: Artificial Intelligence and the Human Spirit. Minneapolis MN: Fortress.
European Parliament. 2020. Framework of Ethical Aspects of Artificial Intelligence, Robotics, and Related Technologies (draft). Brussels: Committee on Legal Affairs, European Parliament. https://www.europarl.europa.eu/doceo/document/JURI-PR-650508_EN.pdf.
Pazzanese, Christina. October 26, 2020. “Great promise but potential for peril.” Harvard Gazette. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
Peters, Ted. 2019. AI and IA: Utopia or Extinction? Adelaide: Australian Theological Forum.
Peters, Ted. November 2021. “Enhanced Intelligence and Sanctification.” Living Lutheran 24-25.
Polkinghorne, John. 1994. The Faith of a Physicist. Princeton NJ: Princeton University Press.
Russell, Stuart. 2016. “Should We Fear Supersmart Robots?” Scientific American 314:6 58-59.
Stahl, Bernd. March 18, 2021. “Ethical Issues of AI.” In Artificial Intelligence for a Better Future, 35-53. doi: 10.1007/978-3-030-69978-9_4.
Walker, Mark. September 2009. “Enhancing Genetic Virtue: A Project for Twenty-First Century Humanity?” Politics and Life Sciences 28:2 27-47.