Hopes and Hazards of Artificial Intelligence
A tsunami of controversy regarding the future impact of Artificial Intelligence (AI) has engulfed us over the last 15 months. The storm whipped up winds of fear with the arrival of ChatGPT and similar Large Language Models (LLMs) in November 2022. A hundred million users immediately found their speed and human-like quality both uncanny and amazing. But the techie giants — Microsoft’s Bill Gates, Google’s Geoffrey Hinton, OpenAI co-founder John Schulman, transhumanist Ray Kurzweil, along with Tesla maverick Elon Musk — are piling sandbags in defense against a dangerous tidal wave they see coming.
What dangers lurk in the AI tsunami? First, the possibility of superintelligence — what our transhumanists have been calling the “Singularity” — taking over the world and dispensing with the human race. Second, bad actors with malicious intent getting hold of powerful AI tools, disrupting global communications, and letting loose lethal autonomous weapons. These ominous forecasts have sent many in the world’s Silicon Valleys racing for their bomb shelters.
How should we realistically balance the hopes and hazards of artificial intelligence?
VAI at the Center for Theology and the Natural Sciences (CTNS)
I’m currently working with Braden Molhoek, a professor who holds the Ian G. Barbour Chair in Theology and Science at the Graduate Theological Union (GTU) in Berkeley, California. Braden, along with Robert John Russell, heads a research project called “VAI: Virtuous AI? Cultural Evolution, Artificial Intelligence, and Virtue.” The project is funded by the John Templeton Foundation.
We’ve just put together a CTNS Research Brief, “Hopes and Hazards of Artificial Intelligence.” Click and read it. There you’ll find what we think are the key questions we should ask about the challenges AI poses.
1. How realistic is the transhumanist anticipation of the Singularity?
2. Why do our technological leaders fear a global takeover by Superintelligence?
3. What kind of damage could bad actors wreak, and what cybersecurity guardrails might mitigate it?
4. What are the hopes and hazards surrounding AI and our planet’s ecosphere?
5. Is the already voiced fear that AI will eliminate well-paying human jobs realistic?
6. Should educators, editors, and script writers incorporate or shun the products of Generative AI?
7. As AI becomes ubiquitous, should we fear an increase in uncontrollable misinformation and even disinformation?
8. Can AI help human individuals become more virtuous?
9. Is it realistic to forecast that AI will develop selfhood and a sense of moral responsibility?
10. What contribution to the public discussion of AI might churches and other religious organizations offer?
In this Patheos series on Public Theology, I frequently post on Artificial Intelligence along with Intelligence Amplification in order to bring theological resources to bear on discourse clarification and worldview construction. In this case, I simply recommend you click on this AI primer, Hopes and Hazards of Artificial Intelligence.
More Adventures in Hopes and Hazards of Artificial Intelligence
Ted Peters (Ph.D., University of Chicago) is a public theologian directing traffic at the intersection of science, religion, and ethics. Peters is an emeritus professor at the Graduate Theological Union, where he co-edits the journal Theology and Science on behalf of the Center for Theology and the Natural Sciences in Berkeley, California, USA. In 2019 he edited AI and IA: Utopia or Extinction? (ATF Press).
Peters recently co-edited Astrobiology: Science, Ethics, and Public Policy (Scrivener 2021) as well as Astrotheology: Science and Theology Meet Extraterrestrial Intelligence (Cascade 2018). He also co-edited Religious Transhumanism and Its Critics (Lexington 2022) and The CRISPR Revolution in Science, Ethics, and Religion (Praeger 2023). Peters is author of Playing God: Genetic Determinism and Human Freedom (Routledge, 2nd ed, 2002) and The Stem Cell Debate (Fortress 2007). See his website [TedsTimelyTake.com].