There is a major new development in online technology, and it isn’t the metaverse, which is floundering. It’s ChatGPT, a chat system that uses artificial intelligence, trained on the vast pool of information on the internet, to generate answers to questions in human-sounding language.
The abbreviation stands for “Generative Pre-trained Transformer,” referring to a technology that pushes machine learning and communication to an extremely sophisticated level. ChatGPT was developed by OpenAI, a San Francisco company that has been valued at $29 billion since the chatbot’s release; a free version is available on OpenAI’s website.
Thus far, most discussion of this technology has focused on its capacity to facilitate cheating. Students can type in their school assignments and paper topics, and ChatGPT can do their homework for them. Scholars can generate scientific articles. Pastors can ask ChatGPT to write their sermons.
But it has other potential applications as well. Doctors, for example, could type in a patient’s symptoms and print off a diagnosis and course of treatment. That raises the question: why do we need doctors with six years of med school? Blue-collar workers have long faced the prospect of being replaced by automation; now white-collar “knowledge-based” workers might learn what that’s like. But ChatGPT also gives users access not just to information, as in a list of unconnected hits from a Google search, but to broad-based, comprehensive, digested, and well-expressed knowledge.
Some observers go much further, seeing this application of artificial intelligence as a milestone of human history, for better or worse, on the scale of Gutenberg’s invention of the printing press and the Enlightenment. The Wall Street Journal has published a long opinion piece entitled “ChatGPT Heralds an Intellectual Revolution,” with the deck “Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.” The authors are Henry Kissinger (no less), former Google CEO Eric Schmidt, and MIT dean Daniel Huttenlocher. Many of their ideas are developed in their book The Age of AI: And Our Human Future, published in 2021, before the advent of ChatGPT, which now seems to make their projections real.
The article is unfortunately behind a paywall, but here are excerpts:
A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. . . .
ChatGPT, developed at the OpenAI research laboratory, is now able to converse with humans. As its capacities become broader, they will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society. . . .
Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the beginning of the Enlightenment. . . . Enlightenment knowledge was achieved progressively, step by step, with each step testable and teachable. AI-enabled systems start at the other end. They can store and distill a huge amount of existing information, in ChatGPT’s case much of the textual material on the internet and a large number of books—billions of items. Holding that volume of information and distilling it is beyond human capacity. . . .
Enlightenment science accumulated certainties; the new AI generates cumulative ambiguities. Enlightenment science evolved by making mysteries explicable, delineating the boundaries of human knowledge and understanding as they moved. . . . In the Age of AI, riddles are solved by processes that remain unknown. This disorienting paradox makes mysteries unmysterious but also unexplainable. Inherently, highly complex AI furthers human knowledge but not human understanding—a phenomenon contrary to almost all of post-Enlightenment modernity. Yet at the same time AI, when coupled with human reason, stands to be a more powerful means of discovery than human reason alone.
Here is the killer paragraph, as far as I am concerned (emphasis mine):
The arrival of an unknowable and apparently omniscient instrument, capable of altering reality, may trigger a resurgence in mystic religiosity. The potential for group obedience to an authority whose reasoning is largely inaccessible to its subjects has been seen from time to time in the history of man, perhaps most dramatically and recently in the 20th-century subjugation of whole masses of humanity under the slogan of ideologies on both sides of the political spectrum. A third way of knowing the world may emerge, one that is neither human reason nor faith.
In my opinion, the specter raised by Kissinger and company is at least somewhat overblown. Yes, since this application of artificial intelligence actively learns, the fund of knowledge upon which it draws must be fed. The computer cannot perform experiments or do new medical research. We will still need scientists and physicians. ChatGPT can pull together and even process the current state of knowledge about a given topic, and this can certainly be extremely useful. But I can’t see it adding to the current state of knowledge, much less challenging it.
Still, we are right to be concerned. The article stresses that the sources of the knowledge conveyed in this way are not known to the users. That is, the system synthesizes the contributions of many different sources, without attribution, which also means that those contributions cannot be checked. People will believe the machine by virtue of its “authority,” without understanding why it says what it does. The authors worry that this technology will undermine liberal democracy. And I certainly dread what it could do to education by minimizing, even more than they already are, knowledge and reason, thinking and writing.
And I’m curious what this could mean for theology. “A resurgence in mystic religiosity”? “Group obedience to an authority”? “A third way of knowing. . . that is neither human reason nor faith”? Maybe this is how the Catholic Integralists can take over.
For tomorrow’s post, I plan to take ChatGPT on a test drive. I’m going to ask it to generate a Lutheran sermon. We’ll see if it would pass doctrinal review.