In “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence,” Julia Powles and Helen Nissenbaum caution against limiting our attention and imagination to the problem of computational fairness. “Bias is real,” they acknowledge, “but it’s also a captivating diversion”:
Artificial intelligence evokes a mythical, objective omnipotence, but it is backed by real-world forces of money, power, and data. In service of these forces, we are being spun potent stories that drive toward widespread reliance on regressive, surveillance-based classification systems that enlist us all in an unprecedented societal experiment from which it is difficult to return. Now, more than ever, we need a robust, bold, imaginative response.
Broader concerns are raised in the Pew Research Center’s new report on “Artificial Intelligence and the Future of Humans.” While 63% of the 979 experts Pew surveyed believe AI “will enhance human capacities and empower them,” and that “most people [will] be better off than they are today,” the experts also raised a number of concerns. These include:
- the loss of human agency, when choice is “automatically ceded”;
- data abuse in complex systems designed for profit and power;
- job loss resulting in “economic divides and social upheavals”;
- reduction of cognitive, social, and survival skills due to deepening dependencies on automated systems;
- mayhem caused by “autonomous weapons, cybercrime, and weaponized information.”
Some of the solutions experts suggested include:
- global digital cooperation and collaboration around “common understandings and agreements”;
- values-based systems with “policies to assure AI will be directed at ‘humanness’ and common good”;
- prioritizing people to “help humans ‘race with the robots.’”
Pew reports such as this one can help us understand many important issues, but I’m not sure this report goes far enough in helping us cultivate “a robust, bold, imaginative response” to—or better narratives for—AI.
The experts Pew canvassed included “technology pioneers, innovators, developers, business and policy leaders, researchers and activists,” but it’s not clear whether many (or any) creative artists or religious leaders—people who have a formative role in cultivating and curating imagination and narratives—were included.
Last weekend I hosted a technology and faith unconference at the Seattle Pacific University Library, and many of our discussions were about AI. (The event was covered by GeekWire, but our conversations were much less sensational and speculative than this article suggests.) For a full day we discussed faith perspectives on: ethics and values; the design and use of technology; understandings of human personhood; power asymmetries; digital divides; and other topics. Drawing from the wisdom of various faith traditions, we explored questions of ultimate purpose, religious narratives, and experiences of divinity. For me, and for many of those present, these conversations demonstrated the value of—and need for—explicitly bringing faith into more public discussions about technology. This is a goal of AI and Faith, which was a sponsor of the unconference.
In a recent post I suggested the value of the apocalyptic imagination, a narrative framework—involving some form of faith—for thinking about artificial intelligence. Call it AI for AI.
Over at the Anxious Bench, Chris Gehrz recently reflected on how Advent is “the most apocalyptic time of the year.” Advent is the beginning of the Christian liturgical year, and it marks the beginning of the New Creation and the Christian apocalyptic imagination. The coming of Christ into the world inaugurates, as Charles Williams puts it, “a conversion of time” through human individuals and institutions—and, one could add, human technologies such as AI. This transformation of reality concerns the continuing manifestation of the kingdom of God proclaimed by Jesus, which Williams called, “apocalyptically, the City”—i.e., the New Jerusalem.
At the end of his Christmas poem “For the Time Being,” which was inspired by Williams’s apocalyptic image of the City, W. H. Auden takes readers to the other side of Advent when “we have sent Him away.” We are “Back in the moderate Aristotelian city … where Euclid’s geometry / And Newton’s mechanics would account for our experience.” But after Advent the world seems smaller: “The streets / Are much narrower than we remembered; we had forgotten / The office was as depressing as this.”
Advent leaves the memory of promise unfulfilled—a revelation of a reality not yet fully realized:
We can repress the joy, but the guilt remains conscious;
Remembering the stable where for once in our lives
Everything became a You and nothing was an It.
Fulfillment depends on a second coming, and Auden leaves us with “the Time Being to redeem / From insignificance.” He calls us to seek Him in “the Kingdom of Anxiety” with the promise that we “will come to a great city that has expected [our] return for years.”
As that City comes to us from the future, informing our narrative about the future, we may participate in creating the future with hope. And we can imagine a future that includes AI.