Last Friday, New York Times reporter Cade Metz surveyed a number of ethical issues related to artificial intelligence and asked, “Is Ethical AI Even Possible?”
I recently had the opportunity to participate in a two-day summit on digital ethics in Seattle. I’ve been involved in such discussions before, but what impressed me about this event was (1) the increasing clarity of agreements around a number of generally accepted but specific ethical concerns, and (2) the increasing clarity about areas where disagreements are likely to persist (economic, political, cultural, etc.).
It helped that the summit opened with a keynote address by Ryan Calo, a law professor at the University of Washington School of Law and co-director of the Tech Policy Lab. Calo, who emphasizes the limits of ethics, encouraged us to focus on universal but particular agreements that could become regulations.
As we interacted with ethical statements that will appear in the forthcoming book We the People: A Guide to Digital Ethics for People, Organizations, and Robots of All Kinds by Peter Temes and Florin Rotar, I thought of the emerging consensus around AI ethical concerns that Luciano Floridi and others have identified. Some of these concerns have been raised previously, and some are new (with AI, what’s new is explicability).
It is encouraging that agreement may be possible about many important issues related to data collection and algorithmic control. But these issues, as important as they are, are penultimate concerns. As we make progress toward respecting individual attention, autonomy, and agency and making our society more equitable and just, further thought must be given not only to what is feasible but also to what is desirable.
Floridi calls this “soft ethics.” Shannon Vallor and others address this space within the concept of “global ethics.” Full consensus about ultimate values is—at least in the world we know—impossible. That doesn’t mean we shouldn’t seek it; it means we shouldn’t expect it.
As I participated in these ethical discussions, and we clarified points of agreement and disagreement, I began to wonder how we could document and describe our disagreements in our ethical frameworks. As our technological powers grow and accelerate, we need to be clearer and more transparent about our beliefs, individually and institutionally. As Byron Reese points out in his book The Fourth Age, “experts disagree not because they know different things, but because they believe different things.”
Wouldn’t it be wise for us to be more deliberate about creating space within our ethical discussions and frameworks where we can identify and disclose ethical disagreements as we seek shared values?
To create and live into a shared vision of the future, we need to be clear about—and explore together—our diverse beliefs and hopes. Ultimately, individually and collectively, we are constructing and affirming different eschatological narratives that must be productively and peacefully coordinated.