A few weeks ago, I posted on the limits of digital ethics. I wrote that post after participating in a digital ethics working group in Seattle. In our discussions, I observed agreement around common but specific concerns, as well as opportunities to clarify areas where disagreement seems inevitable.
As our keynote speaker Ryan Calo pointed out, ethical approaches are limited. We cannot reconcile all ethical systems and positions. The recent controversy surrounding Google’s establishment of an Advanced Technology External Advisory Council—specifically over its membership, which led to the dissolution of the planned council the week after it was announced—reveals how challenging ethical discussions about technology can be in (to use Google’s words) “the current environment.”
Joanna Bryson, one of the appointees to the council, explains that at its first meeting the group was "expected to 'stress test' a proposed face recognition technology policy." It was "an advanced technology external advisory council, not an ethics board," she stresses: "What Google wanted from ATEAC was to 'stress test' the policy they'd come up with internally."
The potential value of the council, Bryson says, was that it “was meant to think differently than Google—to be people Google would never have in house.”
A number of experts have offered sound advice about how such a debacle might be avoided in the future, beginning with more transparency about selection criteria and expectations for advisory groups. In addition, Ellen Pao stresses the importance of internal work that must happen first: "Bringing people who are more reflective of the world we live in should have happened internally before trying to put together an external group." And Joy Buolamwini argues for more radical inclusion of external voices, centering "the views of those who are most at risk for the adverse impacts of AI."
All of which points to the need for an institutional ethical infrastructure that connects organizational principles, internal people and operations, and diverse external stakeholders and advisors. Ethics must be embedded throughout an organization and connect with the world the organization will impact.
Consensus around many important AI issues is possible: See, for example, the EU ethics guidelines for trustworthy AI, recently released by the High-Level Expert Group on Artificial Intelligence. Clarifying these guidelines, and exploring how to operationalize and legislate them, will do much for the common good.
But dialogue about where there are ethical disagreements needs to happen as well. Full reconciliation may not be possible, but it should remain our aspiration. A simple step toward this goal would be to begin documenting and describing our ethical disagreements. Somehow, as I noted in my previous post, we need to create a space within emerging ethical frameworks to disclose and identify where there are disagreements—and better understand why they exist.
As I've stated previously, creating a shared vision of the future—which we desperately need—requires us to clarify and explore together our diverse beliefs and hopes. We will have different and even diverging eschatological narratives, but we must find ways for these to be related and pursued productively and peacefully.