Trust in Technology

August 15, 2019

At my institution we’re organizing a faculty research group to explore the impact of artificial intelligence from a variety of disciplinary perspectives—Business, Computer Science, Education, Library and Information Science, Philosophy, Psychology, and Theology. My colleagues and I will write more about that in future posts.

One of the first things we did was watch and discuss the documentary Do You Trust This Computer?, which is largely an inventory of fears related to AI. By the end of the film, it’s pretty clear—especially after hearing so much from the creator of Westworld—that the expected answer is “No!” From the opening invocation of Frankenstein’s monster (“You are my creator, but I am your master”) to Elon Musk summoning the image of an immortal dictator at the end, we are left mistrusting our artifacts, the corporations that profit from them, the governments that should be regulating them, and even ourselves. There are, it seems, no trustworthy agents.

Which isn’t terribly helpful, for it leaves us paralyzed by fear in an ethical and narrative vacuum. In a world being radically transformed by digital and networked technologies, which are rapidly reconfiguring our lives and relationships, in whom and what do we trust?

According to a new report from the Pew Research Center, Americans’ trust in technology companies is declining:

Four years ago, technology companies were widely seen as having a positive impact on the United States. But the share of Americans who hold this view has tumbled 21 percentage points since then, from 71% to 50%.

Negative views of technology companies’ impact on the country have nearly doubled during this period, from 17% to 33%.

Trust in most institutions is declining (with libraries being an interesting exception), but the loss of trust in technology companies outpaces them all.

The Great Hack, a documentary about how Cambridge Analytica used data acquired from Facebook in an attempt to influence voters through psychological manipulation, offers a case study in violated trust. Even if you don’t trust the documentary itself, and are inclined to dismiss it as its own exercise in psychological manipulation through mis- or disinformation, the concerns it raises about data rights should not be ignored. Cambridge Analytica has ended as a corporate entity, but Facebook is still with us, and the apologies from social media companies keep coming.

Data is the foundation of our society’s future. All the exponential technologies, such as AI, that are shaping our individual and collective lives depend on data, and much more social, legal, philosophical, and theological attention should be given to its collection, analysis, and use. But that’s another topic I’ll return to in future posts.

Google’s Chief Decision Scientist Cassie Kozyrkov recently reminded us that “no technology is free of its creators … all technology is an echo of the wishes of whoever built it.” And, as my colleague Bruce Baker has been articulating in his series on sin, we have good reasons not to trust our wishes entirely. It follows that we shouldn’t fully trust our creations, which inherit our flaws. It may have taken the events of 2016 to reveal to some the nature of our flawed and fragmented world—which technology can exacerbate—but those attentive to history or rooted in moral and faith traditions should be less surprised.

“The Red-Hot Boat Has Turned to Everlasting Ice” (Hotel Murano, Tacoma, Wash., 2019)

So far in the history of our species, trust has been an adaptive advantage, and we’ve managed to scale it: from our individual abilities to survive, to dependence on families and groups, to broader social and cultural constructions. In the forms of cities and civilizations, we have trusted distributed multi-agent systems for millennia. Trust has always been fallible and provisional, continuously renegotiated and refined. A hopeful trajectory would expect this evolution of trust to extend globally, and some religions, such as Christianity, include this expectation in their eschatologies. This provides some of us with a foundation for faith and hope in the future—including our technological future.

In the present, pragmatically, (re)establishing trust isn’t impossible. “A classic way of gaining trust,” says Luciano Floridi, “can be summarized in three words: transparency, accountability, and empowerment”:

That means transparency so that people can see what you are doing; accountability because you take responsibility for what you are doing; and empowerment because you put people in charge to tell you if something you did was not right or not good.

Information—true information, not misinformation or disinformation—reduces the need for trust. But the transfer of information is a process that requires trust, and our evolutionary history has depended on it. According to Mark Johnson, the moral imagination begins with biological needs related to survival, expands to include social and cultural needs such as care and trust, and then takes on the search for meaning and fulfillment.

For Johnson, religion is unnecessary. Moral development is our evolutionary inheritance and mandate. For people of faith, there is a greater creative mandate. Our creation narratives introduce frameworks of care in which we learn how and whom to trust. The ends of our narratives—especially those informed and transformed by an apocalyptic imagination—further help us cultivate trust for the future in the present.

