Kate Crawford and Vladan Joler recently created a large map and long-form essay titled “Anatomy of an AI System.” This project aims to represent the extent of the stack of resources—technological, natural, and human—involved in the production and operation of an Amazon Echo device.
Their diagram and multi-part essay show how, after a consumer issues a brief command to this “impassive, smooth, simple, and small” cylinder, “a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, prediction, and optimization.”
All of this brings to mind Albert Borgmann’s device paradigm, which points to the invisible and complex technologies beneath the visible surface of the products we consume. But “Anatomy” also highlights another layer of complexity introduced by networked digital technologies: an enveloping layer of surveillance, including data mining and machine learning, which, Crawford and Joler write, extracts “value from all kinds of human activities.”
Crawford and Joler point out how we, as human agents, “are visible in almost every interaction with technological platforms. We are always being tracked, quantified, analyzed and commodified.” “The Echo user,” they say, “is simultaneously a consumer, a resource, a worker, and a product.” We are not simply using these devices; we are becoming integrated with them.
Borgmann urges us to be more aware of the dynamics and nature of our technological culture. Crawford and Joler’s project provides an example of how challenging this is with proprietary AI-powered systems. If we are unaware of and inattentive to the forces beneath the surfaces of our enveloping digital devices, we risk having our agency compromised by autonomous agents.
Borgmann argues that the device paradigm reduces substantial things and practices to superficial commodities and consumption, thereby reducing human beings to consumers. Within the digital device paradigm, we can be reduced further still: to sources of information for learning machines and new markets.
As Crawford and Joler write near the end of their essay, “The new gold rush in the context of artificial intelligence is to enclose different fields of human knowing, feeling, and action, in order to capture and privatize those fields.”
Mariarosaria Taddeo and Luciano Floridi observe that as AI “matures and disseminates, [it] blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way.” If designed and used improperly, they warn, “AI may threaten our fragile, and yet constitutive, ability to determine our own lives and identities and keep our choices open.”
We can ask, “Alexa, what time is it?,” but we should be asking ourselves much deeper questions about the information revolution we’re living through and shaping. How is autonomous agency shaping our goals and nudging our behaviors? How do we want these technologies to enhance—and not merely commodify—our individual and collective attention and agency?