Over at his excellent blog on technology and ethics, “The Frailest Thing,” Michael Sacasas argues that “the cyborg discourse” is useless. I would like to argue that it can be useful.
Sacasas’s post is a response to Ben Tarnoff and Moira Weigel’s critique of “tech humanism.” According to Tarnoff and Weigel, the proposals of tech humanists (who are recovering tech utopianists) don’t go far enough. Calls for better—i.e., not manipulative—design will simply lead to profitable improvements rather than “meaningful reform.” Asking whether these improvements will be more human shifts the focus from penultimate concerns about better technology to ultimate concerns about what it means to be a human being.
And this is Tarnoff and Weigel’s deeper criticism of tech humanism—its vision of humanity. The desire of tech humanists “to align humanity and technology,” Tarnoff and Weigel claim, “is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.”
An alternative vision, which Tarnoff and Weigel say is “both truer to the history of our species and useful for building a more democratic future,” understands humans as beings “whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine—as ‘cyborgs.’” Technology needs to be understood as an integral part of what it means to be human, and therefore “the power to shape how we live with technology should be a fundamental human right.”
Sacasas claims this cyborg discourse is useless “because it gets us nowhere”:
By itself it offers no practical wisdom. It offers no critical tools to help us judge, weigh, or evaluate. We’ve always been cyborgs, you say? Fine. How does this help me think about any given technology? How does this help me evaluate its consequences?
Like tech humanism, or its antecedent tech utopianism, the cyborg discourse by itself is limited—and, as Sacasas notes, it is as capable of being appropriated, absorbed, and abused as any other discourse.
But I do think it is useful. As I noted in a previous post (citing Ron Cole-Turner), technology played a significant role in human evolution—enabling us to become human as well as more human. The evolutionary cyborg narrative gives us access to past wisdom about how humans and technology have evolved together, which can inform current wisdom about how we may best evolve with new technology. The cyborg narrative also helps us think about how an intentional and integrative relationship with technology may shape narratives about human futures.
In her The Future: A Very Short Introduction, Jennifer Gidley identifies two contrasting Enlightenment visions for human futures: a technological vision, represented by Julien Offray de La Mettrie’s Man a Machine (1747), and a humanistic vision, represented by Johann Gottfried Herder’s This Too a Philosophy of History for the Formation of Humanity (1774). Gidley traces the trajectories of these visions into two competing narratives of transhumanism. One is the “techno-transhumanist claim that superhuman powers can only be reached through technological, biological, or genetic enhancement.” The other narrative is about the continuing evolution of human consciousness and intelligence, and the realization of “the superhuman potential already within us.”
For Gidley, these are the options before us:
We can continue to invest heavily in technotopian dreams of creating machines that can operate better than humans. Or we can invest more of our consciousness and resources on education and consciously evolving human futures with all the wisdom that would entail.
If the cyborg narrative is reduced to an extreme form of posthumanism, or a form of existence in which a human being is reduced to or superseded by a machine, then I would agree with Sacasas that “it is worse than useless.” But if the cyborg narrative can help inform narratives about a better human future and world, then it should be useful as we seek to answer the ultimate question: What are people for?