A Dead Metaphor and Reproductive Organs: Timely Reflections on Technology and Human Agency

October 4, 2019

by Robert Doede

History teaches us that a civilization's most advanced technologies typically become its root metaphors. Root metaphors are the metaphors by which we understand everything else. As you might expect, then, the West's privileged metaphor today derives from the computer, the digital information processor. Metaphors, as we know, can expand understanding: by bringing two separate ideas into a working relationship in close linguistic proximity, metaphors enable ideas to semantically copulate, producing literal untruths that have the paradoxical potential of yielding new insight. To a civilization with a robust root metaphor, everything looks more intelligible, almost self-evident, in its terms. However, despite their capacity to expand apparent intelligibility, it is always important to remember that not all metaphors expand understanding—that literal untruths, when taken as literal truths, may result not in insight but in deep misunderstandings. Let me give you a case in point from my area of specialty, philosophy of mind.


I

During, but mostly after, the Second World War, the West's leading edge of technological advancement shifted from the analog information processing systems used in the war to digital information processing systems. Predictably, it wasn't long before digital information processing was seen everywhere, not only in the domains of engineering and communications where one would expect to see it, but also in chemistry, biology, and psychology. In the late 1950s and early 1960s, when behaviorism and identity theories of mind were being rigorously challenged, the computer metaphor began to make inroads into the philosophy of mind. By the late 1960s, the computer metaphor had completely revolutionized philosophy of mind, transforming it from a largely a priori sub-discipline of metaphysics into one of the contributors to a converging cohort of disciplines (cognitive psychology, AI, the information and communication sciences, etc.) that make up what today is called cognitive science.

Hilary Putnam's 1967 paper "The Nature of Mental States" explicitly deployed the metaphor of digital information processing/computation to demonstrate how the mind's operations are analogous to Turing machine-state transitions driven by algorithms.[1] Drawing into focus a distinction between hardware and software that had, up to this time, remained largely implicit and in the background, Putnam provided the impetus behind the early functionalists' mantra "the mind is to software as the brain is to hardware." This hardware-software analogy soon became the presupposition of cognitive science, and within a decade the functionalist mantra had transformed into "the mind is software and the brain is hardware." Today, it is almost impossible to be taken seriously in philosophy of mind if you don't construe your subject matter in digital terms.
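To make the Turing-machine analogy concrete, here is a minimal sketch in Python (my illustration, not Putnam's; the rule table is invented for this example) of machine-state transitions driven by an algorithm. On the picture Putnam made available, a mind's operations are exhausted, as this toy machine's are, by rules mapping a current state and a scanned symbol to a write, a shift, and a next state:

    # A toy Turing machine: its entire "behavior" is fixed by a rule table.
    # This illustrative machine inverts a binary string, then halts.
    RULES = {
        # (state, scanned symbol) -> (symbol to write, head shift, next state)
        ("q0", "0"): ("1", 1, "q0"),
        ("q0", "1"): ("0", 1, "q0"),
        ("q0", "_"): ("_", 0, "halt"),  # blank cell: nothing left to read
    }

    def run(tape: str) -> str:
        cells = list(tape) + ["_"]      # the tape, padded with one blank
        state, head = "q0", 0           # initial machine state and head position
        while state != "halt":
            write, shift, state = RULES[(state, cells[head])]
            cells[head] = write         # write
            head += shift               # shift
        return "".join(cells).rstrip("_")

    print(run("10110"))  # prints "01001"

Nothing in the rule table cares what the tape or the read-write head is made of; that indifference is precisely what the hardware-software distinction trades on.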

The more a metaphor gains currency and settles into a culture's social imaginary, the more likely it is to die as a metaphor only to be resurrected as an unremarkable literal truth (Nietzsche)—e.g., the legs of a table, the face of a clock. The computer metaphor has not been an exception to this tendency. Functionalism, the bedrock of cognitive science, was born through the death of the computer metaphor. Falling in line with orthodoxy within the precincts of cognitive science, the renowned Canadian cognitive scientist Zenon Pylyshyn confesses, "For me the notion of computation … is not a metaphor but part of a literal description of cognitive activity."[2] When the computer metaphor died in the rhetorical register to rise again literalized in the metaphysical register, a powerful new yet hardly discernible "It-Bit" dualism was inaugurated, one that has not only become the working assumption of cognitive science but has also, in the last few decades, underwritten the feasibility of the transhumanist project of mind uploading.[3]

Literalizing the computational metaphor means the human body and its brain are quite literally hardware—mere implementation devices for software minds. So construed, all that really matters for understanding human intelligence and replicating it is getting the right abstractions and getting the abstractions right: rules (algorithms) and symbols (data-structures). Accordingly, we share "cognitive DNA" with all digital computational systems, and replicating human intelligence is just a matter of finding serviceable hardware on which to run our digital patterns/programs. This picture provides the basis for fearlessly transferring epithets from computers to minds and vice versa: e.g., the now common claims that minds compute and computers think. Computers and minds are posited as members of the same genus, "informavores"; the "DNA" of both is information-processing software. Their hardware is different, but that is a difference that makes no difference. The brain's hardware is neurons; the computer's hardware is microprocessors. When two separate instances of hardware are running the same software, they are functionally equivalent/identical. If one is thinking or computing, so is the other, because at the software level they are the same thing: two token instantiations of the same program type. Moreover, since their state transitions are individuated not by physical substance but by function alone, their differences of hardware substrate fade into irrelevance.[4]
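As a rough sketch of that claim (again my illustration, with invented class names), consider two Python "substrates" computing one and the same function. Individuated functionally, by input-output profile alone, they count as token instantiations of a single program type:

    # Two token "machines" realizing one program type: the successor function.
    # Each computes it over a different "substrate": native machine arithmetic
    # versus a row of tally marks, loosely mimicking distinct physical realizations.

    class SiliconSubstrate:
        def successor(self, n: int) -> int:
            return n + 1                # native binary arithmetic

    class TallySubstrate:
        def successor(self, n: int) -> int:
            tally = "|" * n + "|"       # add one mark to a row of n marks
            return len(tally)           # read the result back off the marks

    # Same input-output profile, hence, for the functionalist, the same program;
    # the substrate difference "makes no difference" at this level of abstraction.
    chip, brain = SiliconSubstrate(), TallySubstrate()
    assert all(chip.successor(n) == brain.successor(n) for n in range(50))

The functionalist's wager is that nothing relevant to mindedness is lost in this abstraction; the worry pressed below is that the first-person, embodied dimensions of intelligence are exactly what it discards.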

Although ignoring the differences among hardware substrates might have no bearing on the "intelligence" we are willing to ascribe to information-processing machinery, when it comes to ignoring the so-called "hardware" of bodily substrates that are living, possessed of a first-person point of view, culturally groomed, and socially normed, things are not nearly so simple. If the human body is a mere hardware mechanism bearing no internal relations to the software it runs,[5] not only does the body fade out of the picture of human cognition, but the cultural context of forces in which the body is embedded, and which fundamentally shapes human cognition, becomes irrelevant as well. At the impersonal level of abstraction where functionalism gets its plausibility, it fails to mark the difference between instances of intelligent mechanical output and conscious intelligence—surely a difference that makes a difference. As was the case with its progenitor, behaviorism, functionalism faces massive implausibility when confronted with the first-person and intentional dimensions of human intelligence.

When functionalism's computer metaphor becomes its realist metaphysics, as it undoubtedly has in our cognitive sciences, abstractions are quickly reified and naturalized, embodiment is trivialized, and the differences between algorithms and persons begin to blur. Cognitivist ideology, grounded as it is in a literalization of computer metaphorics, has pushed philosophy of mind in a direction where what really matters is code, not context; isolated abstractions, not concrete relations; and information processing, not bodily engagements—i.e., toward a picture of intelligence capable of inspiring transhumanist dreams of post-biological mind uploading.

I conclude with brief reflections on technology, otherness, and human agency to highlight one of the most important subterranean constrictions of our humanity already taking place within the technoculture nurtured by the literalization of the computer metaphor.


II

What do I mean by otherness? Otherness, in the context of my reflections here, is what provides the friction necessary for embodied agency. Otherness is what is sufficiently different from the self both to resist its agency and to enable it. Technology, on the other hand, files away at the rough edges of otherness and thus remakes the world more to our liking. So while agency depends on otherness, technology systematically softens and diminishes otherness. You can see there is a problem in the making. As technologies proliferate and ubiquitously smooth over otherness, human agency cannot help but thin out in like measure. Although we currently remain agents in our designing and making of technologies, there will come a time when we hand these tasks over to technology itself, effectively obliterating otherness from our lives. Once we've developed technologies capable of recursive self-improvement—remember Good's quip in 1965 about humanity's last invention?—or produced fully immersive virtual reality, what becomes of our agency thereafter?[6] What becomes of our selves? Although this is a problem future generations will undoubtedly have to deal with, even now our technologies are quietly eroding our agency in ways that are not easily noticed.

If technology carpenters otherness into smooth mesh with our desires and wills (think of the internet of things!), and if our agency depends on reality's resistances and pushbacks, it is somewhat ironic that we celebrate and treasure our technologies for expanding our agency in the world. We must always remember that technologies can constrict our autonomy and progressively enslave us even when they are most effectively ramping up our efficiencies and pandering to our conveniences. All technologies actually both expand and constrict our agency to some degree, but not at the same level. Technologies may well expand our surface agency, the agency that devolves out of desiring to avoid inconvenient realities and deciding which technology to purchase for the job. But at a more important level, habitual commerce with such technologies can undermine the conditions conducive to the cultivation of character and deep agency, i.e., agency forged in the thick demands of, and trued to the relentless resistances of, reality's native otherness.

When technologies are mindlessly embraced as nothing other than neutral gadgets of convenience and efficiency, they will in the end have their way with us as we come to depend more and more on their services while losing more and more of our own capacities and competences. In grand Hegelian fashion, the servant will have usurped the erstwhile master's autonomy. Interestingly, this is exactly what Kevin Kelly, in his 2010 book What Technology Wants, tells us technology wants. Technology, he says, wants to proliferate by using our agency to bring more of it into the world: technology wants to make of us its reproductive organs (at least until we've developed it enough for it to go it alone).[7] Perhaps Kelly's reading of what technology wants is worrying only to those who still hang onto belief in the humanist self, i.e., the belief that human agency is not merely a temporary node of converging and ultimately impersonal forces, but a unified, personal agency that initiates actions of its own. Christians are among those who can't accept that their agency is dissolvable without remainder into impersonal forces, if only because the God whose image they believe they bear is the ultimate Agent, the Uncaused Causer of everything, having freely brought even His Other, finitude, into being.

We produce and use technologies to reduce the energy investments and unnerving temporal stretches separating human desires from their satisfactions. Unlike the tools and technologies typical of distant generations, which arose out of genuine human needs and survival necessities, today we design most technologies to cater to our entertainment and luxury demands with maximal efficiency, in a stepwise march toward the goal of ultimately reducing to zero the spatial and temporal distances between our wishes and their fulfillment. Anything that gets in our way, or places demands on us, or forces us to wait, or requires us to give of ourselves must be technologically amended. To paraphrase Max Frisch, technology has the knack of so transforming reality that we don't have to experience it.[8] To get the import of his point, just think about the looming otherness you'll encounter on a walk to destination X and compare it to the "samed," "curated," "tamed" otherness of a car drive to destination X (X = some place a few miles from your home). In the case of the walk, you face off with raw otherness that isn't contoured to your comforts and desires: things like heat or cold, rain or snow, insects, loads of time to wait through, vast spaces to traverse—the kind of otherness through which agency is both challenged and consolidated. In the case of the car drive, you are largely isolated from the external elements in your air-conditioned, GPS-ed, tinted-windowed, stereo-ed, metallic isolation booth, zipping along at 60 miles an hour. You are cocooned in a very weak otherness, buffered and softened through and through by human desire, intention, and design. Imagine now the same trip to X, but in a future when digital technology has virtually vanquished all otherness, where all otherness is designer "otherness," i.e., nothing more than simulated human will and desire. This is the post-material world of mind-uploading, where both your body and the world have become software programs. Surely Marc Andreessen nailed it a few years ago when he proclaimed, "Software is eating the world."[9]

This simple thought-experiment helps us see that technology's deep structure, its inner logic of erasing distance, time, and otherness, concludes in pure virtuality. Technological erasures of reality's roughage come with a cost. As technologies get better and better at softening fricative otherness, there is a corresponding thinning of the self (Hegel again!); as we are required to overcome less and less resistance, as reality's pushback is increasingly dampened, we begin to dis-integrate, losing the definition and intensities required for substantive personal identity and moral agency. As Wittgenstein declared in a related context, "We have got on to slippery ice where there is no friction and so in a certain sense the conditions are ideal, but also, just because of that, we are unable to walk. We want to walk: so we need friction. Back to the rough ground!"[10] Our use of technologies impacts our sense of agency, our sense of identity and responsibility, and thus all our relationships to alterity. If we fail to recognize, sooner rather than later, the important unintended consequences of technological advancement, human agency may well reach its climax in becoming technology's reproductive organs.

[1] Named for the renowned mathematician and computer scientist Alan Turing, the Turing machine is not a machine per se, but a model or method of computation—"a notional computing machine performing simple reading, writing, and shifting operations in accordance with a prescribed set of rules, invoked in theories of computability and automata" (OED Online).

[2] Zenon W. Pylyshyn, Computation and Cognition: Toward a Foundation for Cognitive Science (Cambridge, MA: MIT Press, 1984).

[3] "I think my mind currently is running on a kind of protein computer, and if exactly the same computation processes were implemented on a silicon computer I believe I wouldn't notice any difference," says transhumanist extraordinaire Nick Bostrom, founding Director of the Future of Humanity Institute at Oxford and Director of the Strategic Artificial Intelligence Research Centre, in an interview with Cronopis. Note that although the computer metaphor died in the cognitive sciences when they literalized it, it still lives on, to varying degrees, as a metaphor in the broader culture.

[4] Software is "substrate/hardware independent" in the sense that it is "multiply realizable." See Hilary Putnam, "Psychological Predicates," in Art, Mind, and Religion, eds. W. H. Capitan and D. D. Merrill (Pittsburgh: University of Pittsburgh Press, 1967), 37–48.

[5] To recognize non-contingent relations in the established hardware/software couplet would effectively undermine the ultimacy of the distinction between hardware and software.

[6] Good said, “the first ultraintelligent machine is the last invention that man need ever make.” I. J. Good, “Speculations Concerning the First Ultraintelligent Machine,” in Advances in Computers, Franz L. Alt and Morris Rubinoff, eds., Vol. 6 (New York: Academic Press, 1965), 31–88.

[7] Kevin Kelly, What Technology Wants (New York: Viking, 2010), 296.

[8] Max Frisch, Homo Faber: A Report, 1st ed. (New York: Mariner Books, 1989), 165–166.

[9] Marc Andreessen, "Why Software Is Eating the World," The Wall Street Journal, August 20, 2011.

[10] Ludwig Wittgenstein, Philosophical Investigations, §107.

About Dr. Robert Doede
Professor of Philosophy, Trinity Western University. BRE (Tyndale), MCS (Regent College), PhD (King's College). Dr. Doede is a professor of philosophy at Trinity Western University. He has published articles in a number of philosophy journals, including "Transhumanism, Technology, and the Future: Posthumanity Emerging or Sub-humanity Descending?" in Appraisal (Vol. 7, No. 3, March 2009, pp. 39–54), "Technologies and Species Transitions: Polanyi, on a Path to Posthumanity?" in the Bulletin of Science, Technology & Society (Vol. 31, No. 3, June 2011, pp. 225–235), and "The Pedagogy of Indirection" in Facing Challenges: Feminism in Christian Higher Education and Other Places, edited by Allyson Jule and Bettina Tate Pedersen (Cambridge Scholars Publishing, 2015). Recently Dr. Doede has also been interviewed on CBC Radio One and on Lorna Dueck's Listen Up TV regarding technology and education.
