Humanized Machines and Mechanized Humans

July 30, 2023

There have been times over the past few years when I’ve gotten the deeply unsettling feeling that I was observing society around me careening toward some sort of catastrophe, as if in slow motion yet simultaneously at breakneck speed, and that although I could see it coming, I could do nothing to stop it. I started having that feeling while seeing a certain famous narcissist attempt to use governmental power to maintain (and now to regain) that power, fueled by a seemingly undying cult of personality. And more recently, the same uneasy feeling has returned on seeing the Frankenstein’s monster paradoxically named “Artificial Intelligence” – specifically of the currently much-hyped “generative” kind – being unleashed into the world, with alarmingly little thought given to the havoc it may wreak.

Blurring the Human/Machine Distinction

A few recent warnings by astute observers closer to the phenomenon than I am have helped to gel my creeping unease into some measure of coherent thought. Writing in The Conversation, philosophy and ethics professor Nir Eisikovits helpfully points past the more fantastical imaginings about potential AI sentience to the underlying, probably truer and definitely more present danger: the extent to which post-industrial human societies already attribute human-like qualities like sentience to our technological tools.

Indeed, our habituation to anthropomorphizing our technology has come so far so fast that it’s easy to forget that when answering machines were a novelty a few decades ago – not long in historical terms, and within the lifetimes of most adults today – it was not uncommon to hear people express reluctance to “talk to a machine.” If such complaints sounded a bit curmudgeonly then, they sound quaintly antiquated now that talking not only to but with machines has become a commonplace feature of post-industrial daily life.

Eisikovits gestures toward the social and psychological dangers of anthropomorphizing technologies that catch on faster than society’s ability or willingness to build guardrails against their more malicious uses. While science fiction, as Eisikovits observes, has long primed us to imagine AI attaining consciousness and turning on its human creators, its manipulative use by humans against other humans – enabled by our tendency to anthropomorphize – is a far more imminent danger.

To this observation I would add that we’ve primed ourselves in the other direction as well: at the same time as we’ve applied humanizing language to human-made machines and algorithms, certain mechanizing language has also crept into popular parlance as metaphors for human experience. Tips for accomplishing day-to-day tasks are called “life hacks”; having the energy to attend to something is referred to as having “bandwidth”; “hashtag” has extended its usage from internet punctuation to spoken slang. From a strictly linguistic standpoint, such semantic drifts are perfectly natural ways of making sense of our surroundings using familiar reference points, which is an essential function of language. And yet, on a psycho-social level, they may be surface manifestations of a more perilous drift into increasingly blurred distinctions between ourselves and our tools, between (objectified) people and (personified) things.

Voices of Caution

All this priming and blurring has set the scene for the present onslaught of experimental interactions with chatbots, producing everything from whimsical images to unsettling conversations to a sinister potential for mass deception. I’ve observed much of it with a feeling of incredulous horror: are we so utterly heedless of every dystopia ever imagined? After all, dystopian writing is meant to serve as a prophecy of sorts – not in the conventional sense of the clairvoyant’s direct prediction, nor the biblical sense of a message from God, but as a creatively imagined warning of the dehumanizing path we’re traveling and where it may lead if left unchecked.

I see a glimmer of hope in a small but significant chorus of more direct warnings coming from some of the very makers and one-time champions of the present monster, though I worry it may be too little too late. An instantly famous open letter, signed by an impressive list of tech entrepreneurs and over 30,000 others, calls for a six-month pause in the development of more powerful AI systems. I’m skeptical that this would be a sufficient timeframe to establish necessary safeguards against foreseeable large-scale risks, but at least there are a sizeable number of credible voices (not to mention the letter’s publisher, the Future of Life Institute, whose existence gives me more hope than that of the letter itself) insisting that it’s important to try. And surely a six-month pause would be preferable to a headlong rush into world-altering, possibly even world-ending, developments that could very soon outpace human capacity to control them.

The direct warnings from modern-day Dr. Frankensteins go back at least to 2014 but have more recently grown to keep a near-steady pace with the rapid-fire evolution of generative AI itself – while at the same time carrying echoes of the much older literary-cinematic genre of dystopia, in which fantastical scenarios often point symbolically to the all-too-present dangers of human technocratic arrogance. While some of the most flamboyantly apocalyptic doomsday scenarios could be a strategic distraction to keep regulation of existing technologies at bay, the more sincere among these quasi-prophetic cautionary figures could perhaps be called clairvoyants in a more literal sense: people who see clearly enough to name specific dangers.

The Beast Unleashed (And the Beast is Us)

Any one of the dangerous potential (and in some cases actual) uses of generative AI named in recent months should be sufficient to give serious pause: the liar’s dividend that makes it easy to exploit the confusion created by the very existence of deepfakes; floodgates of misinformation opened, with endless real-world implications; the capacity for horrendous mistakes, or simply for all-too-efficient mass killing, in military use; the exacerbation of already existing systemic biases and socio-economic inequalities; the easy enablement of the spread of prejudice and the facilitation of hate crimes; and on and on.

The common denominator in all these dangers is human use. The real and imminent danger is not so much in AI’s capacity to become maliciously conscious and turn on humans as it is in the human capacity to turn its power on ourselves – or more precisely, on each other. It should surprise no one who has spent any time on the internet that chatbots that derive their “knowledge” (such as it is) from the full breadth of extant online text might turn to childish insults when their own veracity is called into question, or that savvy users could and would find ways to circumvent safeguards against their use for malicious purposes – to say nothing of the use of technological advances for state-sponsored violence with fewer and fewer human consciences to overcome.

For all its good, bad and ugly utilitarian uses, perhaps the most helpful function of AI in a moral sense (ironically similar to that of dystopian fiction) is as a mirror showing us our worst tendencies, many of which have long been exacerbated by the casual anonymity of online interactions. Helpful, that is, as long as we view it with a critical eye rather than using it as a permanent crutch to avoid critical thinking. Like any skill, our capacity for critical thinking can be severely weakened by prolonged lack of practice, as Eisikovits warns in a more recent article:

So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process.

The Los Angeles Times’ Jane Rosenzweig puts it even more starkly:

[I]f we no longer value doing our own writing — if every time we open a Google or Word document, we’re prompted to save time by turning to the bot — we may get to the point when we don’t know how to think for ourselves anymore. Even if we don’t lose our jobs to AI, we’ll lose what matters about them….

While the rollout of writing assistants is inevitable, our relationship to them, no matter how much tech companies suggest otherwise, is not. If we wave that magic wand uncritically, we risk outsourcing not just the mundane but the meaningful.

The sort of loss described by writers like these is one step in the direction of a posthuman dystopia more realizable than an evil robot apocalypse, partly because there are humans who intend it to be realized. Whether on the scale of a human-induced evolutionary advancement beyond Homo sapiens, or simply of taking technological development and optimized productivity as ends in themselves without regard for human dignity, such visions sound a lot like what Pope Francis has called the technocratic paradigm: the blind, irrational faith that our own technological innovations will somehow save us from the messes our use of them has made – a faith sustained by an equally blind belief in the intrinsic goodness of human-made technological “progress”.

The Prophet as Luddite

It is against such Babel-like presumption that prophetic warnings, both religious and secular, are needed. The idea of the “prophetic voice” has, ironically, often been associated with an ideology broadly called “progressivism” – which may be fitting, in a superficial sense, insofar as self-described “progressives” are concerned with human dignity and therefore social justice. But there is also a sense in which a prophet must sometimes be a sort of Luddite (a term that after all has its origins in a labor justice movement). Not necessarily to call for a given technological genie to be put wholly back into its bottle – admittedly an unlikely if not impossible prospect – but always, and repeatedly, to bring morality to bear on its development and use. To tell its creators, potential regulators, and users – contra Mark Zuckerberg’s famous motto – to slow down and stop breaking valuable things: human dignity, creativity, critical thinking, democracy, social trust at all levels, our mental and psychological health and overall grasp of reality both individually and societally, a social fabric already dangerously frayed. To remind us to resist the cultural currents that would humanize objects and objectify humans. To insist that the use of any tool (and, it bears reminding, it is only that, not a person nor a god) need not and should not be as cruel as our unchecked worst inclinations could make it – and that people must never be sacrificed on the altar of progress.

I must acknowledge that the lines can blur strangely even between opposite ends of the spectrum, from starry-eyed technophiles to skeptical Luddites (prophetic and otherwise), with various shades of ambivalence in between – and hence some possibility of error in my categorization of the relevant voices. But it’s safe to say that the authors of a recent op-ed in The Hill are pitching their tents on the side of human dignity contra technocracy by naming a worst-case AI scenario as “a technocentric world where the blind pursuit of AI growth and optimization outweighs the imperative for human flourishing.”

They later elaborate:

The worst-case scenario about AI isn’t about AI at all. It’s about humans making active decisions to pursue technological growth at all costs. Both AI doomer-speak and AI utopia-speak use the same sleight of tongue when they anthropomorphize AI systems. Moral outsourcing is insidious; when we ask whether “AI will destroy/save us,” we erase the fact that human beings create and deploy AI in the first place. Human-like interfaces and the allure of data-driven efficiencies trick us into believing AI outputs are neutral and preordained. They are not.

This insight echoes that of a solidly established herald of human dignity contra technocracy, Pope Francis, to whom I’ll give the final word here (from Laudato Si’ 107):

We have to accept that technological products are not neutral, for they create a framework which ends up conditioning lifestyles and shaping social possibilities along the lines dictated by the interests of certain powerful groups. Decisions which may seem purely instrumental are in reality decisions about the kind of society we want to build.

"The painting is from 19th century Poland, so calling him "woke" literally has no meaning. ..."

Ash Wednesday in Portugal
"Why is the priest in white vestments? Is he woke?"

Ash Wednesday in Portugal
"Perhaps you could go out in public with your Catholic symbols in the company of ..."

Marked with the Sign of Faith
"The case of the African Bishops is "different." "Special."No. What is the case is that ..."

A personal reflection on Fiducia Supplicans

Browse Our Archives