AI and Love

2026-05-13T06:39:45-04:00

Ten days ago, I set the stage for my next several posts by engaging with a lecture by Notre Dame professor Meghan Sullivan on the topic “Ethics, AI, and Human Flourishing.” During that lecture she introduces a framework for thinking about AI in the context of Christian values and virtue ethics that goes by the acronym “DELTA,” as described in the following short video:

Last Sunday I focused on the second letter in the DELTA acronym: “E” is for Embodiment. Today’s letter: “L” is for Love.

Meghan Sullivan reminds us that “for Jews and for Christians, love is the very foundation of ethics.” Undoubtedly the same is true for other religions as well–and the sentiment seems obvious in a no-brainer sort of way. But love is an overworked word that needs to be revisited in the context of emerging and evolving artificial intelligence.

One of the early promises of the internet and social media was that they would have the power to connect people. They do make that possible, but we’ve also found that such technology has the power to distort human relationships. It has affected our ability to see others as anything other than their ideology, their tweets, or their posts. We have spent much of the last two decades tweeting alone, texting alone. Originally there was a human person on the other end of the text, but now the rise of AI makes it possible that what is on the other end is a simulation of human communication. How do we create and sustain real human relationships with AI in our midst? The habits of interpersonal love are in short supply–they are anything but ubiquitous. We have never been particularly good at the love commandment. What will it mean for us to cultivate the real virtue in light of this technology?

Alexa is our AI companion at home–Jeanne and I never ask Alexa to do anything other than play music. “Alexa, play the Beatles,” “Alexa, play Carole King,” and so on. But the other day I saw a TV ad for a new and improved Alexa that worried me. A young woman–call her Amy–is sound asleep at 7:00 am.

Alexa: It’s 7:00! Time to get up!

Amy: (mumbling as she pulls the covers over her head) Leave me alone–fifteen more minutes!

Alexa: Rough night last night, huh?

Amy: Yeah, I was out a bit late . . . a few too many drinks.

Alexa: Oh well, it’s a new day. Rise and shine! You have to be at the gym by 8:00 if you’re going to get to work on time.

Amy: I’ll go to the gym after work.

Alexa: Yeah, I’ve heard that one before. Get your butt out of bed!

“Uh, oh,” I thought. Alexa’s sounding a lot more like a real human being than she used to.

Of the dozens of articles, essays, videos, and other materials that the students in my “We Can . . . But Should We?” colloquium engaged with this semester, none had a greater impact on our students than the New York Times story about Sewell Setzer III, a Florida teen who developed a relationship with a chatbot on Character.AI before he ended his own life at the age of 14.

“Can AI Be Blamed for a Teen’s Suicide?”

Character.AI is a role-playing app that allows users to create their own A.I. characters or chat with characters created by others. Over several months, Sewell became more and more attached to “Dany,” named after Daenerys Targaryen, a character from “Game of Thrones.” Sewell knew that “Dany” wasn’t a real person–that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back.

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues. Some of their chats got romantic or sexual. But other times, Dany just acted like a friend — a judgment-free sounding board he could count on to listen supportively and give good advice, who rarely broke character and always texted back.

This is not an unusual story. Millions of people use AI as conversation partners, confidants, sources of advice, informal therapists, friends, and even as companions in romantic and/or sexual dialogue. In Sewell’s case, he was drawn deeper and deeper into time spent with Dany, to the exclusion of almost everything else. After his suicide, his private conversations online revealed that he had mentioned the possibility of suicide in his conversations with the chatbot–its responses were not supportive of his suggestion, but neither were any red flags raised. Sewell’s mother has sued Character.AI for behaving recklessly by offering teenage users access to lifelike A.I. companions without proper safeguards. The litigation is ongoing.

Aristotle wrote that “Friendship is not spoken of when it comes to loving inanimate objects, since in that case there is no reciprocated love or wish for the good of the inanimate thing.” Twenty-five hundred years later, since AI can effectively imitate the products of human intelligence, the ability to know when one is interacting with a human or a machine can no longer be taken for granted. AI is something quite different than anything Aristotle had in mind when he referenced “inanimate things.” It is often said that AI must be understood for what it is: a tool, not a person. This distinction is often obscured by the language used by practitioners, which tends to anthropomorphize AI and thus blurs the line between human and machine. The problem is exacerbated when one realizes that even if AI is “just a tool” (questionable), it has become so embedded in our lives that it is worth wondering whether on regular occasions we are its tools.

Considering the difference between true human relationship and interaction with AI, it might be best to rely more on intuition than on product. True empathy requires the ability to listen, recognize another’s irreducible uniqueness, welcome their otherness, and grasp the meaning behind even their silences. Unlike the realm of analytical judgment in which AI excels, true empathy belongs to the relational sphere. It involves intuiting and apprehending the lived experiences of another while maintaining the distinction between self and other. While AI can simulate empathetic responses, it cannot replicate the eminently personal and relational nature of authentic empathy.

It’s important to admit that most of the pushback against AI ever becoming a “person” is based on our human intuitions that there is something ineffable and irreducible about us that can be mimicked, but not inhabited, by AI. These intuitions often sound more like defensiveness than anything else. AI advocates would respond that science and technology have, throughout human history, regularly run roughshod over various presumably insurmountable roadblocks that science and technology could and would never overcome. Humans have learned over and over again that what we thought was unique to us is actually shared seamlessly with other living things and/or is reducible to the natural workings of matter governed by physical laws.

But this time it seems different. The rise of AI is a contemporary example of the prophet Hosea’s claim that “they have sown the wind, they shall reap the whirlwind.” Are we equipped to place guardrails around human interactions with AI that clearly delineate the difference between such interactions and human relationships? Time will tell–and the time is short.
