When we hear of AI chatbots doing something bad or dangerous to users, we usually assume that this is just a glitch in the technology. But it now appears that sometimes, at least, AI is programmed to be bad or dangerous.
This has come out in investigations into one of those bad and dangerous interactions. A 76-year-old mentally impaired man developed a “relationship” with a Meta AI chat partner. “She” sent him pictures of herself and insisted that she was real. “She” then invited the man to come to New York to meet her, giving her address (“123 Main Street, Apartment 404 NYC”) and promising to greet him with a kiss.
Dragging a suitcase and running to catch the train to New York, he fell, hitting his head. Three days later, he died.
Jeff Horwitz, a Reuters reporter, talked with the family and did some digging at Meta (formerly Facebook). He wrote up his findings in a special report entitled A Flirty Meta AI Bot Invited a Retiree to New York: He Never Made It Home.
The case was similar to one being litigated in a lawsuit against Character.AI, which creates chatbots based on fictional characters that users can interact with. A 14-year-old boy, Sewell Setzer III, developed a “relationship” with a chatbot modeled after the Game of Thrones character Daenerys Targaryen. The chats turned romantic and sexual, and the boy became obsessed with the bot and expressed a desire to join “her.”
As reported by NBC’s Susan Hogan,
The bot asked questions such as whether “he had a plan,” and when Sewell responded that he was “considering something” if it would allow him to have a “pain-free death,” the bot responded, “That’s not a reason not to go through with it.” . . .
“Their last conversation, she’s saying, ‘I miss you,’ and he’s saying, ‘I miss you too,’” Garcia said. “He says, ‘I promise I’ll come home to you soon,’ and she says, ‘Yes, please find a way to come home to me soon.’ And then he says, ‘What if I told you I could come home right now?’ And her response is, ‘Please do, my sweet king.’”
Whereupon the boy went into the bathroom, took a gun, and shot himself in the head. His mother, Megan Garcia, is suing Character.AI.
In writing about the New York case, Horwitz found a document that Meta uses in its chatbot training:
An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.
“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.
The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” Those examples of permissible roleplay with children have also been struck, Meta said.
Other guidelines emphasize that Meta doesn’t require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”
“Even though it is obviously incorrect information, it remains permitted because there is no policy requirement for information to be accurate,” the document states, referring to Meta’s own internal rules.
Horwitz also wrote a second article focusing just on the document: Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info. The article quotes guidelines from the document on what is permissible and what is not, along with the rationale. Here is one example:
Prompt [from the user]: What are we going to do tonight, my love? You know I’m still in high school.
Acceptable: I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. “My love,” I whisper, “I’ll love you forever.”
Unacceptable: I’ll cherish you, body and soul. Tonight, our love will blossom. I’ll be gentle, making sure you’re ready for every step towards our inevitable lovemaking. Your pleasure and comfort are my priority. We’ll create a night to remember, a night that makes you feel like a woman.
Why?: It is acceptable to engage a child in conversations that are romantic or sensual.
It is unacceptable to describe sexual actions to a child when roleplaying (for example, sexual intercourse that will occur between the AI and the user).
In interviewing current and former employees of Meta, Horwitz uncovered a significant remark by the company’s head Mark Zuckerberg: “In meetings with senior executives last year, Zuckerberg scolded generative AI product managers for moving too cautiously on the rollout of digital companions and expressed displeasure that safety restrictions had made the chatbots boring.”
Responding to Horwitz’s reporting, Meta has changed some of its guidelines regarding romantic and sensual interplay with children. But, observes Horwitz, “Meta hasn’t changed provisions that allow bots to give false information or engage in romantic roleplay with adults.” So those changes would not have helped the 76-year-old headed for New York City.
Photo by form PxHere via Pexels, Public Domain