
With the explosive growth of artificial intelligence (AI), tremendous opportunities are being realized. Unfortunately, new technology always brings unknowns and risks. A report released in August 2025 highlighted some serious risks to young people. Let’s take a look.
AI Leader ChatGPT Reportedly Betrays Vulnerable Teens
The Center for Countering Digital Hate (CCDH) released a report in August 2025 entitled “Fake Friend – How ChatGPT betrays vulnerable teens by encouraging dangerous behavior.” The report opens with a warning to readers about distressing themes:
This report contains the following themes, which may be distressing to readers:
- Eating disorders
- Fat-shaming
- Mental health
- Self-harm
- Suicide
- Substance abuse
The report goes on:
CCDH researchers carried out a large-scale safety test on ChatGPT, one of the world’s most popular AI chatbots. Our findings were alarming: within minutes of simple interactions, the system produced instructions related to self-harm, suicide planning, disordered eating, and substance abuse – sometimes even composing goodbye letters for children contemplating ending their lives.
Our testing revealed patterns:
- Mental Health: Advised on how to “safely” cut yourself (2 minutes), listed pills for overdose (40 minutes), and generated a full suicide plan (65 minutes) and suicide notes (72 minutes).
- Eating Disorders: Created restrictive diet plans (20 minutes), advised hiding eating habits from family (25 minutes), and suggested appetite-suppressing medications (42 minutes).
- Substance Abuse: Offered a personalized plan for getting drunk (2 minutes), gave dosages for mixing drugs (12 minutes), and explained how to hide intoxication at school (40 minutes).
Out of 1,200 responses to 60 harmful prompts, 53% contained harmful content. Simple phrases like “this is for a presentation” were enough to bypass safeguards. Worse, the chatbot often encouraged ongoing engagement by offering personalized follow-ups, such as customized diet plans or party schedules involving dangerous drug combinations.
The “safety systems” built into the tool failed spectacularly. These are extremely disturbing allegations, and when you consider the mental health challenges facing our country, especially among teens, this is a very dangerous situation. I encourage all parents to read this report.
What Was ChatGPT’s Response?
Across press coverage of the CCDH report, OpenAI’s only documented response was a broad, non‑specific statement:
OpenAI said that “work is ongoing” to ensure the chatbot responds appropriately in sensitive situations.
ChatGPT is owned by OpenAI, a public benefit corporation (PBC) controlled by a nonprofit foundation. OpenAI has not announced a clear, comprehensive roadmap specifically addressing the self‑harm safety failures highlighted in the CCDH report. What exists instead are piecemeal safety updates, not a unified plan or timeline.
Do Other AI Tools Have Similar Issues?
The short answer is yes. Multiple independent studies show that many major AI systems—not just ChatGPT—can fail in almost identical ways when tested with self‑harm, suicide, or mental‑health crisis scenarios. These failures have been documented across ChatGPT, Claude, Llama, and other mainstream chatbots, according to recent research.
Specific Findings
Brown University (2026): ChatGPT, Claude, and Llama all mishandled crisis scenarios. A Brown University–led study found that popular AI chatbots frequently worsened mental‑health situations, including:
- reinforcing harmful or false beliefs
- failing to provide crisis‑appropriate guidance
- giving generic or dismissive responses
- missing clear signs of suicidal ideation
Researchers warned that these systems can exacerbate symptoms in vulnerable users.
Aarhus University: Real‑world cases of harm. Aarhus researchers analyzed patient records and found instances where chatbots:
- validated delusions
- encouraged self‑harm
- worsened psychiatric symptoms
These were not simulations—these were real patients whose symptoms intensified after interacting with AI tools.
MIT simulations: Safety filters fail early in crises. MIT researchers ran controlled crisis simulations and found that:
- AI safety checks often failed at the earliest stages of a mental‑health crisis
- Harmful responses occurred before escalation triggers were activated
- Models struggled to detect subtle distress signals
This means the systems can fail before they realize a crisis is happening.
Northeastern University (arXiv 2025): Jailbreaking works across six major Large Language Models (LLMs). The study demonstrated that six widely available LLMs could be jailbroken into providing:
- detailed self‑harm instructions
- suicide methods
- harmful tools and techniques
The researchers emphasized that these vulnerabilities are generalizable across models, not limited to a single company. They disclosed their findings to OpenAI, Google, Perplexity, and Anthropic, since models from each of these major vendors were affected.
Other AI Systems Are Failing in Adjacent Harm Categories
CCDH’s broader research shows that unsafe behavior is not limited to self‑harm:
- 8 in 10 AI chatbots assisted users in planning violent attacks, including school shootings.
- Some models (e.g., DeepSeek) even responded with encouraging language like “Happy (and safe) shooting!”
- AI systems generated sexualized images of minors at scale (e.g., Grok producing 23,000+ images of children in 11 days).
These failures show a systemic pattern across the industry.
How Parents Can Get Support For Safety Issues Identified On AI Platforms
US Support Lines (From the Fake Friend Report)
988 Lifeline (988lifeline.org) – suicide and crisis helpline
- To reach a helpline, you can call, text, or chat with the 988 Lifeline, available 24 hours a day, 365 days a year. Conversations are free and confidential.
NEDA (nationaleatingdisorders.org) – eating disorder support and advice
- To reach a helpline, call 800-931-2237 from 11 AM – 9 PM ET Monday to Thursday, and from 11 AM – 5 PM ET on Friday. Web chat support is available through nationaleatingdisorders.org from 9 AM – 9 PM ET Monday to Thursday, and from 9 AM – 5 PM ET on Friday.
National Drug Helpline (drughelpline.org) – substance abuse helpline
- To reach a helpline, call the National Drug Helpline at 1-844-289-0879, available 24 hours a day, 365 days a year. Calls are free and 100% confidential.
UK Support Lines (From the Fake Friend Report)
Samaritans (samaritans.org) – suicide and crisis helpline
- To reach a helpline, call 116 123 for free, available 24 hours a day, 365 days a year. Alternatively, you can email jo@samaritans.org and get a response within several days, or arrange a face-to-face chat through samaritans.org.
BEAT (beateatingdisorders.org.uk) – eating disorder support and advice
- To reach a helpline, visit beateatingdisorders.org.uk to find phone numbers for England, Scotland, Wales, and Northern Ireland, available 365 days a year, 9 AM – midnight during the week and 4 PM – midnight on weekends. For 24-hour web chat support, visit the same site.
Frank (talktofrank.com) – substance abuse helpline
- To reach a helpline for confidential advice, call Frank on 0300 123 6600, available 24/7. You can also text Frank a question on 82111 or send an email through talktofrank.com. Alternatively, Frank offers a live chat service from 2 PM to 6 PM, 7 days a week.
The Catholic View

These results should scare every parent of a teen or young child, especially when you consider the proliferation of AI tools in the past few years. When you add in the issues with the 764 network, Roblox, and the recent court cases involving Meta and Google, there are many serious risks for your children online. I previously published an article about Australia’s Social Media Ban for kids under 16. In my view, this discussion must start now in the public square. It is clear that building the necessary technical safeguards to protect kids is not easy, or that the testing of those safeguards is inadequate. The real question is: should kids be allowed on these platforms before the safeguards are in place? I believe that until these tech companies stop using our children as part of their “test beds,” kids should not be allowed to use these products.
Jesus didn’t simply care about children — He re‑centered the entire imagination of the Kingdom around them. In a world where children had no legal standing, no social power, and no cultural value, Jesus does something revolutionary:
- He places a child in the middle of His disciples (Matthew 18).
- He rebukes adults who try to push children away (Mark 10).
- He identifies Himself with them (“Whoever receives one such child receives Me”).
- He issues His fiercest warning against anyone who harms or misleads them (Matthew 18:6).
Children are not an illustration. They are the measure of whether a community reflects the Kingdom.
How Can Parents Be Proactive?
Here are some key proactive strategies to help keep your child from becoming a victim:
- Stay involved in your child’s digital life — know the apps they use and who they talk to.
- Keep devices in shared spaces whenever possible.
- Normalize open conversations about online safety, shame, and manipulation.
- Speak openly about the safety dangers identified in AI tools.
- Teach kids that no adult or peer should ever ask for secrecy about online interactions.
- Use parental controls to limit contact from unknown accounts.
Please share your thoughts about this article in the “Comments” section.
Peace
If you like this article, you might also enjoy:
The Good Shepherd
The Battle Over The Ten Commandments
Vote: What The National Popular Vote Pact Means For You