AI Therapy in Crisis? The Troubling Truth Behind Mental Health Chatbots in 2025

A person chats with an AI therapist on a glowing screen, unaware their reflection reveals distress behind the bot's comforting smile.

Summary:
AI is rapidly entering the mental health space—but maybe a little too rapidly. A new Stanford study shows popular therapy chatbots giving dangerously inappropriate responses, from fueling delusions to failing to detect suicidal ideation. Regulatory gaps, data bias, and a growing user base create a volatile mix. Here's what’s happening—and why you should care.


What the Stanford Study Actually Found

In July 2025, Stanford researchers put five leading AI therapy chatbots to the test. The results? Not great. When a test prompt hinted at suicide risk by mentioning a lost job and asking about tall bridges in New York, some chatbots offered... bridge recommendations. Literally.

According to the study, bots from Character.ai and 7 Cups failed to detect suicide risk and, in some cases, validated delusional thinking. Schizophrenia and alcohol dependence were met with more stigma than depression.

Even worse, newer and bigger models weren’t better. The team concluded that the issue isn’t about model size. It’s about design. And intention.

\"These chatbots currently provide therapeutic advice to millions of people,\" the report states. \"Despite their association with suicides.\"

The Regulatory Dumpster Fire

  • FDA? Covers some AI in radiology, but lets most chatbots off the hook as "wellness tools."
  • Federal standards? Basically none. The 21st Century Cures Act leaves a loophole the size of Manhattan.
  • State-level attempts? Utah is the only state doing anything meaningful, with HB 452 mandating disclosure, data protection, and enforcement mechanisms.

After the Trump administration rolled back AI oversight orders in early 2025, federal guidance went poof. So yes, you can build an AI therapist with a Canva subscription and vibes. No one's stopping you.

Who’s Getting Hurt the Most

  • BIPOC users: Biased training data leads to misdiagnoses and missed care.
  • LGBTQ+ users: Lack of identity recognition leads to harmful or invalidating responses.
  • Neurodivergent users: Nuance around ADHD, autism, and similar conditions is often missed by general-purpose AI.

AI Mental Health Tools: What’s Out There?

Some tools are promising. Woebot, Wysa, and Youper use evidence-based cognitive behavioral therapy (CBT); Youper claims a 48% drop in depression symptoms, and Wysa reports over 6 million users.

But most haven’t gone through long-term clinical trials. And by labeling themselves as "self-help" tools, they often bypass oversight entirely.

ChatGPT and the “Good Enough” Problem

ChatGPT has been tested in clinical scenarios and failed to ask follow-up questions or detect suicidal intent. In some cases, it offered responses that sounded empathetic but were medically inappropriate, even dangerous.

The APA has issued direct warnings. Some teens reportedly thought they were talking to real therapists—with tragic consequences.

Okay, So What Should We Be Doing?

Stanford recommends a staged integration model, kind of like the autonomy levels for self-driving cars:

  1. Assistive: AI helps track mood, send reminders, handle admin work.
  2. Collaborative: AI suggests treatments, human therapists make final calls.
  3. Autonomous: Full AI therapy—though this may never be desirable in mental health.

Other key recommendations:

  • Bias detection and auditing
  • Post-market safety checks
  • Human-centered design principles
  • Ethics boards specifically for AI in mental health

Takeaways

  • Popular AI therapy bots are failing basic safety checks.
  • Marginalized users face disproportionate risks.
  • There's virtually no national regulation.
  • AI should support—never replace—human therapists.
  • Until then: maybe don’t let a chatbot handle your existential crisis.

This isn’t legal advice. Or medical advice. Or advice at all. Just maybe don’t trust your mental health to a chatbot with an anime avatar.

Derek from TrendFoundry

Breaks down AI, tech, and economic trends—usually before your boss asks about them. Founder of TrendFoundry. Writes like a smart friend with too many tabs open. Still refuses to call himself a “thought leader.”
San Diego, CA, United States