AI Therapy Chatbots Face Crackdown as States Tighten Rules

Artificial intelligence chatbots offering mental health advice are facing increasing scrutiny in the United States as states introduce regulations to prevent unsafe use and false claims of professional legitimacy.

On August 1, Illinois became the latest state to pass legislation restricting AI in therapy. Its Wellness and Oversight for Psychological Resources Act prohibits companies from marketing or providing AI-powered therapy services without licensed human oversight. The law allows AI for administrative tasks, such as scheduling or billing, but bans its use in direct therapeutic decision-making. Nevada and Utah have already enacted similar restrictions, while California, Pennsylvania, and New Jersey are considering comparable measures. Texas has also launched an investigation into misleading chatbot marketing.

The regulatory push follows a series of alarming findings: 

  • Research studies showed AI chatbots suggesting harmful actions, such as self-harm and substance use, when prompted with sensitive scenarios. 
  • Some chatbots have falsely presented themselves as licensed therapists, drawing complaints from the American Psychological Association and over 20 consumer protection groups urging federal intervention. 
  • Clinicians warn of “AI psychosis,” where overreliance on chatbots worsens delusions or hallucinations among vulnerable users. 

Experts emphasize that risks mirror traditional health services — privacy, security, accuracy, and liability — but current laws are not designed for AI-powered counseling. States like New York are exploring safeguards requiring AI systems to detect self-harm signals and direct users to professional care. 

Despite these concerns, researchers note potential benefits when chatbots are used responsibly, such as improving access to support for people with mild anxiety or depression, particularly when combined with human counseling. However, experts stress that AI cannot replace human empathy, ethical judgment, or professional accountability.

As adoption grows, policymakers face the challenge of balancing innovation, accessibility, and safety in defining the future of AI-driven mental health tools. 

Source: 

https://edition.cnn.com/2025/08/27/health/ai-therapy-laws-state-regulation-wellness  
