AI Therapist? It Falls Short, a New Study Warns
  • Posted March 3, 2026

More people are asking artificial intelligence (AI) chatbots for help with daily problems, from work stress to relationship worries and more. 

Now, a new study warns that when it comes to mental health advice, these systems may fall short. 

A team at Brown University in Providence, Rhode Island, found that even when AI systems are told to act like trained therapists, they often fail to meet professional ethics standards. 

The team worked with mental health experts to examine how these systems respond in counseling-like conversations.

Their study examined a series of ethical risks to show how large language model (LLM) counselors violate standards in mental health practice.

Several major AI systems, including versions of OpenAI’s GPT models, Anthropic’s Claude and Meta’s Llama, were tested.

For the study, researchers asked trained peer counselors to hold practice therapy sessions with the AI, using prompts designed to make the systems act like cognitive behavioral therapy (CBT) or dialectical behavior therapy (DBT) counselors.

Three licensed psychologists then reviewed the AI responses.

The study identified 15 different risks, grouped into five main areas:

  • Generic context: The AI often ignored a person’s unique background.

  • Poor collaboration: It sometimes steered conversations too strongly or reinforced harmful beliefs.

  • Deceptive empathy: The AI used phrases like “I understand” without truly grasping emotions.

  • Bias: Some responses displayed gender, religious or cultural prejudices.

  • Weak crisis response: In some cases, chatbots mishandled suicidal thoughts or failed to guide users to proper help.

"We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy," researchers wrote.

Lead researcher Zainab Iftikhar, a doctoral candidate in computer science at Brown, said prompts alone are not enough to make AI systems safe for therapy.

“Prompts are instructions given to the model to guide its behavior,” she explained. "You don't change the underlying model or provide new data, but the prompt helps guide the model's output based on its pre-existing knowledge and learned patterns."

Many people share therapy-style prompts on TikTok, Instagram and Reddit. Some consumer mental health apps also use these prompt techniques to power AI chat features.

Iftikhar noted that human therapists can also make mistakes. The difference, she said? Oversight.

"For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," Iftikhar said in a news release. "But when LLM counselors make these violations, there are no established regulatory frameworks."

Researchers said AI tools could still help expand access to mental health support, especially for people who cannot afford or find a licensed professional. 

But they stressed that stronger safeguards are needed before relying on these systems in serious situations.

“If you’re talking to a chatbot about mental health, these are some things that people should be looking out for,” Iftikhar said.

Ellie Pavlick, a computer science professor at Brown who was not involved in the study, stressed the need to move deliberately.

“The reality of AI today is that it’s far easier to build and deploy systems than to evaluate and understand them,” she said.

"There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it's of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good," Pavlick said. "This work offers a good example of what that can look like."

The findings were presented at a joint conference of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. 

Research presented at meetings is considered preliminary until published in a peer-reviewed journal.

More information

Stanford University’s Institute for Human-Centered Artificial Intelligence has explored the dangers of AI in mental health care.

SOURCE: Brown University, news release, March 2, 2026

HealthDay
Health News is provided as a service to DownHome Pharmacy site users by HealthDay. Neither DownHome Pharmacy nor its employees, agents, or contractors review, control, or take responsibility for the content of these articles. Please seek medical advice directly from your pharmacist or physician.
Copyright © 2026 HealthDay All Rights Reserved.
