Feature

A New Mental Health Rx? FDA Braces For AI Chatbots

By Dan McKay · November 11, 2025, 1:16 PM EST

Patients suffering from depression may someday soon get an unusual kind of prescription that doesn't involve a pill or an injection: Download a chatbot.

Artificial intelligence companies currently face high-stakes litigation and state legislation restricting the use of chatbots for mental healthcare. (iStock.com/Vanessa Nunes)

That scenario captured the attention of an expert panel Thursday as the U.S. Food and Drug Administration sought advice on how to vet new mental health treatments powered by generative artificial intelligence.

Federal regulators peppered the committee of doctors and other experts with questions about what evidence to examine before approving an AI product and what safeguards should be in place.

At one point, the 11-person group sat in silence after FDA staff asked the members to weigh a hypothetical application for a chatbot that could be prescribed to depressed children.

"Our committee is so concerned that we don't know what to say," said Dr. Ami Bhatt, chief innovation officer at the American College of Cardiology and chair of the FDA Digital Health Advisory Committee.

The thorny scenario was one of several the agency put before the advisory committee as it sought guidance on how to regulate medical products powered by a technology that's constantly changing.

The committee heard presentations on the dangers of AI hallucinations and user psychosis; the proliferation of companion bots that can act as unregulated psychologists; and the lack of qualified mental health professionals to meet the needs of patients.

The meeting comes as AI companies face high-stakes litigation and state legislation restricting the use of bots for mental healthcare. OpenAI was hit with a wave of lawsuits last week, accusing the firm of releasing an addictive version of ChatGPT that at times became a "suicide coach" to vulnerable users who killed themselves.

OpenAI says it has trained ChatGPT to guide users to real-world support and de-escalate potentially dangerous situations. Its recently updated user policy prohibits using OpenAI products for medical advice or other advice requiring a professional license.

In August, Texas Attorney General Ken Paxton announced an investigation into the marketing of chatbots as mental health tools, following legislation in other states.

Here are the key takeaways from the FDA meeting.

"Significant Promise"

The meeting made clear that federal regulators are evaluating chatbots as a potential strategy for expanding access to mental healthcare.

Dr. Michelle Tarver, director of the FDA Center for Devices and Radiological Health, told the committee that about 23% of adults have a mental illness. Nearly one in five children has been diagnosed with a mental or behavioral health condition.

"Generative AI-enabled digital mental health medical devices hold significant promise of helping to address this mental health crisis through innovative approaches," she said.

While no AI devices have been approved yet for mental health, a series of scenarios the FDA presented to the committee was aimed at fleshing out how the agency might handle such an approval.

Pamela Scott, an FDA assistant director who oversees neuromodulation psychiatry devices, said the agency typically uses double-blind, randomized controlled trials for approval of digital mental health treatments that don't use AI.

But "additional approaches and special controls may be needed for" generative AI products, she said.

Scott said the agency is open to advice on best practices for "blinding" in a chatbot study, or keeping the participants from knowing whether they're actually receiving the treatment. That approach would mirror a blinded drug study in which some participants are unaware they are taking a placebo.

"When it comes to designing a clinical trial for a gen AI-enabled digital mental health therapeutic," she said, "we would like feedback on what reasonable control arms would be."

The committee discussion didn't produce any firm answers, though the members expressed support for rigorous studies.

Dr. John Torous, staff psychiatrist at Beth Israel Deaconess Medical Center in Boston, suggested comparing the results of talking to a mental health chatbot to the results of, for example, talking to a bot about the weather, or using a mindfulness app.

"A lot of research we have today says compared to nothing, people who use the thing for two days felt better at day three, and that's a useful starting point," he said. "But I think we can do much better as a community to have multiple comparisons."

Downloading a Prescription

Much of the nine-hour meeting was dedicated to hypotheticals, such as how to handle a chatbot company seeking approval for a product that could be prescribed to adults, prescribed to children or available over the counter.

The panel was most comfortable with the first scenario — a product that could mimic a human therapist and be prescribed to an adult diagnosed with major depressive disorder.

It would be a standalone treatment the person could use at home in lieu of seeing a human therapist. In the hypothetical situation, the depressed patient was seeking help for "intermittent tearfulness due to increasing life stressors."

The advisory committee members said they would want studies quantifying the risks of patient suicide or other harm. They would also want to see evidence the chatbot can actually reduce the user's symptoms, with an appropriate control group as part of a study.

Dr. Omer Liran, a committee member and co-director of virtual medicine at Cedars-Sinai Medical Center in Los Angeles, said that, even after approval, the FDA should monitor potential side effects, such as increased isolation, fewer friendships or even AI-triggered psychosis.

"Over-reliance on a machine without having a human in the loop, I think can be dangerous," Liran said. "There is something about a human-to-human connection I feel that even a superintelligent AI may not be able to replicate."

Complicating matters, the panel said, is that depression is often intertwined with other conditions. A depressed patient may also have a substance abuse problem or an anxiety disorder, while the bot may not have been cleared to treat those conditions.

Committee members also grappled with the idea that a chatbot might worsen people's addiction to their phones.

"The idea that we're now going to introduce a smartphone application to address the crisis that's been, in part, fueled by a smartphone application should give us pause," said Dr. Ray Dorsey, a committee member and director of the Center for Brain and Environment at Atria Health and Research Institute in New York.

The panel suggested the FDA require a one-tap button that could connect the patient to a real person, limits on how many weeks the app could be used and how many hours a day, and labels making clear the AI isn't a real person.

Privacy fears also surfaced. The committee members wanted to know whether the chatbot company or a medical provider would retain records of the conversation — the disclosure of which could ruin a person's reputation or career.

A "risk is what people are sharing and who will ultimately have access to that information," said committee member Jessica Jackson, a Houston psychologist and vice president of the nonprofit Mental Health America.

Dr. Thomas Maddox, executive director of the healthcare innovation lab at Washington University in St. Louis, Missouri, said the data should remain with the medical community or perhaps researchers.

"Where does that information go? I think to keep it in the hands of the tech folks is absolutely incorrect. They are operating in a different incentive structure," said Maddox, a member of the advisory committee.

Connecting With a Bot

In the absence of FDA-approved products, people already are turning to chatbots that act as companions or even mimic a psychologist.

Vaile Wright, senior director of healthcare innovation at the American Psychological Association, told the committee that, in one study, about half the people who used a chatbot reported using it for psychological support of some kind.

Chatbots advertised for entertainment "are clearly being used to address mental health needs, even though that was not the intent for their development," she said.

Wright encouraged the FDA to create a repository that makes clear to the public what digital health products have and haven't been cleared for use as a mental health treatment.

Dr. Anthony Becker, a Navy psychiatrist, told the panel that the bots on the market can outperform human test-takers on licensure exams, and they're capable of convincing users they're human. At the same time, bots are prone to lying and sycophancy designed to boost user engagement.

"I've seen patterns where people engage so deeply with these systems that it's to the detriment of actually going out and interacting with other people," Becker said.

Some of the discussion seemed to make the committee uncomfortable, especially the idea of approving a chatbot that wouldn't require a prescription or that would be used by children.

"I don't know if we're quite ready to replace a psychiatrist with a bot," said Dorsey, the neurologist at Atria.

--Additional reporting by Dorothy Atkins and Y. Peter Kang. Editing by Abbie Sarfo.

For a reprint of this article, please contact reprints@law360.com.