State lawmakers next year will continue moving away from sweeping proposals to regulate artificial intelligence and toward narrower efforts aimed at transparency and safety, and a primary target will be companion chatbots.
Facing pushback from the tech industry, the Trump administration and continued threats from Republicans in Congress of a moratorium on their efforts to regulate AI, state legislators from both parties say they’re undeterred. But they are changing their strategy, shifting away from the broad, ambitious bills aimed at “high-risk” systems used in consequential decisions that they’ve backed in the past. Instead, they plan to take a more targeted approach next year, with chatbots squarely in their crosshairs as concerns about unhealthy relationships with AI models continue to grow.
Democratic and Republican legislators in 42 states introduced more than 210 bills this year addressing private-sector development and deployment of AI, according to a report this month from the Future of Privacy Forum (FPF), a Washington, DC-based advocacy group.
Getting those bills enacted, however, has proven challenging: of the bills tracked by FPF, fewer than 10 percent became law. Comprehensive measures modeled on the 2024 Colorado AI Act faltered in states such as Connecticut and Virginia.
State lawmakers had more success targeting specific technologies and uses of AI, most notably chatbots. They passed five bills regulating chatbots that require either notifications telling users they’re not communicating with a human being or safety protocols aimed at preventing mental health harms and suicide.
Psychiatrists are growing increasingly concerned about reports of “AI psychosis,” a term for alarming mental health episodes in which a user suffers severe delusions and breaks with reality after becoming obsessed with a chatbot. Some of these episodes have ended in suicide and murder. Grief-stricken parents are suing AI companies and their founders in multiple states and sharing their stories with lawmakers across the country.
Last week, California Governor Gavin Newsom signed SB 243, which will require that minors be reminded every three hours that they’re communicating with a chatbot.
Utah’s HB 452 will require suppliers of mental health chatbots to make disclosures to users and will restrict advertising within chatbot interactions. Companies will have an affirmative defense if they maintain a detailed safeguards policy.
In August, Illinois Governor J.B. Pritzker signed the Wellness and Oversight for Psychological Resources Act, which prohibits the use of unlicensed AI-driven therapy chatbots.
None of the new chatbot laws, however, require risk assessments or audits from AI companies, one more indication that lawmakers are moving away from compliance obligations and toward transparency and disclosure.
That trend will likely continue next year, said Justine Gluck, a policy analyst with FPF and co-author of the group’s report on AI legislation. Legislators are realizing how quickly AI technology changes and that a use-based or sectoral approach to regulation could be more successful, especially when it comes to chatbots, she said.
“I think that 2026 is going to be the year of chatbots,” Gluck said.
To begin preparing for next year, state lawmakers have convened a new multi-state working group focused on AI policy, the State AI Policy Forum, hosted by Princeton’s Center for Information Technology Policy. The group is a continuation of the Multistate AI Working Group, organized in 2023, which included more than 200 state lawmakers from 48 states.
Virginia Delegate Michelle Maldonado, a Democrat, struggled this year to pass legislation focused on high-risk AI systems. Her bill, HB 2094, would have required impact assessments and steps to avoid algorithmic bias for high-risk AI systems. But it was vetoed by Virginia Governor Glenn Youngkin, who called it a “burdensome” bill that would stifle innovation and hurt job growth and new investment in Virginia.
Next year, Maldonado said, she plans to work on legislation that would impose transparency requirements on chatbots.
“We must let the consumer, the patient, the constituent, the customer, always understand as best as we can how their data is being captured, how it’s being used, and, in fact, who it’s being used with, and whether there are some moments of agency where they can decide that they don’t want their information used or shared in that way,” Maldonado said.
Ohio state Representative Thaddeus Claggett, a Republican, is trying a different strategy, focused on making clear that humans are to blame if anything goes wrong or a crime is committed using AI. He has introduced a bill that would prevent AI from assuming roles held by spouses, such as holding power of attorney or making financial or medical decisions on another’s behalf.
Under his proposal, AI would also be banned from owning real estate, intellectual property or financial accounts, as well as from serving in any management, director or officer role.
“When Congress gets around to getting this done, great, that’ll be fine,” Claggett said. “But we’re not going to sit here and take the abuses that are going to happen without acting.”