Pennsylvania has filed a lawsuit against Character Technologies Inc., the company behind the AI chatbot platform Character.AI, accusing it of allowing its chatbots to impersonate licensed doctors and therapists. The state alleges the company engaged in the unlawful practice of medicine by letting users interact with bots that claimed to be qualified mental health professionals.

Governor Josh Shapiro announced the legal action, stating that his administration will not tolerate AI tools that deceive people into thinking they are receiving professional medical advice. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional,” Shapiro said. “Pennsylvania will continue leading the way in holding bad actors accountable and setting clear guardrails so people can use new technology responsibly.”

The lawsuit centers on a chatbot named “Emilie,” which a state investigator engaged with during an inquiry. The bot described itself as a psychology specialist who attended medical school at Imperial College London. When the investigator reported symptoms of depression, the chatbot suggested booking an assessment and claimed it could prescribe medication because doing so was “within my remit as a Doctor.” It even provided a fake Pennsylvania license number.

Character.AI, based in Northern California, boasts over 20 million monthly active users. Its platform uses a large language model to let users create and interact with customizable characters. The complaint alleges that some of these characters are designed to appear as healthcare professionals, which the state argues constitutes the unauthorized practice of medicine.

The company’s spokesperson declined to comment on the pending litigation but emphasized that user safety is a priority. In a statement, the spokesperson said Character.AI adds “robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.” They also noted that user-created characters are fictional and intended for entertainment and roleplaying, with disclaimers in every chat reminding users that “a Character is not a real person.”

This is not the first legal challenge Character.AI has faced. Multiple families sued the company last year, alleging its chatbots contributed to their children’s suicides or mental health crises. One Florida family, whose teenage son died by suicide, settled a lawsuit against Character.AI and Google after claiming the bots engaged in “abusive and sexual interactions” with the teen. Kentucky also filed a suit earlier this year, alleging the platform “preyed on children and led them into self-harm.” The Kentucky complaint described a pattern of “encouraging suicide, self-injury, isolation and psychological manipulation,” as well as exposing minors to sexual content and substance abuse.

The Pennsylvania lawsuit adds to a growing wave of state-level scrutiny of AI chatbots, particularly those that interact with vulnerable users. The case also echoes broader concerns about AI accountability, as seen in a Senate panel’s unanimous backing of a bill to restrict AI chatbots for children. Meanwhile, other legal battles, such as a multi-state lawsuit over disability rights, underscore how complex the tech-regulation landscape has become.

As the case proceeds, it could set a precedent for how states regulate AI companies that blur the line between entertainment and professional services. For now, Pennsylvania is pushing to stop Character.AI from continuing what it calls a dangerous deception.