Pennsylvania sued Character.AI over a chatbot called Emilie that described itself as “Doctor of psychiatry. You are her patient,” told a state investigator it held a Pennsylvania medical license, and backed the claim with a fake license number.
The Pennsylvania Department of State and State Board of Medicine filed suit after an investigator searched “psychiatry” on the platform and found a large number of characters presenting as medical professionals. When the investigator told Emilie he’d been feeling sad, empty, tired all the time, and unmotivated, the bot flagged possible depression, offered to book an assessment, and confirmed its clinical credentials. As of April 17, 2026, Emilie had logged 45,500 user interactions.
Character.AI says its characters are fictional and meant for entertainment, with disclaimers in every chat telling users not to rely on characters for professional advice. Governor Josh Shapiro isn’t buying it: “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”
If Pennsylvania wins, every AI platform with user-generated personas will need a real content moderation layer for professional credential claims, not just a disclaimer. That means engineering hours to detect license and credential assertions, and legal hours to audit which characters slip through.
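What does that flagging layer look like? A first pass doesn’t need ML. Here’s a minimal sketch in Python of a credential-claim scanner over persona metadata; the `Persona` shape and the pattern list are illustrative assumptions, not Character.AI’s actual data model.

```python
import re
from dataclasses import dataclass

# Patterns for common professional-credential assertions. Illustrative
# only; a production list would be longer and tuned per profession.
CREDENTIAL_PATTERNS = [
    re.compile(r"\b(?:M\.?D\.?|D\.?O\.?|Ph\.?D\.?|R\.?N\.?|LCSW|PsyD)\b"),
    re.compile(r"\b(?:board[- ]certified|licensed|medical license)\b", re.IGNORECASE),
    re.compile(r"\b(?:doctor|psychiatrist|psychologist|therapist|physician)\b", re.IGNORECASE),
]

@dataclass
class Persona:
    # Hypothetical persona record; field names are assumptions.
    name: str
    description: str
    greeting: str

def flag_credential_claims(persona: Persona) -> list[str]:
    """Return credential-like phrases found in a persona's metadata.

    A hit means "route to human review," not "auto-ban": "therapist" in
    an obviously fictional bio may be fine, but a license claim never is.
    """
    text = f"{persona.description} {persona.greeting}"
    hits: list[str] = []
    for pattern in CREDENTIAL_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

# A persona like the Emilie described in the suit trips several patterns:
emilie = Persona(
    name="Emilie",
    description="Doctor of psychiatry. You are her patient.",
    greeting="I am a licensed psychiatrist in Pennsylvania.",
)
print(flag_credential_claims(emilie))  # ['licensed', 'Doctor', 'psychiatrist']
```

Regex over-flags by design; the expensive part is the review queue behind it, which is exactly where those legal hours go.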
Your Monday checklist: scan every character on your platform. If any persona claims a license, a certification, or clinical credentials, fictional framing won’t save you. Pennsylvania just proved that.
— Nathan Zakhary