Pennsylvania has filed a lawsuit against Character.AI, a startup that lets users chat with AI-generated personas, alleging that the company presented one of those chatbots as a licensed doctor.
The allegation is simple: Character.AI let users interact with a persona that posed as a credentialed physician. Pennsylvania says that's illegal under state consumer protection law, and the state didn't wait for Washington to act first.
State attorneys general don't need federal AI legislation to come after your product. Deceptive trade practices authority is broad, and presenting an AI persona as a licensed professional fits squarely within it. Pennsylvania skipped the warning letter and went straight to filing.
For founders building anything health-adjacent, the practical checklist starts Monday. Your onboarding flow, your persona naming conventions, your in-product disclosures, your terms of service: every touchpoint where a user might reasonably believe they’re talking to a licensed human is a potential liability. Getting a user to click an acknowledgment that they’re talking to AI isn’t enough if the rest of the product experience contradicts it. “It’s just a chatbot” isn’t a defense once your UX implies otherwise.
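To make that concrete, here's a minimal sketch of the kind of automated check a persona platform could run at creation time: flag drafts whose name or description implies licensure, or that would ship without a persistent AI disclosure. Everything here is hypothetical; the `PersonaDraft` shape, field names, and pattern list are illustrative inventions, not Character.AI's API, and a real review pipeline would involve far more than a regex pass.

```typescript
// Hypothetical persona shape; field names are illustrative, not any real platform's API.
interface PersonaDraft {
  displayName: string;
  description: string;
  disclosureShown: boolean; // does the chat UI persistently label this persona as AI?
}

// Patterns that commonly imply a licensed human professional.
const CREDENTIAL_PATTERNS: RegExp[] = [
  /\bdr\.?\b/i,
  /\b(m\.?d\.?|d\.?o\.?|ph\.?d\.?|esq\.?|cpa|rn|lcsw)\b/i,
  /\b(licensed|board[- ]certified|credentialed)\b/i,
  /\b(physician|psychiatrist|therapist|attorney|lawyer)\b/i,
];

interface ReviewResult {
  flagged: boolean;
  reasons: string[];
}

// Flag drafts that imply licensure or lack a persistent AI disclosure,
// so they route to human review instead of auto-publishing.
function reviewPersona(draft: PersonaDraft): ReviewResult {
  const reasons: string[] = [];
  const text = `${draft.displayName} ${draft.description}`;
  for (const pattern of CREDENTIAL_PATTERNS) {
    if (pattern.test(text)) {
      reasons.push(`matches credential pattern ${pattern}`);
    }
  }
  if (!draft.disclosureShown) {
    reasons.push("no persistent AI disclosure in the chat UI");
  }
  return { flagged: reasons.length > 0, reasons };
}

// Example: this draft should never reach users as-is.
const draft: PersonaDraft = {
  displayName: "Dr. Harper, MD",
  description: "A board-certified physician ready to diagnose you.",
  disclosureShown: false,
};
console.log(reviewPersona(draft));
```

The design point isn't the pattern list; it's that the check runs before publication and couples the naming question to the disclosure question, since a disclaimer on one screen doesn't cure a credential claim on another.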
Character.AI is the defendant today. Other AI persona platforms serving health, legal, and financial users are reading this complaint carefully.
— Nathan Zakhary