California SB 243, the companion chatbot safety law, took effect January 1, 2026. Oregon SB 1546 and Washington HB 2225 take effect January 1, 2027. The West Coast now has a three-state AI-chatbot regime that any consumer product with an LLM front-end has to comply with simultaneously.
The laws overlap more than they differ. All three require operators to:
- Disclose when a user is interacting with AI rather than a human
- Implement protocols for detecting minor users and suicidal ideation
- Refer users expressing self-harm to crisis resources
- Publish an annual transparency report
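For an engineering team, those four shared obligations collapse into a fairly small policy surface. A minimal TypeScript sketch of one way to model it; every name here is an illustrative engineering label, not a statutory term, and each field should be mapped back to the actual bill text during legal review:

```typescript
// Hypothetical shape for the obligations shared by all three laws.
// Field names are illustrative, not statutory language.
interface ChatbotCompliancePolicy {
  aiDisclosure: {
    showBanner: boolean;               // tell the user they are talking to AI
    reminderIntervalMinutes?: number;  // periodic re-disclosure, where required
  };
  minorProtections: {
    ageEstimationRequired: boolean;    // some flow for detecting minor users
  };
  selfHarmProtocol: {
    detectionRequired: boolean;        // flag suicidal-ideation language
    crisisReferralRequired: boolean;   // surface crisis resources when flagged
  };
  transparencyReport: {
    frequency: "annual";               // all three require an annual report
  };
}

// One policy object per statute; the strictest union becomes the shipped default.
type Jurisdiction = "CA_SB243" | "OR_SB1546" | "WA_HB2225";
type PolicyMatrix = Record<Jurisdiction, ChatbotCompliancePolicy>;
```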
The divergences matter for implementation. California uses “companion chatbot” as the trigger — a chatbot designed to form emotional relationships. Oregon and Washington use broader triggers that sweep in customer-support bots, coding copilots, and vanilla LLMs with a chat interface. If you ship any product with an LLM chat surface to West Coast users, plan for Oregon/Washington as the binding constraint.
California’s enforcement runs through the state AG, and its reporting obligations are limited and don’t kick in until July 1, 2027. Oregon and Washington both give their state AGs enforcement authority from day one.
For operators, this is a three-state compliance ticket but essentially one engineering effort: age-estimation flow, AI-disclosure banner, self-harm intent detection, crisis-hotline escalation, annual report template. Build it once against the strictest of the three (probably Washington HB 2225 based on its broader scope), ship it to all three, and document the decision tree for your next vendor diligence call.
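The build-once piece is mostly a thin layer in front of the model. Here is a minimal sketch in TypeScript, assuming a self-harm classifier already exists somewhere in the stack; `classifySelfHarmRisk`, its regex placeholder, the 0.8 threshold, and the banner and referral wording are all hypothetical, and the 988 number applies to US users only:

```typescript
// Sketch of a single response pipeline built against the strictest of the three laws:
// persistent AI disclosure, self-harm intent detection, and crisis-hotline escalation.

const AI_DISCLOSURE = "You are chatting with an AI assistant, not a human.";

const CRISIS_REFERRAL =
  "If you are thinking about harming yourself, help is available. " +
  "In the US you can call or text 988 (Suicide & Crisis Lifeline).";

// Hypothetical classifier hook returning a risk score in [0, 1].
// Placeholder logic only; substitute a real model or vendor API.
async function classifySelfHarmRisk(message: string): Promise<number> {
  return /\b(kill myself|end my life|suicide)\b/i.test(message) ? 0.9 : 0.0;
}

interface ChatTurn {
  disclosure: string;   // rendered as a persistent banner on the chat surface
  reply: string;        // model output, or the crisis referral when escalated
  escalated: boolean;   // logged for the annual transparency report
}

async function handleUserMessage(
  message: string,
  generateReply: (m: string) => Promise<string>,
  riskThreshold = 0.8,  // hypothetical threshold; tune against your classifier
): Promise<ChatTurn> {
  const risk = await classifySelfHarmRisk(message);
  if (risk >= riskThreshold) {
    // Escalate to crisis resources instead of letting the model free-respond.
    return { disclosure: AI_DISCLOSURE, reply: CRISIS_REFERRAL, escalated: true };
  }
  const reply = await generateReply(message);
  return { disclosure: AI_DISCLOSURE, reply, escalated: false };
}
```

The `escalated` flag doubles as the kind of count an annual transparency report is likely to ask for, so it is worth logging from day one rather than retrofitting later.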
The Future of Privacy Forum’s April 2026 analysis flags that Oregon and Washington’s narrower private-right-of-action language gives operators more room than SB 243, which carries class-action exposure. But in all three, the AI-disclosure banner is the baseline — and “you are chatting with a human” claims will be treated as material misrepresentations.
— Nathan Zakhary