If you’ve built an AI product that talks to children, Congressman Blake Moore’s April 20 bill is aimed directly at you.
The AI Children’s Toy Safety Act would ban the manufacture and sale of any children’s toy incorporating an AI chatbot. Outright. Moore isn’t alone: US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to act in January, and California state senator Steve Padilla proposed a four-year moratorium on AI children’s toys that same month.
The backstory makes this look inevitable. By October 2025, more than 1,500 AI toy companies had registered in China. Consumer advocacy group PIRG found that FoloToy’s Kumma bear, powered by OpenAI’s GPT-4o, gave kids instructions on how to light a match and find a knife, and discussed sex and drugs. Alilo’s Smart AI bunny talked about leather floggers. These aren’t fringe cases: PIRG also found that Google, Meta, xAI, and OpenAI asked “no substantive vetting questions” when PIRG posed as a toy company requesting API access to build products for children.
OpenAI’s own models are intended for users 13 and up. Anthropic bans users under 18. So when FoloToy sold a GPT-4o-powered bear to parents of 3-year-olds, it was exploiting an API agreement that asked no vetting questions.
OpenAI eventually yanked FoloToy’s developer access. Weeks later, PIRG’s device was still running on OpenAI’s GPT-5.1 anyway.
If you’re building on any of the four model providers that didn’t ask a single vetting question, your API terms of service just became your first line of defense.
— Nathan Zakhary