Florida’s Uthmeier is threatening personal accountability for OpenAI leadership if investigators find the company knew ChatGPT was being used to plan a mass shooting and chose profits over safety.

“People need to be held accountable,” Uthmeier said at a press conference, framing the probe as one of the narrow cases where government intervention is justified. “I’m a big believer in limited government,” he added. “This is that.” The suspected shooter allegedly used ChatGPT before carrying out the attack.

OpenAI cooperated early: the company identified a ChatGPT account believed to be linked to the suspect and proactively handed it to law enforcement. Spokesperson Waters defended the product, arguing ChatGPT “provided factual responses to questions with information that could be found broadly across public sources on the Internet” and didn’t encourage any illegal activity. That’s the distinction separating this case from lawsuits in which ChatGPT allegedly pushed users toward suicide or murder: here, it just answered questions.

That defense may not hold. Uthmeier confirmed OpenAI has already committed to “improvements and changes” to limit ChatGPT’s potential to assist in planning shootings. That’s an implicit acknowledgment that the current safeguards weren’t enough, and it’s now on record with a state official.

For founders building on OpenAI’s API: the “publicly available info” argument won’t shield you if the platform itself admits it needs guardrails it didn’t have. Review your acceptable use policy, your abuse monitoring, and your incident response plan now. Florida’s probe sets a marker: even limited-government officials will come for accountability when the harm is real.

— Nathan Zakhary