The Texas Responsible AI Governance Act — TRAIGA, Governor Abbott’s answer to the EU AI Act — took effect January 1, 2026. We’re 110 days in. Texas AG Ken Paxton gets exclusive enforcement; there is no private right of action. The operator-friendly feature buried in the statute is a 60-day cure period: before any civil penalty attaches, the AG must notify a suspected violator and give them 60 days to fix the problem.
That cure period is the single most misunderstood piece of the law. It is not immunity. Once you get a cure letter, you still have to fix the problem on the AG’s timeline, not yours — and re-offending after a cure is treated as a knowing violation. The law also creates a regulatory sandbox for AI developers to test novel systems under agency supervision, which is rare in US state law.
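The mechanics of that window are simple enough to sketch. A minimal illustration, in Python — the dates and function names here are hypothetical, and the statute’s actual notice rules control, not this arithmetic:

```python
from datetime import date, timedelta

# TRAIGA's 60-day cure window runs from the AG's notice,
# not from when you first discover the problem.
CURE_WINDOW = timedelta(days=60)

def cure_deadline(notice_date: date) -> date:
    """Last day to cure before a civil penalty can attach (illustrative only)."""
    return notice_date + CURE_WINDOW

# Hypothetical cure letter dated March 2, 2026:
print(cure_deadline(date(2026, 3, 2)))  # 2026-05-01
```

The point the code makes concrete: the clock is short, fixed, and starts on the AG’s schedule, which is why the response plan below matters.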
Pieces Technologies is the warning shot. In September 2024, before TRAIGA was even enacted, AG Paxton settled with the Dallas-based generative-AI healthcare company over deceptive accuracy claims in hospital documentation software. Five-year compliance term, no civil penalty disclosed, but real operational teeth. If Paxton was willing to chase an AI company under pre-existing consumer-protection theory, he will certainly use TRAIGA’s explicit authority now.
TRAIGA applies to “consequential decisions” about Texas residents made by AI: employment, lending, housing, healthcare, insurance, education. Same list as Colorado. Different enforcement posture. Texas skews operator-friendly on paper. In practice, AG Paxton reaches for AI cases faster than most state AGs.
What operators should have shipped by mid-2026: a Texas-specific AI inventory (what models touch Texas users), a cure-letter response plan (who picks up the phone when the AG emails), and a documented bias review trail. The sandbox application is worth considering for genuinely novel systems — but it triggers ongoing reporting obligations.
— Nathan Zakhary