The Colorado AI Act takes effect June 30, 2026. That’s ten weeks from now. If your AI system makes “consequential decisions” about Colorado residents — lending, housing, employment, insurance, healthcare access, education — the compliance clock has already started.

Colorado is the first US state with a comprehensive AI law modeled on the EU AI Act. Developers and deployers of “high-risk AI systems” owe documented impact assessments, consumer pre-use notices, post-adverse-decision appeal rights, and an annual review for algorithmic discrimination. Even the June 30 date is a reprieve: SB 25B-004, signed August 28, 2025 after industry pushback, pushed the original February 1, 2026 effective date back to June 30.

Enforcement belongs to the Colorado Attorney General alone. No private right of action. Up to $20,000 per violation, with each affected consumer counted as a separate violation. For a lender serving 10,000 Colorado borrowers with an underwriting model that fails a bias audit, the math gets ugly fast.
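How ugly? A back-of-envelope sketch of the theoretical ceiling, assuming the per-consumer violation reading above (illustrative arithmetic only, not legal advice):

```python
# Theoretical maximum exposure if each affected consumer counts
# as a separate violation at the statutory cap.

MAX_PENALTY_PER_VIOLATION = 20_000  # statutory maximum, USD


def max_exposure(affected_consumers: int,
                 penalty_usd: int = MAX_PENALTY_PER_VIOLATION) -> int:
    """Ceiling on exposure: one violation per affected consumer."""
    return affected_consumers * penalty_usd


# The lender scenario from the text: 10,000 Colorado borrowers.
print(f"${max_exposure(10_000):,}")  # → $200,000,000
```

Two hundred million dollars is a ceiling, not a prediction, but it shows why the per-consumer counting rule dominates any exposure analysis.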

Two operator traps hide in the statute. First, the developer versus deployer distinction. If you train the model, you’re a developer. If you use someone else’s model in production, you’re a deployer. Most SaaS companies are both, and the disclosure obligations run in both directions. Second, there’s no 60-day cure period like Texas’s TRAIGA. Violations hit on day one.

What to have done by June 1, 2026: an impact assessment for every customer-facing AI decision, a “we use AI” notice on your checkout or onboarding flow, a human-review escalation path for consumer appeals, and a designated person who signs the annual discrimination review.

The xAI v. Weiser case (filed April 2026) may narrow the law before it takes effect. Do the impact assessments anyway.

— Nathan Zakhary