Six government cybersecurity agencies from the US, UK, Australia, Canada, and New Zealand just told every operator running agentic AI in production: your access controls are probably already broken.
The joint guidance from CISA, NSA, and their Five Eyes counterparts identifies five risk categories for agentic AI systems. Privilege sits at the top: when an agent gets too much access, one compromise can cascade through systems in ways a typical software vulnerability never could. The document is explicit that agents are already operating inside critical infrastructure and defense sectors with insufficient safeguards.
The other four are design and configuration flaws, behavioral risk (agents pursuing goals designers never intended), structural risk (interconnected agents triggering cascading failures), and accountability gaps. On accountability: agentic systems generate logs that are hard to parse, and a compromised or misbehaving agent can leave behind altered files, changed access controls, and deleted audit trails, which makes post-incident forensics hardest exactly when you need it most.
The agencies recommend treating AI agents like any privileged service account: cryptographically secured identities, short-lived credentials, encrypted inter-agent communications, and human approval gates for high-impact actions. Deciding which actions clear that gate is the designer’s job, not the agent’s.
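In code, the gate can be tiny. A minimal Python sketch, assuming a designer-maintained allowlist of high-impact actions and a pluggable human approver (the action names and interfaces are illustrative, not from the guidance):

```python
# A minimal approval-gate sketch. The action names, the risk set, and the
# callable interfaces are illustrative assumptions, not from the guidance.
from dataclasses import dataclass
from typing import Callable

# Classified as high-impact by the designer, up front. The agent never
# gets to reclassify an action at runtime.
HIGH_IMPACT = {"delete_records", "modify_acl", "disable_logging"}

@dataclass
class AgentAction:
    name: str
    args: dict

def requires_approval(action: AgentAction) -> bool:
    """Designer-owned policy: the gate decision never comes from the agent."""
    return action.name in HIGH_IMPACT

def execute(
    action: AgentAction,
    run: Callable[[AgentAction], None],
    ask_human: Callable[[AgentAction], bool],
) -> None:
    # High-impact actions block until a human approves; everything else
    # proceeds, but both paths should land in an audit log.
    if requires_approval(action) and not ask_human(action):
        raise PermissionError(f"human approver rejected {action.name}")
    run(action)
```

The point of the structure: `requires_approval` lives in code the designer controls and ships, so no amount of prompt manipulation lets the agent talk its way past the gate.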
Prompt injection gets called out specifically. The guidance acknowledges the security field hasn’t fully caught up to the threat, and advises organizations to “prioritise resilience, reversibility and risk containment over efficiency gains” until standards mature.
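One concrete reading of “reversibility over efficiency”: snapshot state before any agent-initiated write so a poisoned action can be rolled back. A rough sketch; the file-based scheme is an assumption, not something the agencies prescribe:

```python
# A rough sketch of reversibility for agent-initiated file writes: snapshot
# before overwriting so the action can be undone if it turns out to have
# been injected. The file-based rollback scheme is an assumption here.
import shutil
import time
from pathlib import Path

def reversible_write(path: Path, new_content: str) -> Path:
    """Copy the pre-action state aside, then let the agent's write proceed."""
    backup = path.with_name(path.name + f".bak.{int(time.time())}")
    if path.exists():
        shutil.copy2(path, backup)  # preserve metadata along with content
    path.write_text(new_content)
    return backup  # hold onto this handle until the action is verified

def rollback(path: Path, backup: Path) -> None:
    """Restore the pre-action snapshot after a bad or injected action."""
    shutil.copy2(backup, path)
```

Slower than letting the agent write directly, which is the trade the guidance is telling you to make.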
For founders and CTOs shipping agentic features: this document is the blueprint regulators will point to when something goes wrong. Monday morning task: audit what permissions your agents actually need versus what you’ve granted them.
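The audit itself is often just a set difference. A toy sketch, with both inputs as placeholders you’d pull from your IAM provider and access logs:

```python
# A toy version of the permissions diff. Both sets are placeholders; in
# practice `granted` comes from your IAM provider and `used` from 30 days
# of access logs.
granted = {"db:read", "db:write", "files:delete", "acl:modify", "email:send"}
used = {"db:read", "db:write", "email:send"}

excess = granted - used
print(f"scopes to review for revocation: {sorted(excess)}")
# -> scopes to review for revocation: ['acl:modify', 'files:delete']
```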
Nathan Zakhary