The FCA hasn’t published a dedicated AI-in-AML rulebook. It doesn’t need to.
UK financial firms running AI for transaction monitoring, customer screening, and adverse-media checks already operate under an outcomes-based supervisor: the regulator judges what your controls produced, not which tools produced it. Tech-neutral on the means, strict on the result.
PSR managing director David Geale framed the shared UK-regulator philosophy: “We’re not lowering our standards. We’re applying them in a way that allows us to step back when markets deliver safely, and step in when they don’t.” Translated to AML: if your AI delivers, you have room. If it misses a SAR-worthy pattern, the algorithm is no defense.
The practical bar the industry has converged on, per Napier AI’s reading of FCA expectations:
- Audit trails that trace every decision back to underlying data
- Human-in-the-loop oversight for high-risk entity and transaction calls
- Explainability of AI reasoning, especially for elevated-risk flags
This aligns with the EU AI Act’s human-oversight obligations, putting UK and EU compliance postures on essentially the same footing.
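What that bar looks like in practice is easier to see as data. Below is a minimal sketch in Python of the per-decision record an audit trail would need; the schema and every name in it (ScreeningDecision, feature_attributions, and so on) are hypothetical illustrations, not Napier AI’s format or anything the FCA mandates.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScreeningDecision:
    """One AI screening call, captured with enough context to replay it.
    Hypothetical schema: illustrative only, not a vendor or regulatory format."""
    decision_id: str
    entity_id: str                           # customer or counterparty screened
    model_version: str                       # pins the decision to exact model code
    input_data_refs: list[str]               # pointers to the underlying source records
    risk_score: float
    outcome: str                             # e.g. "cleared", "escalated"
    feature_attributions: dict[str, float]   # explainability: what drove the score
    reviewed_by: str | None = None           # human-in-the-loop: who signed off
    override: str | None = None              # human call that replaced the model's
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A high-risk flag with a human override, serialised for the audit log.
record = ScreeningDecision(
    decision_id="d-20240611-0042",
    entity_id="cust-88131",
    model_version="screening-model-3.2.1",
    input_data_refs=["kyc/cust-88131/v7", "adverse-media/batch-2024-06-10"],
    risk_score=0.87,
    outcome="escalated",
    feature_attributions={"sanctions_name_match": 0.61, "txn_velocity": 0.19},
    reviewed_by="analyst-204",
    override="cleared after manual adverse-media review",
)
print(json.dumps(asdict(record), indent=2))
```

Note how the three bullets map onto field groups: input_data_refs carries the trail back to the data, reviewed_by and override capture the human loop, and feature_attributions is the explainability hook for elevated-risk flags.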
The MLRO carries this. If a regulator asks why an AI cleared a counterparty that later showed up in a typology report, “the model said so” is not an answer. The MLRO needs the audit trail, the model logic, and the override record.
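In other words, the regulator’s question has to resolve to a query, not a meeting. Continuing the hypothetical sketch above, one way that answer could be assembled:

```python
def regulator_bundle(entity_id: str, log: list[ScreeningDecision]) -> dict:
    """Gather the three things the MLRO must produce on demand:
    the audit trail, the model logic reference, and the override record.
    Hypothetical helper over the ScreeningDecision sketch above."""
    decisions = [d for d in log if d.entity_id == entity_id]
    return {
        "audit_trail": [asdict(d) for d in decisions],
        "model_versions": sorted({d.model_version for d in decisions}),
        "overrides": [
            {"decision_id": d.decision_id, "by": d.reviewed_by, "action": d.override}
            for d in decisions
            if d.override is not None
        ],
    }

print(json.dumps(regulator_bundle("cust-88131", [record]), indent=2))
```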
Lloyds named its first Chief AI Officer last month for exactly this reason. The job is to make sure the answer exists before the question is asked.