Morning Overview

Cambridge report warns regulators lag banks on AI oversight and controls

When the Office of the Comptroller of the Currency rewrote its model risk guidance in April 2026, it acknowledged something bank examiners had been grappling with privately for months: the rulebook was built for a different era of technology. Weeks later, a report from Cambridge Judge Business School made the point more bluntly. Banks worldwide have pushed artificial intelligence deep into lending, fraud detection, and software engineering, while the regulators responsible for supervising those banks are still catching up, both in their own use of AI and in their ability to oversee it.

The Cambridge findings land alongside a U.S. Government Accountability Office audit, GAO-25-107197, that examined how federal financial regulators handle AI risk. The GAO found that agencies rely almost entirely on existing supervisory frameworks rather than rules tailored to machine learning or generative AI. Three risk categories surfaced repeatedly across the agencies the GAO reviewed: explainability, third-party dependencies, and operational or control failures. Each of those risks grows as AI models become more autonomous, yet the supervisory playbook has barely changed.

Banks are moving fast; the rulebook is not

The Cambridge report identified software engineering as one of the most mature AI use cases inside banks. Development teams now use large language models to write, review, and deploy code at a pace that would have been unthinkable five years ago. That speed, however, creates a cyber risk channel: AI-generated code can introduce vulnerabilities at scale if testing and review processes have not been redesigned to match the volume.

The primary control document that examiners still carry into bank examinations is the Federal Reserve’s SR 11-7 guidance, issued in 2011. SR 11-7 sets expectations for model development, validation, and governance. Its examples and testing expectations center on traditional statistical models, and it does not address systems that can generate text, make decisions, or take actions without direct human instruction. Banks and supervisors treat it as a baseline. The problem is that the distance between that baseline and the technology it is supposed to govern widens with every quarterly model release.

The OCC’s updated bulletin, released in April 2026, is the most concrete step any U.S. regulator has taken to close the gap. It consolidated the agency’s earlier model risk management bulletins into a single update and signals that examiners should evaluate AI-driven models against updated criteria. But revised guidance is not the same as enforceable regulation. The bulletin leaves open how examiners will assess risks specific to generative AI, such as hallucinated outputs in customer-facing tools or emergent behaviors in agentic systems that chain multiple decisions together without human review.

Regulators are experimenting, but slowly

A handful of supervisory bodies are trying to build their own technical muscle. The Bank for International Settlements launched Project Noor through its Innovation Hub, partnering with the Hong Kong Monetary Authority and the UK Financial Conduct Authority to develop explainable AI techniques that supervisors can use directly. The project is one of the few instances where regulators are attempting to understand AI outputs on their own terms rather than relying solely on banks to explain what their models do.

On the standards side, the National Institute of Standards and Technology published its AI Risk Management Framework (AI RMF 1.0), which organizes risk management into four functions: govern, map, measure, and manage. Both regulators and banks reference the framework frequently. It provides a shared vocabulary, which matters in a field where terminology can shift from one vendor pitch to the next. The catch is that AI RMF 1.0 is voluntary. Banks can adopt it selectively, and regulators can cite it without requiring full compliance. Its real-world influence depends on whether individual agencies fold it into their examination manuals, a decision none has publicly committed to in detail as of May 2026.

Notably absent from the U.S. approach is any equivalent to the European Union’s AI Act, which entered into force in 2024 and includes provisions that apply to financial services. The EU framework classifies certain AI applications in banking, such as creditworthiness assessments, as high-risk, triggering mandatory transparency and testing requirements. No comparable classification system exists in U.S. federal regulation, leaving American supervisors to stretch legacy frameworks across use cases the original authors never anticipated.

The data gap makes risk hard to measure

One of the most striking findings in the current reporting is what does not exist in the public record. No regulator has published a firm timeline for issuing AI-specific binding rules. No primary source provides incident counts, loss figures, or failure rates tied to AI deployments in financial services. The Cambridge report establishes that adoption is widespread and that software engineering is a mature use case, but without hard data on what goes wrong, it is difficult to judge whether the regulatory lag has already caused measurable harm or whether the risk remains largely prospective.

Project Noor faces a similar transparency challenge. The BIS describes the initiative’s goals and partnership structure at a high level, but no published metrics show how its explainability tools perform when applied to live supervisory decisions. Whether techniques developed in a research setting will translate into routine examination practice across jurisdictions is unclear.

Regulators may be collecting incident data confidentially, and banks may be handling AI-related failures internally without disclosure. From the outside, though, those blind spots limit the ability of policymakers, researchers, and the public to assess whether current safeguards are adequate.

A widening gap with no clear closing date

The picture that emerges from the GAO audit, the OCC bulletin, the Cambridge report, and the BIS initiative is consistent: banks are embedding AI into core operations faster than supervisors can retool their oversight. Some regulators are updating legacy guidance, others are experimenting with their own AI capabilities, and voluntary frameworks like NIST’s AI RMF provide useful scaffolding. But binding, AI-specific rules and transparent performance metrics remain over the horizon. Until they arrive, the regulatory perimeter around AI in financial services will continue to rest on documents written for an earlier generation of models, stretched to cover technology that has already moved well beyond them.

*This article was researched with the help of AI, with human editors creating the final content.