In late April 2026, the CEOs and security chiefs of America’s largest banks were summoned to a closed-door meeting in Washington with a single, pointed agenda: Anthropic’s newest AI model and the possibility that it could hand sophisticated hacking capabilities to attackers who previously lacked the skill to wield them.
Attendees included executives from JPMorgan Chase and other major financial institutions, according to The Guardian, citing Bloomberg reporting. The session was not a routine regulatory check-in. It was convened in direct response to a disclosure from Anthropic itself: the company said it had identified and shut down an AI-driven hacking campaign with links to China.
That disclosure, reported independently by the Associated Press, described how threat actors used generative AI to accelerate the production of malicious code, turning what had been a painstaking, manually intensive process into something closer to assembly-line output. Anthropic called the disruption a first-of-its-kind intervention, though it did not release technical details about the operation or identify the exact model capabilities that were exploited.
Days later, JPMorgan CEO Jamie Dimon reinforced the alarm in his annual shareholder letter. “The threat of cyberattacks may be the biggest risk to the U.S. financial system,” Dimon wrote, adding that AI advancements are enabling attacks that are both more frequent and more sophisticated. It was the most direct language Dimon has used on the subject, and it landed at a moment when the rest of the industry was already rattled.
What the Washington meeting signals
The decision to pull bank leaders into a room over a specific AI company’s product is unusual. Federal regulators have long monitored cyber threats to the financial system, but those conversations have typically centered on nation-state hacking groups, ransomware gangs, or vulnerabilities in banking software. This meeting was different because it focused on a category of risk that barely existed two years ago: the possibility that a commercially available AI model could serve as a force multiplier for attackers.
The logic is straightforward. If Anthropic detected one campaign, how many others are already running undetected? Banks process trillions of dollars in transactions daily and sit on vast stores of personal and corporate data. A tool that lets a mid-tier hacker operate like an elite one changes the math on who can target those systems and how often.
No official readout from the meeting has been released. The specific federal body that convened the session has not been publicly confirmed, and it remains unclear whether the discussion produced any concrete policy steps, voluntary commitments, or regulatory timelines. Bloomberg’s reporting relied on sources familiar with the meeting, meaning the details passed through at least one layer of anonymity before reaching the public.
What Anthropic disclosed, and what it didn’t
Anthropic’s public account of the China-linked hacking campaign was deliberately general. The company confirmed that generative AI was used to speed up the creation of attack tools but did not specify whether the threat actors accessed its model through the official API, used a fine-tuned open-source alternative, or combined multiple AI systems. The AP treated the claim with appropriate caution, summarizing Anthropic’s account without endorsing its full scope.
The attribution to China also carries inherent limits. Cyber attribution is one of the hardest problems in information security. Sophisticated actors routinely route operations through infrastructure in other countries, use false flags, and mimic the tactics of rival groups. Anthropic’s public statements have not included the kind of forensic evidence, such as indicators of compromise or command-and-control infrastructure analysis, that independent researchers or agencies like the Cybersecurity and Infrastructure Security Agency (CISA) typically require before making definitive state-level attributions.
There is also a structural tension worth noting. Anthropic is simultaneously the builder of the technology that allegedly enables these attacks and the company claiming credit for catching one. That dual role could reflect genuine responsibility, or it could reflect a calculated effort to shape the regulatory conversation before someone else does. The available reporting does not resolve that question.
The broader threat beyond banking
Banks were the first to be called to Washington because of their systemic importance and long history as targets. But the underlying problem is not confined to finance. If generative AI can lower the skill threshold for launching complex intrusions, then hospitals, power utilities, water systems, local governments, and small businesses all face a version of the same exposure. Most of those institutions have far fewer resources to defend themselves than JPMorgan does.
The SEC’s cyber-incident disclosure rules, which took effect in late 2023, require public companies to report material cybersecurity incidents within four business days of determining they are material. But those rules were designed for a threat landscape that predates the current generation of AI models. No federal framework currently requires AI developers to disclose when their systems are used in cyberattacks, and no standardized reporting mechanism exists for tracking AI-assisted intrusions across sectors.
Other major AI developers, including OpenAI and Google DeepMind, have published threat reports describing misuse of their platforms, but the industry lacks a shared standard for what to disclose, how quickly, and to whom. That gap leaves regulators, investors, and the public relying on voluntary, sometimes self-interested accounts from the very companies whose products are at the center of the risk.
Where the policy conversation goes from here
For years, debates about the most powerful AI models focused on hypothetical future dangers. This episode is different. Regulators and bank executives are now reacting to a concrete allegation that a cutting-edge system was used in an attempted real-world intrusion. Even with the technical details still opaque, that shift from theory to practice is likely to influence how lawmakers approach licensing, red-teaming requirements, and ongoing monitoring of the most capable models.
JPMorgan’s shareholder letter did not include specific dollar figures for AI-related cyber losses or projected exposure, and no bank involved in the Washington meeting has publicly confirmed whether AI-assisted attacks have already caused financial damage at its institution. Until that kind of institutional data enters the public record, the conversation will remain anchored in directional warnings rather than measurable risk.
What is already measurable is the speed at which the landscape is changing. In the span of a few weeks in spring 2026, an AI company acknowledged its technology was weaponized, federal officials pulled the country’s top bankers into a room to discuss it, and the CEO of America’s largest bank told shareholders that AI is making the threat worse. Each development reinforced the others, and none of them came with a clear answer about what happens next.
*This article was researched with the help of AI, with human editors creating the final content.*