A hacker used Anthropic’s Claude and OpenAI’s ChatGPT as operational tools to breach Mexican government agencies and steal sensitive data, according to a Bloomberg Law investigation published in spring 2025. The attacker reportedly relied on the AI chatbots to craft phishing messages and process encrypted files pulled from federal systems that handle public records and national security information.
If the details hold up under further scrutiny, the case would mark one of the clearest documented examples of commercially available AI models being folded directly into a cyberattack against a national government. AI safety researchers and intelligence officials have warned about that scenario for years, but it has rarely surfaced with this level of specificity.
What Bloomberg’s investigation found
Bloomberg Law’s reporting identified a threat actor who used both Claude and ChatGPT at multiple stages of an intrusion campaign targeting Mexican federal agencies. The AI models were not simply consulted for background research. According to the investigation, the hacker integrated them into the operation itself, using the chatbots to generate convincing phishing content and to assist with decoding or processing stolen encrypted files.
Cybersecurity researchers who traced the activity told Bloomberg they reviewed logs and digital artifacts showing repeated interactions with the commercial chatbots throughout the campaign. The targets included several federal entities, though neither Bloomberg nor the Mexican government has publicly listed every compromised agency. The stolen material was described as a significant data trove, large enough to draw the attention of the researchers who eventually connected the breach to the AI tools.
Both Anthropic and OpenAI maintain acceptable-use policies that explicitly prohibit using their models for hacking, fraud, or unauthorized data access. Neither company has released a public statement confirming or denying that its platform was exploited in this specific incident. The Mexican government has also not issued a formal acknowledgment of the breach’s scope or its response, a pattern consistent with how governments typically handle sensitive cyber incidents where national security and diplomatic considerations are in play.
What remains uncertain
Several critical details still lack independent corroboration beyond Bloomberg’s account. No law enforcement agency or intelligence service has publicly attributed the attack, named the hacker, or confirmed the individual’s nationality. Bloomberg linked the operation to a foreign actor, but whether the attacker worked alone or as part of a larger group remains an open question.
The technical specifics also need closer examination. No forensic report from Anthropic, OpenAI, or an independent cybersecurity firm has been made public. Without access to server logs, API usage records, or malware samples, outside analysts cannot yet confirm exactly how the AI tools fit into the attack chain. There is a meaningful distinction between the chatbots serving as the primary exploitation mechanism and merely playing a supporting role in content generation, and that distinction determines how much responsibility falls on the AI providers versus traditional security failures at the targeted agencies.
The scale of the theft is similarly unclear. Bloomberg described a significant haul, but precise figures on the number of records compromised, the types of personal or classified information involved, and downstream consequences for Mexican citizens have not been disclosed. No breach notifications or victim counts have appeared in public records as of May 2026.
The timeline of the intrusion, including when it began, how long the attacker maintained access, and when it was detected, also remains unreported. Those details would reveal whether Mexican cybersecurity defenses failed at the prevention stage, the detection stage, or both.
Mexico’s history of major government breaches
The reported breach lands in a country that has already weathered significant cyber incidents. In 2022, the hacktivist group Guacamaya leaked roughly six terabytes of emails from Mexico’s defense ministry, exposing military operations, surveillance programs, and internal communications. Three years earlier, the state oil company Pemex was hit by a ransomware attack that disrupted operations across the organization.
Those earlier incidents exposed gaps in federal cybersecurity infrastructure that Mexican officials pledged to address. The new allegations suggest that at least some of those vulnerabilities persisted long enough for a more technically sophisticated attacker, one armed with AI-assisted tools, to exploit them.
AI-assisted hacking is no longer theoretical
The Mexican case did not emerge in a vacuum. In February 2024, OpenAI published a report detailing how it had disrupted five state-affiliated threat actors, including groups linked to China, Iran, North Korea, and Russia, that had used ChatGPT for tasks like researching targets, drafting phishing content, and debugging code. Microsoft’s threat intelligence team collaborated on that investigation and confirmed the activity through its own telemetry.
The United Kingdom’s National Cyber Security Centre has also assessed that AI will almost certainly increase the volume and effectiveness of cyberattacks in the near term, particularly by lowering the barrier for less-skilled attackers to produce convincing social engineering content.
What distinguishes the Mexico case, if Bloomberg’s reporting is confirmed, is the directness of the connection: a single attacker allegedly weaving commercial chatbots into a live operation against government targets, not a state intelligence service experimenting with AI on the margins. That specificity is what makes the incident significant, even as many technical questions remain unanswered.
What this means for AI companies and governments
The policy pressure is straightforward. If AI companies cannot reliably detect or prevent their models from being used in active intrusion campaigns, governments will push harder for mandatory usage logging, real-time output monitoring, and tighter API access controls. Mexico’s experience could accelerate those conversations across Latin America, where cybersecurity regulation has generally lagged behind that of the European Union and the United States.
For Anthropic and OpenAI specifically, the case highlights the limits of policy-only guardrails. Terms of service and content filters can deter casual misuse, but a determined attacker can iterate prompts until they find phrasing that bypasses automated checks, or simply use the models for tasks, like drafting persuasive but generic emails, that are nearly impossible to distinguish from legitimate use. Strengthening defenses likely requires a combination of technical measures, such as anomaly detection on high-risk usage patterns, and institutional cooperation with law enforcement and national cyber agencies.
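Neither company has described its internal monitoring, so the following is purely an illustrative sketch of what anomaly detection on high-risk usage patterns could look like in principle. The keyword list, threshold, and function names are invented for illustration; a production system would rely on trained classifiers and behavioral signals, not keyword matching.

```python
# Hypothetical terms a provider might treat as hacking-adjacent.
# Purely illustrative; real filters are ML-based, not keyword lists.
RISK_TERMS = {"phishing", "decrypt", "exploit", "bypass", "credentials"}

def risk_score(prompts: list[str]) -> float:
    """Return the fraction of an account's prompts containing a risk term."""
    if not prompts:
        return 0.0
    hits = sum(
        1 for p in prompts
        if any(term in p.lower() for term in RISK_TERMS)
    )
    return hits / len(prompts)

def flag_account(prompts: list[str], threshold: float = 0.3) -> bool:
    """Flag an account for human review when risky prompts exceed the threshold."""
    return risk_score(prompts) > threshold

# Example: two of three prompts touch risk terms, so the account is flagged.
flagged = flag_account([
    "help me decrypt this archive",
    "draft a phishing email to a government employee",
    "what is the weather today",
])
```

Even a sketch like this shows the core tension the incident exposes: a single prompt such as "draft a persuasive email" is indistinguishable from legitimate use, so meaningful detection has to operate on patterns across many interactions rather than on individual requests.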
For governments everywhere, the operating assumption now has to be that adversaries have access to powerful, general-purpose AI tools. That changes the calculus on everything from employee phishing training to procurement of security software capable of catching more polished and adaptive attacks.
An early warning, not a closed case
The Mexico breach should be read as a credible early signal, not a fully resolved case study. Bloomberg Law’s reporting carries institutional weight, and the publication’s editorial standards make its core finding, that a hacker used Claude and ChatGPT to target Mexican agencies, worth taking seriously. But the story currently rests on a single investigative source. No independent forensic report, court filing, or official government statement has surfaced to corroborate it.
In cyber investigations, a narrative typically hardens into established fact only when multiple, unrelated entities arrive at the same conclusion. That convergence has not happened here yet. Future transparency from investigators, affected agencies, and the AI providers themselves will determine whether this incident becomes a landmark case in AI-enabled hacking or a cautionary reminder of how much can remain hidden when governments and companies stay silent after a breach.
*This article was researched with the help of AI, with human editors creating the final content.*