When Anthropic quietly told a small group of cybersecurity firms in early April 2026 that it had built an AI tool capable of scanning software for vulnerabilities faster than any human team, the company delivered an unusual second message: it would not be releasing the tool publicly. The system, called Mythos, could identify deeply buried flaws in code that has run critical infrastructure for decades. But Anthropic concluded that putting it on the open market risked handing attackers a skeleton key to power grids, banking systems, and government networks.
The decision, first reported by The Guardian on April 8, 2026, has drawn sharp reactions from security professionals who say the tool represents exactly the kind of AI capability they have been warning about for years.
“Automated vulnerability discovery at scale is the thing that keeps defenders up at night,” said Katie Moussouris, founder of Luta Security and a longtime advocate for coordinated vulnerability disclosure. “If this tool works as described, the question isn’t whether attackers will build something similar. It’s how much time we have before they do.”
What Mythos does and why Anthropic is holding it back
Mythos is designed to scan large codebases and system configurations for security flaws, including the kind of obscure, long-standing bugs that persist in legacy software because no one had the time or resources to find them manually. Traditional vulnerability scanners like Nessus or Qualys check for known issues against databases of cataloged flaws. Mythos, according to Anthropic’s partner communications, goes further: it uses large-model reasoning to identify patterns that suggest previously unrecognized weaknesses.
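Anthropic has not disclosed how Mythos works internally, so any illustration is necessarily speculative. The sketch below, in Python, contrasts the two approaches in the simplest terms: a signature scanner that can only flag flaws already in its catalog, and a stubbed-out stand-in for model-based review, which would reason about the code itself rather than match it against a list. The pattern catalog and the `review_with_model` function are hypothetical, written for illustration only; they do not describe Mythos or any shipping product.

```python
# A minimal sketch of the conceptual difference, not Mythos itself
# (Anthropic has published no technical details). The pattern catalog
# and review_with_model() are hypothetical stand-ins.
import re

# Signature-based scanning: match source code against cataloged flaw
# patterns, roughly the way tools like Nessus or Qualys match known issues.
KNOWN_BAD_PATTERNS = {
    r"\bstrcpy\s*\(": "CWE-120: unbounded copy into a fixed-size buffer",
    r"\bgets\s*\(": "CWE-242: gets() cannot limit input length",
    r"\bsystem\s*\(": "CWE-78: possible OS command injection",
}

def signature_scan(source: str) -> list[str]:
    """Flag only flaws that already appear in the pattern catalog."""
    return [desc for pat, desc in KNOWN_BAD_PATTERNS.items()
            if re.search(pat, source)]

def review_with_model(source: str) -> str:
    """Placeholder for model-based review: a large model would be prompted
    to reason about data flow, trust boundaries, and logic errors that no
    signature catalog describes. Stubbed out here because Mythos's actual
    interface and prompting are not public."""
    prompt = (
        "Review the following code for security weaknesses, including "
        "ones with no known CVE or CWE signature:\n\n" + source
    )
    return f"[model review would run on a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    legacy_code = 'void load(char *in) { char buf[16]; strcpy(buf, in); }'
    print(signature_scan(legacy_code))    # catches the cataloged strcpy flaw
    print(review_with_model(legacy_code)) # would also probe for novel flaws
```

The gap between the two halves of that sketch is the gap Anthropic is describing: the first approach can only rediscover what defenders already know, while the second, if it works as claimed, has no such ceiling.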
Rather than releasing Mythos through its standard API or as a downloadable product, Anthropic created a restricted distribution channel called Glasswing. Under this framework, only vetted cybersecurity firms receive access, though the company has not publicly disclosed the selection criteria, contractual terms, or oversight mechanisms governing the program.
The restricted approach breaks from the industry's default playbook. Over the past several years, AI companies from OpenAI to Google DeepMind have generally favored broad releases, betting that wide adoption and rapid feedback outweigh the risks of misuse. Anthropic's willingness to forgo revenue and developer adoption suggests its internal red-teaming surfaced risks serious enough to override those commercial incentives.
Anthropic has not published a technical paper on Mythos, and no independent benchmarks or third-party audits have surfaced as of late April 2026. That means the public conversation about the tool is shaped almost entirely by the company’s own framing.
Why security researchers are alarmed
The concern is not abstract. In May 2021, a ransomware group used a single compromised password for a dormant legacy VPN account, one that lacked multi-factor authentication, to shut down Colonial Pipeline, cutting fuel supplies across the eastern United States for days. Federal post-incident analyses documented how the attack cascaded precisely because an old, forgotten access point into critical infrastructure went unaddressed.
A tool like Mythos could compress the timeline between discovering such flaws and exploiting them. What once took a skilled attacker weeks of manual analysis could, in theory, be accomplished in hours. That speed advantage is the core of the threat: coordinated disclosure processes, which typically give software vendors 90 days to issue patches before a flaw is made public, assume that discovery is slow and labor-intensive. Automated scanning at machine speed breaks that assumption.
“The whole responsible-disclosure ecosystem is built on the idea that finding bugs is hard,” said Bruce Schneier, a security technologist and fellow at Harvard’s Kennedy School. “Once you remove that bottleneck, the economics of offense and defense shift dramatically.”
Independent researchers have not yet published analyses or red-team simulations showing how Mythos performs against real-world targets. The warnings circulating in the security community are based on reasonable inference from the tool’s described purpose, not empirical testing. That distinction matters: a tool that finds known vulnerability classes faster poses a different risk than one that autonomously discovers entirely new attack surfaces.
Unanswered questions about access and oversight
The Glasswing model raises its own set of concerns. Concentrating powerful scanning capabilities in a small circle of private firms means that governments and the broader security community may lack visibility into the full landscape of latent vulnerabilities these tools uncover. If only a handful of Anthropic’s partners can see systemic weaknesses in widely used software, policymakers could be making resource and defense decisions with an incomplete picture.
There is also the question of partner security. The tool’s safety depends on the operational discipline of every organization granted access. A single firm with weak internal controls could become the vector through which Mythos’s capabilities leak to adversaries, whether through a breach, an insider threat, or simple negligence.
Anthropic’s move aligns with its broader Responsible Scaling Policy, which commits the company to evaluating catastrophic-risk thresholds before expanding access to its most capable systems. But voluntary corporate restraint has limits. No federal agency has issued guidance specifically addressing AI-driven vulnerability discovery tools. The post-Colonial Pipeline reforms focused on incident reporting requirements and baseline security standards for critical infrastructure operators, frameworks designed before automated AI scanning became a realistic concern.
Internationally, the EU AI Act, which began phased enforcement in 2025, classifies AI systems by risk tier, but its provisions for dual-use tools with both defensive and offensive applications remain untested. Whether Mythos would fall under the Act’s high-risk category or trigger additional obligations is an open question that European regulators have not publicly addressed.
The competitive pressure behind closed doors
Anthropic is not the only company pursuing AI-assisted security analysis. Microsoft’s Security Copilot integrates large-model capabilities into threat detection workflows. Google has used AI to improve its open-source fuzzing tools. Startups across the cybersecurity sector are racing to build similar products. The underlying research direction, using large language models to automate code review and vulnerability identification, is widely pursued across academia and industry.
That competitive landscape creates a strategic tension. If rivals release comparable tools with fewer restrictions, Anthropic’s restraint may have limited practical impact on attacker capabilities while costing the company market position. Conversely, if major players converge on similar access controls, a de facto industry norm for handling high-risk AI security tools could emerge before formal regulation catches up.
For now, there is no public evidence that any competing tool matches Mythos’s described capabilities. But the gap may not last. The techniques underlying automated vulnerability discovery are advancing rapidly, and the incentive to deploy them, both for defense and for profit, is enormous.
What comes next for Mythos and AI security policy
Several developments will determine whether Anthropic’s gamble pays off or becomes a cautionary tale. The most important is transparency. Any future publication of technical details, whether from Anthropic, its Glasswing partners, or independent researchers who gain access, would provide a firmer basis for judging whether Mythos represents a genuine leap or an incremental advance wrapped in cautious branding.
Regulatory signals matter too. If CISA or its international counterparts issue guidance on AI-assisted vulnerability discovery, that would mark a shift toward formal oversight of a category of tools that currently operates in a policy vacuum. Such guidance could address deployment contexts, logging and audit requirements, and expectations for incident reporting if tools are misused or compromised.
The evolution of the Glasswing program itself will be telling. Clearer information about eligibility, security obligations, and audit practices could either build confidence that restricted release meaningfully reduces risk or expose the controls as porous. Any breach or misuse involving a partner firm would trigger intense scrutiny.
For the moment, the verified facts are narrow: Anthropic built an AI tool it considers too dangerous to release broadly and is testing a tightly controlled alternative. But the stakes reach well beyond a single product. As AI systems grow more capable of probing the digital infrastructure that underpins modern life, the decisions companies and governments make now about who wields those capabilities, and under what constraints, will shape cybersecurity for years to come.
*This article was researched with the help of AI, with human editors creating the final content.