For more than two years, the database that the entire cybersecurity industry relies on to catalog software flaws has been falling behind. Now, with researchers demonstrating that hackers can silently steal data through the AI tools companies are rushing to deploy, security professionals warn that defenders are losing ground on two fronts at once. They have a name for it: the “Vulnpocalypse.”
The term captures a specific collision. The National Vulnerability Database, maintained by the National Institute of Standards and Technology, has been unable to keep pace with incoming vulnerability reports since a processing backlog began in February 2024. At the same time, a newly documented class of attack against production large language model systems shows that AI integrations have opened pathways for data theft that require little or no action from the victim. Defenders have less intelligence. Attackers have sharper tools. The gap between the two is widening.
The NVD backlog: two years and counting
NIST confirmed in early 2024 that it had fallen behind on processing Common Vulnerabilities and Exposures (CVE) records and would triage only the highest-impact entries. A technical disruption made things worse: the rollout of the CVE Record Format 5.1 broke normal ingestion workflows, a problem that persisted until a system update on May 14, 2024, according to NIST’s own NVD status page.
The format issue was fixed, but the underlying capacity problem was not. NIST announced a contract for additional processing support, though the agency has not set a public deadline for clearing the full queue. As of spring 2026, the backlog remains a live concern for federal agencies and private-sector teams alike.
“We are essentially flying with partial instruments,” said Tanya Brewer, NIST’s NVD program manager, during a public update session in 2024, acknowledging that the backlog had left downstream users without timely enrichment data. The consequences are concrete. When a CVE entry sits unenriched, missing severity scores and affected-product lists, security teams cannot easily prioritize which patches to apply first. Federal agencies need that data to meet compliance deadlines. Private companies feed it into automated scanning tools. Every day a record goes without enrichment is a day defenders are working with an incomplete picture of what threatens their systems.
The situation grew more precarious in April 2025, when the CVE Program itself nearly went dark. MITRE’s contract to operate the program faced a funding lapse, raising the prospect that new vulnerability identifiers would stop being issued altogether. The contract was ultimately extended, but the episode underscored how fragile the infrastructure behind vulnerability tracking has become.
A new class of AI exploit
While the NVD struggles to catalog known flaws, researchers have documented an attack chain that represents something genuinely new. A team published a technical case study on a production LLM exploit they named EchoLeak, designated CVE-2025-32711. The attack uses prompt injection to trigger data exfiltration from an AI agent, and it works with minimal or no interaction from the user whose data is being stolen.
The researchers describe EchoLeak as the first documented zero-click prompt injection exploit demonstrated against a real-world, production LLM system. That “first” claim deserves a caveat: the study is an arXiv preprint that has not completed formal peer review, and priority claims in security research are frequently contested. Still, the CVE designation confirms the vulnerability met the threshold for formal cataloging, and the proof of concept is reproducible.
What makes EchoLeak significant beyond its technical details is what it implies about the attack surface organizations have created by embedding AI agents into workflows that handle sensitive data. The exploit does not rely on a traditional software bug like a buffer overflow or a misconfigured server. It exploits the way an LLM processes instructions, turning the model’s own capabilities into an exfiltration channel. As the researchers argue, LLM agent integrations have introduced entirely new exploit primitives, attack paths that simply did not exist before these tools were connected to internal systems.
“The threat model for prompt injection is fundamentally different from what most security teams are used to,” said Dan McInerney, a lead threat researcher at Protect AI, in a May 2026 briefing on LLM agent risks. “You are not patching a binary. You are trying to constrain a system that was designed to follow instructions, including malicious ones.”
The tracking gap for AI threats
The Cybersecurity and Infrastructure Security Agency maintains its Known Exploited Vulnerabilities (KEV) Catalog, a dataset listing flaws with confirmed evidence of active exploitation in the wild. Federal civilian agencies use it to set patch priorities, and it serves as an authoritative signal that a vulnerability is not theoretical but actively weaponized.
The KEV Catalog does not currently include a distinct category for LLM prompt injection flaws. As of May 2026, CISA has not publicly announced plans to add AI-specific categorization to the catalog, though the agency has signaled broader interest in AI security through its joint guidance documents and its participation in cross-agency AI risk initiatives. That means even if a prompt injection vulnerability were being exploited at scale, it might not appear in the catalog in a way that flags it as an AI-specific issue. Organizations scanning the KEV for signals about AI risk would find the data incomplete for that purpose.
No public data from NIST or CISA quantifies how many AI-related vulnerabilities have been delayed by the NVD backlog specifically. Without a breakdown by technology category, it is impossible to say whether AI system flaws are disproportionately stuck in the queue or simply caught in the same bottleneck as traditional software bugs. That ambiguity is itself a problem: security leaders cannot measure a risk they cannot see in the data.
The real-world damage from EchoLeak-style attacks also remains unquantified. The preprint provides a detailed technical walkthrough, but no government agency or major incident response firm has published a report confirming widespread exploitation of this specific chain. The CVE designation confirms recognition; it does not confirm mass abuse. Whether attackers have already adopted the technique at scale or whether it remains a proven risk awaiting broader uptake is an open question as of spring 2026.
What security teams should do now
Waiting for the NVD to catch up is not a viable strategy. Security teams should cross-reference patch priorities against CISA’s KEV listings rather than relying on full NVD enrichment. If a vulnerability appears in the KEV Catalog, it belongs at the top of the remediation queue regardless of whether its NVD record has complete scoring or product metadata. Teams can supplement NVD data with vendor advisories, threat intelligence feeds, and commercial enrichment services like VulnCheck that have stepped in to fill gaps left by the backlog.
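That cross-referencing step can be automated. A minimal sketch in Python, assuming a KEV-shaped JSON feed of the kind CISA publishes (a `vulnerabilities` array with a `cveID` per entry); the sample entries and CVE identifiers below are illustrative placeholders, not real catalog content:

```python
import json

# Minimal slice of a KEV-shaped catalog. Real usage would load the full
# JSON feed from CISA; these entries and IDs are illustrative placeholders.
KEV_SAMPLE = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2025-0001"},
    {"cveID": "CVE-2025-0002"}
  ]
}
""")

def prioritize(scan_findings, kev_catalog):
    """Split scanner findings into KEV-listed (patch first) and the rest.

    KEV presence is a confirmed-exploitation signal, so a finding there
    outranks anything still waiting on NVD enrichment, scored or not.
    """
    kev_ids = {entry["cveID"] for entry in kev_catalog["vulnerabilities"]}
    urgent = [cve for cve in scan_findings if cve in kev_ids]
    remaining = [cve for cve in scan_findings if cve not in kev_ids]
    return urgent, remaining

# e.g. CVE IDs exported from a vulnerability scanner
findings = ["CVE-2025-0002", "CVE-2024-9999"]
urgent, remaining = prioritize(findings, KEV_SAMPLE)
print(urgent)      # KEV-listed: remediate first
print(remaining)   # fall back to vendor advisories and enrichment feeds
```

The design choice is deliberate: KEV membership alone drives the split, so the triage still works when an NVD record lacks severity scores or product metadata.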
Enterprises running LLM agents in production need to treat prompt injection and data exfiltration as first-class risks, not edge cases. That starts with mapping where LLM outputs are consumed by downstream systems, identifying which connectors have access to sensitive data stores, and enforcing least-privilege access for every agent-integrated account. Where possible, teams should build explicit allowlists for actions an LLM agent can trigger rather than granting broad, implicit trust to generated instructions.
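The allowlist principle can be enforced at the boundary where generated instructions become actions. A minimal sketch, with hypothetical action names and paths; a real deployment would wire this check into whatever tool-dispatch layer the agent framework exposes:

```python
# Explicit allowlist: actions the agent may trigger, each with the narrowest
# scope granted. Anything absent is denied by default. Names and paths here
# are hypothetical examples, not a real framework's API.
AGENT_ALLOWLIST = {
    "search_docs": {},
    "summarize_file": {"allowed_dirs": ("/srv/public-docs",)},
}

class ActionDenied(Exception):
    pass

def dispatch(action: str, params: dict):
    """Gate a model-proposed action against the allowlist (default deny)."""
    if action not in AGENT_ALLOWLIST:
        raise ActionDenied(f"action {action!r} not on allowlist")
    policy = AGENT_ALLOWLIST[action]
    if "allowed_dirs" in policy:
        path = params.get("path", "")
        if not any(path.startswith(d) for d in policy["allowed_dirs"]):
            raise ActionDenied(f"path {path!r} outside allowed directories")
    # Hand off to the real tool implementation here.
    return action, params

# A benign call passes; an exfiltration-style call is refused.
dispatch("search_docs", {"query": "patch policy"})
try:
    dispatch("send_email", {"to": "attacker@example.com"})
except ActionDenied as exc:
    print(exc)
```

Default deny is the point: because a prompt-injected model will happily propose new actions, trust lives in the dispatch layer, not in the generated instructions.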
Segmenting AI workloads can limit blast radius. Isolating LLM agents that handle confidential data from those interacting with untrusted external content means a successful prompt injection against one system does not automatically compromise another. Logging and monitoring should be tuned to catch unusual agent behavior: unexpected data access patterns, repeated attempts to export large volumes of information, or queries that deviate sharply from an agent’s normal operating profile. Because EchoLeak-style chains emphasize minimal user interaction, anomaly detection on backend activity matters as much as traditional endpoint monitoring.
Why the Vulnpocalypse window keeps widening
The uncomfortable reality is that standard vulnerability metrics now understate certain categories of exposure. CVSS scores derived from incomplete NVD records do not capture the full threat landscape, and the formal cataloging process has not kept pace with the speed at which AI exploit research is advancing. Security leaders communicating risk to executives and boards should be direct about these limitations rather than presenting dashboard numbers as comprehensive.
Building contingency processes that assume delayed vulnerability intelligence, and that explicitly account for AI-specific attack paths, is no longer a forward-looking recommendation. It is a description of what the current environment demands. The NVD backlog is unlikely to resolve quickly. AI agent deployments are accelerating. The window in which defenders can close the gap between those two trends is narrowing, and organizations that treat the Vulnpocalypse as someone else’s problem are the ones most likely to be caught off guard.
*This article was researched with the help of AI, with human editors creating the final content.