AI-assisted phishing has made the ‘first click’ nearly impossible to spot, and 28% of exploits now arrive before patches exist

In early 2024, a finance employee at a multinational engineering firm in Hong Kong joined a video call with what appeared to be the company’s chief financial officer and several colleagues. Every face on the screen was a deepfake. By the time the deception was discovered, the employee had authorized transfers totaling roughly $25 million. The incident, first reported by CNN, was not a traditional phishing attack, but it demonstrated something security teams had been warning about for months: generative AI has made social engineering so convincing that even cautious, trained professionals can be fooled.

That case involved video. The far more common vector is email, and the problem there is accelerating. AI-crafted phishing messages now arrive free of the grammatical stumbles and generic phrasing that once made them easy to spot. At the same time, a growing share of the software vulnerabilities those messages are designed to exploit is being weaponized before vendors can ship a fix. The result is a security environment where the traditional defenses on both sides of the equation (employee vigilance and timely patching) are failing simultaneously.

The shrinking patch window, by the numbers

The Cybersecurity and Infrastructure Security Agency maintains the Known Exploited Vulnerabilities (KEV) catalog, the federal government’s authoritative list of software flaws confirmed to be under active attack. Every entry must meet a documented exploitation threshold before inclusion, making the catalog a ground-truth record of what threat actors are actually using in the field. Under Binding Operational Directive 22-01, federal civilian agencies are required to remediate listed flaws within prescribed deadlines.

Analysis of KEV entries cross-referenced with vendor patch timelines shows that roughly 28% of cataloged vulnerabilities were exploited before a patch was publicly available. Research published by Carnegie Mellon University’s Software Engineering Institute, which examined exploit timelines across the vulnerability lifecycle, reinforces the pattern: threat actors are not waiting for proof-of-concept code to circulate on forums. They are racing vendors to the finish line, and in more than one out of four confirmed cases, they arrive first.
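For illustration, here is a minimal Python sketch of the kind of cross-referencing that figure rests on. It assumes the analyst has separately assembled first-exploitation and patch-availability dates for each CVE (the KEV feed itself records neither), and the feed URL and field names reflect CISA's public JSON feed at the time of writing. The date values shown are placeholders, not verified data.

```python
# Illustrative sketch only: the exploitation/patch timeline data is a
# hypothetical analyst-built dataset, not something the KEV feed provides.
from datetime import date

import requests

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

# Hypothetical per-CVE timelines: CVE ID -> (first known exploitation, patch available)
timelines: dict[str, tuple[date, date]] = {
    "CVE-2023-34362": (date(2023, 5, 27), date(2023, 5, 31)),  # placeholder values
    # ... one entry per KEV-listed CVE the analyst can document ...
}

# Pull the current set of confirmed-exploited CVE IDs from the KEV catalog
kev_cves = {
    entry["cveID"]
    for entry in requests.get(KEV_FEED, timeout=30).json()["vulnerabilities"]
}

documented = [cve for cve in kev_cves if cve in timelines]
pre_patch = [cve for cve in documented if timelines[cve][0] < timelines[cve][1]]

if documented:
    share = len(pre_patch) / len(documented)
    print(f"{len(pre_patch)}/{len(documented)} documented KEV CVEs "
          f"({share:.0%}) were exploited before a patch was available")
```

How "patch available" is defined in that dataset, and how many KEV entries can be documented at all, is exactly where different analyses diverge, a point taken up later in this piece.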

The real-world consequences are not abstract. The mass exploitation of Progress Software’s MOVEit Transfer flaw in mid-2023 compromised data from more than 2,700 organizations and affected over 93 million individuals, according to tallies maintained by security researcher Brett Callow. Citrix Bleed, disclosed in October 2023, was exploited by ransomware groups within days. In January 2024, Ivanti disclosed two zero-day vulnerabilities in its Connect Secure VPN appliances that were already under active exploitation by a suspected state-sponsored group, prompting CISA to issue an emergency directive ordering federal agencies to disconnect affected products entirely.

Why AI-generated phishing defeats traditional defenses

Phishing has always depended on social engineering, but generative AI has changed both the economics and the quality of attacks. Large language models can produce polished, context-aware messages that mimic internal corporate tone, reference real projects, and address recipients by name and title. IBM’s X-Force Threat Intelligence Index found that phishing and the exploitation of public-facing applications were the top initial access vectors in 2023, a ranking that held steady into 2024. Separately, research from SlashNext documented a 1,265% increase in phishing emails in the period following the broad availability of ChatGPT.

Traditional email security gateways rely on signature matching, domain reputation, and known-bad indicators. AI-generated phishing sidesteps all three. Each message can be unique, sent from freshly registered domains, and free of the telltale patterns that rule-based systems flag. For an employee scanning a busy inbox, the difference between a genuine request from a colleague and a weaponized lure has effectively vanished.

The scale problem compounds the quality problem. AI allows attackers to personalize thousands of messages with minimal effort, tailoring content to specific departments, roles, or even ongoing projects scraped from LinkedIn profiles and corporate websites. Generic awareness training that teaches employees to look for obvious red flags (misspellings, suspicious sender addresses, urgent wire-transfer requests) loses its value when every lure references a real initiative and appears to come from a trusted partner. Cognitive fatigue sets in, and even well-trained staff begin defaulting to trust simply to keep up with their workload.

This matters because phishing remains the primary delivery mechanism for initial access. Once an attacker gains a foothold through a clicked link or opened attachment, they can deploy payloads targeting unpatched vulnerabilities, including flaws sitting in the KEV catalog or, worse, zero-days with no patch at all. A nearly undetectable entry point combined with a shrinking remediation window creates compounding risk that neither user training nor perimeter defenses can fully address alone.

What remains genuinely uncertain

Several claims circulating in industry commentary deserve caution. The precise share of KEV-listed vulnerabilities exploited before a patch exists depends on how researchers define “patch availability” and whether they count vendor advisories, partial mitigations, or only full remediation packages. Different analytical frameworks yield different percentages. The 28% figure reflects one credible methodology applied to KEV data, but it is not a number CISA itself publishes as an official statistic.

Attribution is another gray area. Threat-intelligence firms have flagged AI-generated lures in campaigns linked to specific nation-state actors, but the evidence is often circumstantial. Language style, infrastructure reuse, and targeting patterns can all be spoofed, and no public government report reviewed for this article has drawn a precise line between state-sponsored and financially motivated use of AI in phishing operations.

There is also an open question about whether AI will permanently favor attackers. Some researchers argue that the same models used to craft phishing emails can be turned around to detect them, analyzing subtle linguistic cues or behavioral anomalies at machine speed. Others caution that attackers can iterate more freely, unconstrained by the regulatory and ethical guardrails that slow defensive deployment. As of mid-2026, there is not enough longitudinal data to declare a winner in that arms race.

What security teams should actually do now

For organizations trying to prioritize limited resources, the evidence points toward a two-front strategy.

Patch what is confirmed exploited, first. The KEV catalog exists precisely to answer the question “what should we fix today?” If a vulnerability appears there, it has been confirmed as actively weaponized, and remediating it should take precedence over theoretical weaknesses that carry high severity scores but no evidence of real-world exploitation. Automating KEV-based patch prioritization, rather than relying solely on CVSS ratings, aligns defensive effort with actual attacker behavior.
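As a sketch of what that automation can look like, the following Python snippet checks vulnerability-scanner findings against the KEV feed and sorts confirmed-exploited CVEs, ordered by CISA's remediation due date, ahead of everything else. The feed URL and field names reflect CISA's published JSON feed at the time of writing; the scanner export format and the sample findings are hypothetical.

```python
# Minimal sketch of KEV-first patch prioritization; scanner export format is assumed.
import requests

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def prioritize(findings: list[dict]) -> list[dict]:
    """Sort findings: KEV-listed CVEs first (earliest CISA due date first), then by CVSS."""
    kev = {
        v["cveID"]: v["dueDate"]
        for v in requests.get(KEV_FEED, timeout=30).json()["vulnerabilities"]
    }

    def sort_key(finding: dict):
        cve = finding["cve"]
        if cve in kev:
            # KEV entries lead; within KEV, earlier due dates come first
            return (0, kev[cve], 0.0)
        # Non-KEV findings fall back to CVSS severity, highest first
        return (1, "", -finding.get("cvss", 0.0))

    return sorted(findings, key=sort_key)

# Hypothetical scanner export
findings = [
    {"asset": "vpn-gw-01", "cve": "CVE-2023-4966", "cvss": 9.4},   # Citrix Bleed, KEV-listed
    {"asset": "app-02", "cve": "CVE-2024-99999", "cvss": 9.8},     # made-up ID, high CVSS, not in KEV
]
for f in prioritize(findings):
    print(f["asset"], f["cve"])
```

The ordering encodes the argument above: a confirmed-exploited flaw with a modest severity score outranks a theoretical weakness with a 9.8.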

Layer email defenses beyond signatures. Organizations still relying primarily on gateway-level filtering are fighting the last war. Behavioral analysis tools that flag anomalies in message metadata, sending patterns, and request types offer a better chance of catching AI-generated lures. Internal phishing simulations should evolve to include AI-crafted scenarios, not just the clumsy templates that employees have learned to recognize. And for high-value actions like wire transfers or credential resets, out-of-band verification (a phone call, a Slack message, a walk down the hall) should be policy, not optional.
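The heuristic below is a deliberately simplified illustration of the kinds of metadata and request-type signals such behavioral tools weigh; commercial products use far richer models, and every field name, keyword, and threshold here is hypothetical.

```python
# Simplified illustration of behavioral email scoring; all signals and weights are hypothetical.
from dataclasses import dataclass

PAYMENT_KEYWORDS = ("wire transfer", "payment details", "invoice", "urgent", "gift card")

@dataclass
class Message:
    sender: str
    reply_to: str
    body: str
    first_time_sender: bool          # sender never seen by this recipient before
    display_name_matches_exec: bool  # display name impersonates a known executive

def risk_score(msg: Message) -> int:
    """Accumulate simple anomaly signals; higher scores warrant out-of-band verification."""
    score = 0
    if msg.reply_to and msg.reply_to != msg.sender:
        score += 2   # reply-to diverges from the visible sender
    if msg.first_time_sender:
        score += 1
    if msg.display_name_matches_exec and msg.first_time_sender:
        score += 3   # executive impersonation from an unknown address
    if any(kw in msg.body.lower() for kw in PAYMENT_KEYWORDS):
        score += 2   # financial or urgency language in the request
    return score

msg = Message(
    sender="cfo@look-alike-corp.example",
    reply_to="attacker@freemail.example",
    body="Urgent: please process this wire transfer before noon.",
    first_time_sender=True,
    display_name_matches_exec=True,
)
if risk_score(msg) >= 5:
    print("Flag for review and require out-of-band verification")
```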

Assume the perimeter will be breached. When the first click is nearly impossible to prevent and a meaningful share of exploits arrive before patches exist, detection and response inside the network become the critical layer. Endpoint detection and response tools, network segmentation, and identity-based access controls all reduce the blast radius of an initial compromise. The goal shifts from “stop every attack at the door” to “limit what an attacker can reach once inside.”

None of this eliminates risk. But anchoring decisions in confirmed exploitation data rather than theoretical threat models, and acknowledging that AI has fundamentally changed the phishing landscape, gives defenders a more honest and more effective starting point than the one most organizations are working from today.

*This article was researched with the help of AI, with human editors creating the final content.