Morning Overview

New AI models raise fears of faster, more automated cyberattacks

Government agencies on both sides of the Atlantic are warning that artificial intelligence is accelerating the speed, scale, and sophistication of cyberattacks. The UK National Cyber Security Centre assesses that AI will “almost certainly” increase the volume and impact of cyber threats over the next two years, while the U.S. National Institute of Standards and Technology has published a formal taxonomy of attacks targeting AI systems themselves. At the same time, reporting by The Associated Press says Microsoft and OpenAI have described state-linked actors using generative AI tools to support parts of their cyber and influence operations, a development that suggests the risk is moving from theory toward real-world use.

AI-Powered Reconnaissance and Social Engineering


The most immediate concern is not some futuristic autonomous weapon but something far more mundane: better phishing emails and faster target research. The NCSC’s assessment on the near-term impact of AI on cyber threats found that reconnaissance and social engineering are becoming more effective, more efficient, and harder to detect when AI tools assist attackers. That judgment carries weight because it comes from a UK government cyber agency assessing threat activity, not from a vendor selling security products.

What makes this shift dangerous is the combination of speed and personalization. AI models can scrape public data, synthesize a target’s professional history, writing style, and organizational relationships, then generate convincing lure messages in seconds. Traditional phishing campaigns required manual effort that limited their reach. AI removes that bottleneck. The NCSC also flagged faster analysis of exfiltrated data as a growing concern, meaning that once attackers breach a network, they can sort through stolen files and identify high-value information far more quickly than a human analyst could.

These trends align with the NCSC’s wider public reporting and guidance, including its reporting portal, which compiles cyber incident information and resources for organizations. The pattern is consistent: attackers who previously relied on generic lures now tailor messages to specific roles, projects, and even internal jargon, making it harder for employees to distinguish legitimate communication from malicious outreach.

From Phishing to Exploit Development


The threat extends well beyond social engineering. A separate NCSC report covering AI’s impact through 2027 details how AI is already being used for vulnerability research and exploit development, not just for crafting deceptive messages. Attackers are feeding AI models information about software targets and receiving guidance on potential weaknesses, cutting the time between discovering a flaw and weaponizing it.

The same report notes that AI is being applied to the creation of basic malware. This does not mean AI is writing sophisticated zero-day exploits from scratch, at least not yet. But it does mean that less-skilled attackers can now produce functional malicious code with AI assistance, effectively lowering the barrier to entry for cybercrime. The NCSC assesses that AI will “almost certainly” make elements of cyber intrusion more effective and efficient, increasing both the frequency and intensity of threats. That language, “almost certainly,” underscores how seriously the agency views the trajectory.

Over the medium term, the NCSC expects AI tools to support more automated scanning for misconfigurations, more precise selection of high-value targets inside compromised networks, and more resilient command-and-control infrastructure. In practice, that could mean intrusions that adapt in real time to defenders’ responses, shifting tactics faster than human operators can track.

NIST Maps the Attack Surface of AI Itself


While the NCSC focuses on how attackers use AI as a tool, NIST has turned its attention to a related but distinct problem: how AI systems themselves become targets. The agency’s publication AI 100-2 E2025, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” establishes a standardized framework for classifying threats to machine learning systems. The taxonomy covers the entire AI lifecycle, from data collection through deployment, and catalogs attacker goals at each stage.

The attack categories NIST identifies include evasion, where adversaries manipulate inputs to fool a deployed model; poisoning, where training data is corrupted to alter a model’s behavior; and privacy breaches, where attackers extract sensitive information embedded in a model’s parameters. NIST frames these categories as reflecting a “security community consensus” on AI-enabled attack surfaces, a deliberate choice that signals broad agreement among researchers rather than a single agency’s opinion. The Information Technology Laboratory at NIST developed the publication, drawing on its role as the federal government’s primary standards body for cybersecurity and information technology.
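To make the evasion category concrete, the sketch below shows a gradient-based input perturbation against a toy logistic-regression classifier. It is an illustrative assumption only, not an example drawn from the NIST publication: the model, weights, starting input, and epsilon value are all hypothetical, chosen so the effect is visible in a few lines of Python.

```python
import numpy as np

# Minimal sketch of an "evasion" attack in NIST's sense: a small, targeted
# input perturbation (FGSM-style) that flips a deployed model's decision.
# The toy logistic-regression model and data are hypothetical illustrations.

rng = np.random.default_rng(42)
n_features = 20

# Pretend this is the deployed model: fixed weights and bias.
w = rng.normal(size=n_features)
b = 0.0

def predict_proba(x: np.ndarray) -> float:
    """Probability the model assigns to class 1 (e.g., 'benign')."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Start from an input the model confidently labels class 1.
x = 0.2 * np.sign(w) + 0.05 * rng.normal(size=n_features)
print(f"original score:  {predict_proba(x):.3f}")   # typically well above 0.5

# For logistic regression, the gradient of the class-1 loss with respect to
# the input points along -w, so stepping each feature by eps in that
# direction pushes the score toward the opposite class while keeping the
# per-feature change small.
eps = 0.25
x_adv = x - eps * np.sign(w)
print(f"perturbed score: {predict_proba(x_adv):.3f}")  # typically below 0.5
```

The same pattern carries over to larger image or text classifiers, where the perturbation is computed from the deployed model’s gradients rather than a known weight vector, which is why NIST treats evasion as a deployment-stage risk.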

This taxonomy matters for a practical reason that most coverage overlooks. Without agreed-upon definitions, defenders cannot measure whether their protections actually work. If one organization calls something “data poisoning” and another calls it “training-set manipulation,” comparing defenses or sharing threat intelligence becomes difficult. NIST’s framework gives the security community a common vocabulary, which is a prerequisite for building effective, interoperable defenses and aligning research investments around the most consequential risks.
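As a small illustration of why shared vocabulary matters in practice, the hypothetical sketch below tags incident reports with a common set of labels loosely modeled on NIST’s high-level categories. The report fields, class names, and aggregation logic are assumptions for demonstration, not structures defined by NIST.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: a shared vocabulary for adversarial-ML incidents,
# loosely modeled on the high-level categories named in NIST AI 100-2
# (evasion, poisoning, privacy). The report structure is hypothetical.

class AMLAttackClass(Enum):
    EVASION = "evasion"      # manipulated inputs fool a deployed model
    POISONING = "poisoning"  # corrupted training data alters behavior
    PRIVACY = "privacy"      # model leaks sensitive training information

@dataclass
class IncidentReport:
    source_org: str
    attack_class: AMLAttackClass
    lifecycle_stage: str     # e.g., "training" or "deployment"
    summary: str

# Two organizations describing the same technique with the same label can
# aggregate and compare reports instead of arguing over terminology.
reports = [
    IncidentReport("org-a", AMLAttackClass.POISONING, "training",
                   "Mislabeled samples slipped into a crowd-sourced dataset"),
    IncidentReport("org-b", AMLAttackClass.POISONING, "training",
                   "Training-set manipulation via a compromised data vendor"),
]

by_class: dict[AMLAttackClass, list[IncidentReport]] = {}
for r in reports:
    by_class.setdefault(r.attack_class, []).append(r)
print({k.value: len(v) for k, v in by_class.items()})  # {'poisoning': 2}
```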

Standardization also feeds directly into workforce development. The National Initiative for Cybersecurity Education helps define roles and skills for cybersecurity professionals, and its work increasingly intersects with AI security. Clear terminology around adversarial machine learning makes it easier to design training curricula, certification programs, and job descriptions that reflect the real attack surface organizations now face.

Nation-State Adversaries Already Using Generative AI


The theoretical risks outlined by government agencies have already crossed into practice. Microsoft disclosed that U.S. rivals have begun using generative AI in offensive cyber operations, according to reporting tied to joint efforts by Microsoft and OpenAI to identify and disrupt state-linked accounts. Those accounts were allegedly using AI services to accelerate research on targets, refine spearphishing content, and explore ways to evade detection.

This development matters because it confirms that the gap between AI capability and AI exploitation is closing faster than many analysts expected. Nation-state groups typically operate at the top of the sophistication ladder, but generative AI tools are broadly accessible. If state actors are already integrating these tools into their workflows, criminal groups with fewer resources will follow the same path with even less restraint. The NCSC’s own forward-looking assessment aligns with this trajectory, projecting that AI will increase the frequency and intensity of threats as the technology matures and spreads across both advanced persistent threat groups and financially motivated actors.

For defenders, the implication is that threat models must now assume AI-augmented adversaries as a baseline, not a niche concern. Security operations centers will have to contend with more polished lures, more rapid exploitation of newly disclosed vulnerabilities, and potentially more adaptive malware families that can be tuned on demand.

Defensive Standards Lag Behind Offensive Innovation


One assumption worth challenging is the idea that standardized mitigations, like those NIST proposes, will keep pace with offensive applications of AI. The taxonomy in AI 100-2 E2025 is rigorous and necessary, but taxonomies describe problems; they do not solve them. Adoption of common frameworks across the private sector, allied governments, and international partners takes years, and attackers iterate in months or weeks. During that lag, adversaries can probe for gaps between what standards recommend and what organizations have actually implemented.

There is also a structural imbalance between offense and defense. Attackers need only find one overlooked configuration, one unpatched system, or one employee who clicks the wrong link. Defenders, by contrast, must secure sprawling digital estates, integrate legacy systems, and maintain compliance with evolving regulatory regimes. AI does not change this asymmetry; it amplifies it. Offensive actors can use generative models to test thousands of variations of an attack path or phishing template, while defenders struggle to retrofit AI-aware controls into existing tools and processes.

Bridging this gap requires more than guidance documents. It demands sustained investment in AI-literate security teams, integration of adversarial testing into model development, and closer collaboration between entities like the NCSC and NIST. Intelligence-driven assessments from the UK side, such as the NCSC’s projections on AI-enabled intrusion, can inform where U.S. standards bodies prioritize mitigations. Conversely, NIST’s structured view of AI attack surfaces can help operational agencies and industry partners translate high-level warnings into concrete controls.

Ultimately, the message from both sides of the Atlantic is consistent: AI is not only a tool for defenders but also a powerful accelerator for attackers. As reconnaissance, social engineering, exploit development, and adversarial machine learning all evolve under the influence of AI, organizations that treat these developments as distant or hypothetical risk being outpaced. Aligning intelligence, standards, and workforce skills around this new reality will determine whether the next wave of AI-driven cyber threats overwhelms existing defenses or becomes a catalyst for a more resilient digital ecosystem.

*This article was researched with the help of AI, with human editors creating the final content.