Morning Overview

U.K. tech secretary warns AI-driven cyberattacks could hit within months

The success rate of AI models on apprentice-level hacking tasks jumped from roughly 10% to 50% in a single year, according to the U.K. government’s own research. Now Britain’s technology secretary, Peter Kyle, is telling businesses they may have only months before those capabilities translate into real-world attacks.

Kyle, the Secretary of State for Science, Innovation and Technology, issued the warning in a letter directed at small and mid-sized businesses in early 2025. The letter, first reported by British media outlets covering the Department for Science, Innovation and Technology, urged firms to treat AI-augmented cyber threats as imminent rather than theoretical. While the full text has not been published as an official government release, the core message aligns tightly with data from the agency Kyle oversees.

The data behind the warning

The primary evidence comes from the Frontier AI Trends Report, published in May 2025 by the U.K. AI Security Institute (AISI, formerly the AI Safety Institute). The report tracked how frontier AI models perform on structured cybersecurity tasks and found a sharp acceleration:

  • In early 2024, leading models completed apprentice-level cyber tasks at a rate just above 10%.
  • By the time the report was finalized, that figure had climbed to approximately 50%.
  • The length and complexity of cyber tasks AI could sustain were doubling rapidly.
  • Based on the trend line, the institute projected that the first model capable of completing expert-level cyber tasks could emerge as early as 2025.
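The reported jump from roughly 10% to 50% implies a steep growth curve. As a rough illustration only, a naive exponential fit to those two public data points (not AISI's actual methodology, which was not published in detail) gives the implied doubling time:

```python
import math

# Figures from the AISI report as cited above; the exponential-growth
# framing is an illustrative simplification, not AISI's methodology.
start_rate = 0.10   # apprentice-task success rate, early 2024
end_rate = 0.50     # success rate roughly one year later
years = 1.0

# Growth factor over the period and the implied doubling time,
# assuming smooth exponential growth between the two measurements.
growth_factor = end_rate / start_rate                      # 5x in a year
doubling_time = years * math.log(2) / math.log(growth_factor)

print(f"Growth factor per year: {growth_factor:.1f}x")
print(f"Implied doubling time: {doubling_time:.2f} years (~{doubling_time * 12:.0f} months)")
```

Under these assumptions, measured capability doubles roughly every five months, which gives a sense of why the report's authors treated the trend line as urgent, even though real-world progress is unlikely to be this smooth.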

That last point is critical. An AI system that handles apprentice-grade intrusion steps today sits on a trajectory toward orchestrating far more complex attack chains, the kind that currently require skilled human operators working over days or weeks.

The National Cyber Security Centre had already flagged the direction of travel. Its assessment on the near-term impact of AI on the cyber threat, published before the latest AISI data, warned that AI would amplify the volume and effectiveness of attacks while enabling faster exploitation of stolen data. The AISI findings suggest that assessment, written when model performance was still in the low double digits, now understates the pace of change.

What an AI-powered attack actually looks like

For a business owner wondering what this means in practice, the threat is less about a sentient machine breaking through a firewall and more about automation at every stage of an attack. AI can already draft convincing phishing emails tailored to a specific employee’s role and writing style. It can scan publicly exposed systems for known vulnerabilities faster than any human team. And it can analyze stolen databases to identify the most valuable records in seconds rather than hours.

What the AISI benchmarks suggest is coming next is the ability to chain those steps together autonomously: identify a target, craft the lure, exploit a vulnerability, move laterally through a network, and extract data, all with minimal human direction. That shift from AI-assisted attacks to AI-directed attacks is the threshold Kyle’s warning is built around.

Private-sector threat intelligence supports the general trajectory. Major cybersecurity firms have reported a measurable increase in AI-generated phishing campaigns and automated vulnerability scanning throughout 2024 and into 2025. While no public U.K. government dataset currently catalogs confirmed AI-executed breaches tied specifically to the capability growth AISI documented, the building blocks are already in active use by threat actors.

What remains uncertain

The AISI benchmarks measure model performance on structured tasks in controlled environments. Whether a 50% success rate in a lab translates into a 50% success rate against a real corporate network with layered defenses, monitoring tools, and human security staff is a different question. Network complexity, patching cadence, and employee training all create friction that benchmarks do not fully capture.

The projection that expert-level capability could arrive in 2025 is a trend-based estimate, not a guaranteed milestone. If frontier model development slows, or if defensive AI tools improve at a comparable rate, the timeline could stretch. Conversely, a breakthrough in agentic AI reasoning (the ability of models to plan and execute multi-step tasks independently) could compress it further.

Kyle’s “within months” framing should be understood as a political judgment informed by the AISI data, not as a precise technical forecast. It signals that senior officials believe the risk is close enough to warrant public action now, which carries weight for policy and business planning even if the exact date remains unknowable.

Where U.K. warnings fit in the global threat picture

Kyle’s letter does not exist in isolation. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the FBI have issued joint advisories throughout 2024 and into 2025 warning that AI tools are lowering the barrier to entry for cyber threat actors. NATO’s 2024 annual threat assessment similarly flagged AI-augmented cyberattacks as a near-term concern for member states. And private-sector reports from firms such as Mandiant (part of Google Cloud) and Microsoft Threat Intelligence have documented real-world cases of state-linked groups experimenting with large language models to automate reconnaissance and craft social-engineering lures.

The convergence of these warnings across governments and the private sector reinforces the AISI data rather than contradicting it. Where the U.K. assessment stands out is in quantifying the pace of improvement on structured benchmarks, giving the broader international consensus a specific, measurable backbone.

How NCSC tools map to the accelerating threat curve

The uncomfortable reality for small and mid-sized firms is that AI offensive capabilities are improving on a curve that outpaces most organizations’ security refresh cycles. A company that completed a cybersecurity audit a year ago may now face threat profiles that audit was never designed to anticipate.

The NCSC offers two practical starting points. Its Cyber Essentials scheme provides a baseline certification framework covering the five most common attack vectors. A newer cyber security toolkit is tailored specifically to smaller organizations that lack dedicated IT security staff.

Neither tool was designed with the latest AISI findings in mind, but both address the fundamentals that still stop the majority of attacks: patching known vulnerabilities, enforcing multi-factor authentication, controlling administrative privileges, and training staff to recognize social engineering. Those basics will not become obsolete when AI-driven threats arrive. They will simply become the minimum, and any firm that has not reached that minimum is already behind.


*This article was researched with the help of AI, with human editors creating the final content.