Morning Overview

U.K. technology secretary warns AI-powered hacking is accelerating fast

Peter Kyle, the U.K.’s technology secretary, has put the country’s cyber establishment on notice. Speaking at the NCSC’s CYBERUK conference, Kyle warned that artificial intelligence is supercharging hacking operations faster than most defenders anticipated, and the window to prepare is shrinking. “AI is already being used to increase the speed and scale of cyber attacks,” Kyle said, underscoring that the threat has moved from theoretical to operational. His warnings, reinforced by a string of assessments from the National Cyber Security Centre, have set the stage for new legislation aimed at hardening Britain’s critical infrastructure before AI-driven intrusions become routine.

The NCSC, the U.K.’s technical authority on cyber security, has concluded that AI will “almost certainly” increase both the volume and the impact of cyber attacks through 2027. That phrase sits at the top of the agency’s probability scale, meaning analysts treat the outcome as near-inevitable. The judgment is laid out in the centre’s detailed analysis of AI-driven cyber risks, which identifies reconnaissance and social engineering as the two areas where attackers gain the most from the technology.

Why the threat is different now

What separates AI-assisted hacking from earlier automation is the combination of speed, scale, and plausibility. AI tools can scan thousands of systems for misconfigurations in minutes, map network vulnerabilities, and then generate phishing messages tailored to individual targets with fluent language and accurate personal detail. The result is that the early stages of an attack, the probing and deception that precede a breach, look far more legitimate than they used to. A convincing email that once took a skilled operator hours to craft can now be produced in seconds and sent to thousands of recipients, each version slightly different.

A forward-looking NCSC assessment extending the timeline to 2027 reinforces the point. It states that AI will continue making cyber intrusion operations more effective and more efficient, increasing both their frequency and intensity. The same report flags a second risk: as organizations rush to adopt AI tools themselves, they inadvertently widen their own attack surface. Integrating third-party AI services into business workflows can expose sensitive data flows, and poorly secured application programming interfaces can hand attackers new routes into systems that were previously walled off.

Frontier models are improving at intrusion tasks

The NCSC has tied its urgency to measurable progress in the most advanced AI models. In guidance aimed at preparing cyber defenders for frontier AI, the agency cited evaluation results from the AI Safety Institute showing rapid improvement on multi-step cyber attack scenarios. Those evaluations, conducted during 2024 and 2025, tested how well leading AI systems could chain together the individual phases of a real intrusion, from initial access through lateral movement to data extraction. The improvement trajectory was steep enough that the NCSC explicitly linked policy urgency to the pace at which model capabilities are advancing.

Still, evaluation benchmarks capture what a model can do in a controlled setting. How quickly criminal or state-sponsored groups translate that capability into real-world operations is a separate question. Attackers need time, resources, and experimentation to operationalize new tools, and some capabilities may remain unused if they prove too complex or costly to deploy at scale. The AISI results were published before March 2026, and no public follow-up covering the months since has appeared, meaning the data may already lag behind the frontier.

Legislation is coming, but details are not settled

The government’s response centers on the Cyber Security and Resilience Bill, first announced in the King’s Speech in July 2024 and fleshed out in a policy statement published in April 2025 by the Department for Science, Innovation and Technology. The NCSC’s own explainer of the proposed legislation outlines three priorities: protecting vital services such as energy, water, and health care; expanding the scope of regulators to cover a wider set of digital providers; and responding to the proliferation of threats that AI accelerates.

The bill sits within a broader national resilience strategy that includes guidance on incident reporting, supply-chain security, and minimum security baselines for operators of essential services. But as of May 2026, the legislation remains at the policy-statement stage. Its final scope, enforcement mechanisms, and compliance timelines have yet to be finalized. Parliamentary debate could narrow or expand its regulatory reach, and further consultation rounds and sector-specific guidance are expected before any obligations take effect. Organizations that operate critical services or supply digital infrastructure in the U.K. would be wise to begin reviewing their cyber resilience posture now rather than waiting for deadlines to be set.

The defensive side of the ledger is thin

The NCSC frames the contest as a race between offensive and defensive applications of AI, but the defensive picture remains largely anecdotal. Some organizations are experimenting with AI for anomaly detection, log analysis, and automated incident triage, yet there is no consolidated public data on adoption rates, effectiveness, or failure modes across the U.K.’s critical sectors. Whether operators have the budgets, talent, and organizational readiness to deploy AI-powered defenses at scale is an open question.

The NCSC has published no official case study or named breach illustrating a confirmed AI-driven attack in the wild. That absence does not mean such attacks have not occurred; the agency’s 2024 Annual Review noted that AI was already being used to enhance phishing campaigns. But the public evidence base still rests more heavily on projections from model evaluations and classified intelligence than on disclosed intrusions that can be independently scrutinized.

Preparing for a regulatory landscape about to shift

The core finding is well supported: AI already lowers the barrier to entry for common cyber attacks, and frontier models are rapidly improving at the kind of multi-step intrusion tasks that once required highly skilled human operators. What remains less certain is how quickly those capabilities will be fully weaponized at scale and how effectively defenders will keep pace. That uncertainty does not diminish the risk. It argues for investment in basic cyber hygiene, closer attention to evolving NCSC guidance, and preparation for compliance obligations that could arrive quickly once the Cyber Security and Resilience Bill moves through Parliament. The NCSC’s regularly updated annual threat overview remains the best public starting point for tracking how the threat develops from here.


*This article was researched with the help of AI, with human editors creating the final content.