
UK cyber chief says AI hacking tools can aid defense if secured

When Dr. Richard Horne took the stage at the RSAC conference in late April 2026, the head of the UK’s National Cyber Security Centre did not sugarcoat the threat. Frontier AI models, he told the audience, are already accelerating the discovery of software vulnerabilities at a pace that would have seemed implausible just two years ago. But Horne’s message was not purely a warning. He argued that the same tools giving attackers new speed could tilt the balance toward defenders, on one hard condition: organizations must stop treating basic cyber hygiene as optional.

A race measured in hours, not weeks


Horne’s case, laid out in both his keynote and a detailed NCSC blog post, centers on a specific technical shift. AI models can now surface software flaws that once took skilled human researchers weeks of painstaking work to find. That compression of timelines is a gift to anyone hunting for exploitable bugs, whether they work for a government security agency or a criminal syndicate.

The defensive opportunity, Horne argued, is that organizations can use the same acceleration to find and patch their own weaknesses before attackers strike. But that opportunity only materializes if defenders are ready to act. In his NCSC analysis, he spelled out the non-negotiable baseline: reduce exposure by cutting unnecessary internet-facing services, patch disclosed vulnerabilities within days rather than weeks, and maintain real-time monitoring capable of catching suspicious activity as it happens.
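
To make that patching baseline concrete, the short Python sketch below flags assets that breach a “days rather than weeks” window. The inventory format and the three-day threshold are illustrative assumptions, not an NCSC specification.

    from datetime import date

    # Hypothetical asset inventory: when a vulnerability affecting the
    # asset was disclosed, and when (if ever) it was patched.
    INVENTORY = [
        {"asset": "vpn-gateway", "cve": "CVE-2026-0001",
         "disclosed": date(2026, 4, 20), "patched": date(2026, 4, 22)},
        {"asset": "legacy-crm", "cve": "CVE-2026-0002",
         "disclosed": date(2026, 4, 1), "patched": None},
    ]

    PATCH_SLA_DAYS = 3  # assumed threshold for "days rather than weeks"

    def overdue(entry, today=None):
        """True if the entry breaches the assumed patch window."""
        today = today or date.today()
        end = entry["patched"] or today
        return (end - entry["disclosed"]).days > PATCH_SLA_DAYS

    for e in INVENTORY:
        if overdue(e):
            print(f"patch window breached: {e['asset']} / {e['cve']}")

An unpatched entry counts its age against today’s date, so a forgotten system keeps getting flagged rather than silently dropping off the report.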

None of that advice is new. What is new, according to Horne, is the penalty for ignoring it. When AI compresses the attacker’s timeline from weeks to hours, a sluggish patch cycle stops being a minor risk and becomes an open invitation.

The problem with “vibe coding”


Horne also turned his attention to a practice gaining traction across the software industry: using AI tools to generate or refine code at speed. He borrowed the term “vibe coding,” coined by AI researcher Andrej Karpathy, to describe developers who lean on large language models to produce working software with minimal manual oversight.

The approach can accelerate delivery, Horne acknowledged, but it carries a trap. Without secure-by-design principles baked into the development process, AI-generated code risks introducing new vulnerabilities as fast as it eliminates old ones. His prescription was blunt: treat AI-generated code as untrusted until it has been verified through rigorous review and automated security testing. Speed without discipline, he said, creates more problems than it solves.
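
One way to operationalize that prescription is a continuous-integration gate that refuses to merge AI-generated code until an automated scan has passed over it. The sketch below assumes the open-source Bandit scanner is installed and that generated code lands in a dedicated directory; both are illustrative choices, not Horne’s endorsement of any specific tool.

    import json
    import subprocess
    import sys

    TARGET = "generated_src/"  # hypothetical directory for AI-written code

    # Run Bandit over the generated code and capture its JSON report.
    proc = subprocess.run(
        ["bandit", "-r", TARGET, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    findings = json.loads(proc.stdout).get("results", [])

    for f in findings:
        print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")

    # Treat any finding as a blocker: the code stays untrusted until a
    # human reviews and clears it.
    sys.exit(1 if findings else 0)

A gate like this only catches what the scanner can see; it complements, rather than replaces, the human review Horne called for.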

For development teams already under pressure to ship faster, that message adds friction. But Horne framed it as a necessary cost. Organizations that skip verification are essentially outsourcing their security posture to a model they do not fully understand.

Beyond technical fixes


Horne’s RSAC remarks went beyond patching and code review. He called for coordinated pressure from law enforcement, infrastructure operators, and the private sector to raise the cost of cybercrime. That means disrupting criminal infrastructure, improving cross-border cooperation between police forces, and building faster channels for sharing threat intelligence across critical sectors.

The framing reflects a broader shift in UK cyber policy. The NCSC has increasingly positioned itself not just as a technical advisory body but as a coordinator pushing for collective action. Horne’s argument is that no single company, no matter how well defended, can manage AI-era cyber risk alone. Sector-wide exercises, shared incident reporting, and tighter collaboration with national agencies will shape whether frontier AI ultimately favors attackers or defenders.

Separately, the UK government has released new open-source tools designed to help developers build stronger security into AI models. Published under the Open Government Licence, these resources are intended to lower the barrier for organizations looking to adopt government-developed guidance in their own products.

What the evidence does not yet show


Horne’s “net positive” framing is a judgment call, not a conclusion drawn from published research. The NCSC has not released quantified assessments comparing AI’s offensive impact against its defensive benefits. Without that data, the claim that defenders can come out ahead rests on assumptions about how quickly organizations will adopt the recommended practices, and adoption speed varies enormously across industries, budgets, and geographies.

One AI hacking tool called Mythos has appeared in BBC reporting as an example of the capabilities Horne described. But the NCSC’s own publications do not name Mythos, and the technical details of what it does, who built it, and how it performs in real-world defensive scenarios remain unclear from primary sources. The practical value of any individual tool depends on context: who deploys it, how it is configured, and whether it sits on top of the baseline defenses Horne insists are essential.

There is also a gap between Horne’s recommendations and the reality facing many organizations. Large enterprises with mature security teams may be able to compress patch cycles and deploy AI-assisted monitoring within months. Smaller organizations, public bodies running on tight budgets, or operators of legacy systems could face years of work to reach the same baseline. The global impact of AI on cyber risk will depend heavily on how these slower-moving actors fare, not just on the performance of well-resourced leaders.

What organizations should do now


For IT teams and business leaders trying to act on Horne’s guidance, the starting point is an honest self-assessment. Are internet-facing services minimized? Are patches applied within days of disclosure? Does the organization have monitoring that can detect and respond to threats in real time? If the answer to any of those questions is no, then layering on sophisticated AI tools is unlikely to deliver the defensive advantage Horne described. In that scenario, AI simply helps attackers move faster than defenders can respond.
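
The first of those questions can be sanity-checked cheaply. The sketch below probes a handful of common service ports on a single host; it is a crude stand-in for real attack-surface management, the host name is a placeholder, and it should only ever be pointed at systems you own.

    import socket

    HOST = "host.example"  # placeholder; probe only systems you own
    COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http",
                    443: "https", 3389: "rdp", 5900: "vnc"}

    # Attempt a TCP connection to each port and report the ones that answer.
    for port, name in sorted(COMMON_PORTS.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((HOST, port)) == 0:
                print(f"open: {port}/{name}")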

The second priority is governance around AI-assisted development. Organizations adopting vibe coding or similar practices should pair them with automated security testing, clear secure-by-design standards, and human review of generated code. AI-assisted security tools themselves should be evaluated against concrete metrics, such as time to detect and remediate vulnerabilities, rather than vendor marketing.
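
A metric such as mean time to remediate is straightforward to compute once detection and fix timestamps are recorded, as in this minimal sketch; the record format is a made-up example.

    from datetime import datetime
    from statistics import mean

    # Hypothetical vulnerability records with detection and fix timestamps.
    RECORDS = [
        {"id": "VULN-101", "detected": "2026-04-01T09:00", "fixed": "2026-04-02T17:30"},
        {"id": "VULN-102", "detected": "2026-04-03T11:15", "fixed": "2026-04-07T10:00"},
    ]

    def hours_to_fix(rec):
        """Elapsed hours between detection and remediation."""
        detected = datetime.fromisoformat(rec["detected"])
        fixed = datetime.fromisoformat(rec["fixed"])
        return (fixed - detected).total_seconds() / 3600

    mttr = mean(hours_to_fix(r) for r in RECORDS)
    print(f"mean time to remediate: {mttr:.1f} hours")

Tracked over time, a number like this gives a vendor-neutral way to tell whether an AI-assisted tool is actually shortening the remediation cycle.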

Horne’s core message at RSAC 2026 was not that AI will save defenders or doom them. It was that the outcome depends on choices being made right now. The organizations that move fastest to strengthen their fundamentals, and that engage in the collective defense structures Horne advocates, are the ones most likely to turn AI’s acceleration to their advantage. Those that wait may find the window has already closed.


*This article was researched with the help of AI, with human editors creating the final content.