OpenAI widens access to cyber tools as misuse risks rise, Axios reports

OpenAI is opening its artificial intelligence technology to a broader set of cybersecurity researchers and defenders, according to a report from Axios, a move that arrives while federal agencies are still running tests to determine whether AI actually makes networks safer. The expansion, which Axios says includes wider availability of tools that can help identify software vulnerabilities and analyze threats, comes at a moment when the rules governing such technology are in flux and the threat landscape is shifting fast.

The company has not published a detailed public announcement outlining which tools are affected or what new access criteria apply. But the timing is notable: OpenAI has spent the past two years documenting how state-linked hacking groups have attempted to exploit its own models, and the broader cybersecurity industry is racing to embed generative AI into defensive products before adversaries gain a lasting edge.

Federal testing is underway but incomplete

The most concrete government effort to measure AI’s value in cyber defense is a pilot program run by the Cybersecurity and Infrastructure Security Agency (CISA). The pilot is designed to test whether AI and large language model tools can detect software flaws more effectively than traditional, non-AI scanning methods, using a structured, evidence-based framework that compares results side by side rather than relying on vendor marketing.

That distinction matters. Private cybersecurity companies have made sweeping claims about AI-powered detection for years, but CISA’s pilot is one of the first federally controlled experiments that could produce data rigorous enough to shape procurement standards and security policy. As of spring 2025, however, the agency has not released final results. Until it does, the question of whether AI tools genuinely outperform conventional scanners in government environments remains open.

For security teams weighing whether to adopt AI-assisted tools now, the pilot is a useful reference point but not yet a verdict. Organizations that move early will need to run their own controlled comparisons and treat AI as a layer that supports human analysts rather than a replacement for established workflows.
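In practice, a controlled comparison of that kind can be as simple as running both toolchains against the same codebase seeded with known vulnerabilities and scoring each tool’s findings against that ground truth. The sketch below is a minimal illustration of the idea, not any agency’s methodology: it assumes each tool’s findings have already been exported to JSON files containing a file path and weakness identifier per finding, and all file names and field names are hypothetical.

```python
import json


def load_findings(path: str) -> set[tuple[str, str]]:
    """Load a tool's findings as (file, weakness_id) pairs from a JSON export."""
    with open(path) as f:
        return {(item["file"], item["weakness_id"]) for item in json.load(f)}


def score(findings: set[tuple[str, str]], ground_truth: set[tuple[str, str]]) -> dict:
    """Score one tool's findings against seeded, known-vulnerable locations."""
    true_positives = findings & ground_truth
    precision = len(true_positives) / len(findings) if findings else 0.0
    recall = len(true_positives) / len(ground_truth) if ground_truth else 0.0
    return {
        "precision": round(precision, 2),
        "recall": round(recall, 2),
        "missed": sorted(ground_truth - findings),
    }


# Hypothetical exports: one from a conventional scanner, one from an AI-assisted tool.
ground_truth = load_findings("seeded_vulnerabilities.json")
for name, path in [("baseline_scanner", "baseline_findings.json"),
                   ("ai_assisted", "ai_findings.json")]:
    print(name, score(load_findings(path), ground_truth))
```

The point is less the scoring math than the discipline it enforces: both tools see the same inputs, and each is judged against ground truth rather than against marketing claims.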

The regulatory picture is shifting

A presidential executive order issued in January 2025, aimed at removing barriers to American AI leadership, set the tone for the current policy environment. The order directed agencies to reduce regulatory friction around AI development and deployment, effectively replacing the more cautious approach laid out in the Biden administration’s October 2023 AI executive order, which the new action revoked.

That shift created a more permissive climate for companies like OpenAI to expand tool access across sectors, including cybersecurity. But the January 2025 order did not include specific guardrails for offensive cyber applications or dual-use AI systems. Executive orders set direction and assign responsibilities; they do not, on their own, rewrite the rules. Follow-up guidance spelling out how the order affects cybersecurity tool distribution, export controls, or access policies has not been consolidated into any single public resource.

Meanwhile, several Department of Homeland Security policy frameworks tied to technology deployment are approaching scheduled review or expiration dates. The practical significance of those timelines is unclear without more detail about which authorities are affected, but the pattern is familiar: vendors can ship product updates in weeks, while federal oversight mechanisms operate on multi-year cycles. When those timelines fall out of sync, the gap tends to be filled by ad hoc guidance or industry self-regulation.

Misuse is documented but hard to measure

OpenAI itself has provided some of the strongest public evidence that its models attract adversarial interest. In February 2024, the company published a threat intelligence report detailing how state-affiliated hacking groups from China, Iran, North Korea, and Russia had attempted to use ChatGPT for tasks like researching vulnerabilities, drafting phishing messages, and writing code. A follow-up report in October 2024 described additional disruption efforts against more than 20 operations that tried to misuse OpenAI’s platforms.

Those disclosures are valuable because they come from a primary source with direct visibility into its own systems. But they also highlight a measurement gap: no federal agency currently tracks AI-assisted cyberattacks in a standardized, public dataset. Private threat intelligence firms have flagged AI-enhanced phishing campaigns and automated malware generation, yet without a common framework for counting and categorizing these incidents, it is difficult to compare the scale of misuse against the defensive gains that broader AI access might deliver.
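To make that gap concrete, a common framework would need at minimum a shared record format, so that incidents counted by different vendors and agencies could be aggregated and compared. The fields below are purely illustrative; no such standardized federal schema exists today.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIAssistedIncident:
    """Illustrative record for counting AI-assisted cyber incidents consistently.

    This schema is hypothetical; it only shows the kind of fields a
    standardized, public dataset would need before the scale of misuse
    could be measured across reporting organizations.
    """
    incident_id: str
    reported_on: date
    reporter: str                      # e.g. vendor, ISAC, or agency filing the report
    attack_category: str               # e.g. "phishing", "malware-generation", "vuln-research"
    ai_role: str                       # e.g. "content-drafting", "code-assistance", "automation"
    model_family: str | None = None    # only if attribution is supported by evidence
    confidence: str = "low"            # strength of the evidence of AI involvement
    notes: list[str] = field(default_factory=list)


# A hypothetical entry, shaped like the phishing activity described in vendor reporting:
example = AIAssistedIncident(
    incident_id="2024-0001",
    reported_on=date(2024, 2, 14),
    reporter="example-threat-intel-firm",
    attack_category="phishing",
    ai_role="content-drafting",
)
```

Without something like this in shared use, each firm’s incident counts reflect its own definitions, and totals cannot be meaningfully compared.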

This is the core tension behind the Axios report. Widening access to powerful AI tools can accelerate defensive research, giving security teams faster ways to find and patch vulnerabilities. It can also lower the skill barrier for attackers, enabling less sophisticated threat actors to punch above their weight. Both dynamics are already playing out, and neither cancels the other.

A crowded and competitive field

OpenAI is not operating in isolation. Microsoft, its largest investor and partner, has integrated GPT-4 into Security Copilot, a product designed to help analysts triage alerts, investigate incidents, and summarize threat intelligence. Google has folded its Gemini models into its own threat intelligence platform. CrowdStrike, Palo Alto Networks, and a growing roster of startups have all embedded generative AI into their defensive toolkits.

The competitive pressure helps explain OpenAI’s timing. As rivals ship AI-powered security products to enterprise and government customers, staying on the sidelines risks ceding a market that could define the next generation of cyber defense. But competition also means that restricting access at one company does little to limit the overall availability of capable AI models. The policy challenge is not about controlling a single vendor’s decisions; it is about establishing norms and standards that apply across an ecosystem where multiple frontier models exist.

Where policy and technology need to meet

For policymakers, the immediate task is straightforward even if the execution is not: finish the federal pilots, publish the data, and use the results to write procurement and deployment standards grounded in evidence rather than vendor promises. CISA’s vulnerability detection pilot is the closest thing to a controlled experiment in this space, and its findings could anchor guidance that agencies, contractors, and private companies all reference.

For the public, the honest summary is that AI’s role in cybersecurity is still being defined. Verified government efforts show serious investment in harnessing AI for defense, but outcome data is not yet available, and the guardrails governing dual-use tools are incomplete. As models grow more capable and more widely distributed, the balance between empowering defenders and enabling attackers will depend less on any single company’s product roadmap and more on how quickly evidence-based oversight can keep pace with the technology it aims to govern.

*This article was researched with the help of AI, with human editors creating the final content.