Sixteen months after scrapping the federal government’s AI safety rules, the Trump administration is now weighing something stricter than what it tore up.
The White House is actively discussing an executive order that would create an AI working group with the power to review and potentially vet new AI models before companies can release them to the public, according to reporting first published by The New York Times on May 4, 2026. Reuters subsequently confirmed the reporting, citing unnamed U.S. officials and people familiar with the discussions. No draft text has been released, and the White House has not commented publicly. But the discussions mark a genuine shift in the administration's posture, not idle speculation, and they carry a striking irony: the administration that dismantled federal AI oversight in January 2025 is now exploring a framework that could go further than the one it removed.
The federal government is already building the infrastructure
The proposed executive order would not emerge from a vacuum. NIST’s Center for AI Standards and Innovation, known as CAISI, has already signed agreements with Google DeepMind, Microsoft, and xAI to conduct pre-deployment evaluations and targeted research assessing frontier AI capabilities and national security risks. Those agreements, announced in May 2026, mean the federal government is already testing powerful AI systems before they reach the public, albeit on a voluntary basis.
A binding executive order would convert that voluntary cooperation into a requirement, and potentially a more demanding one. Under the Biden administration’s Executive Order 14110, signed on October 30, 2023, developers of powerful dual-use AI models had to share safety-testing results with the government, a reporting requirement grounded in the Defense Production Act. The order treated AI safety as a national priority and directed agencies to develop standards and evaluation tools, but it did not condition a model’s release on government approval. Companies reported their results; they did not need a green light to ship.
The new proposal, as described in reporting, would flip that dynamic. Instead of disclosing test outcomes after the fact, companies would need to satisfy government reviewers that their models are safe enough for release. That is the difference between a reporting obligation and a licensing gate.
Why the reversal matters
The policy history makes this shift especially notable. President Trump rescinded Biden’s AI safety order on January 20, 2025, the day he took office, a rescission documented in NIST’s own records. That rescission left a regulatory gap: no federal mandate compelled AI developers to report safety data or submit to government review. For more than a year, the most powerful AI systems in the world were developed and released without any formal federal oversight requirement.
During that gap, the AI landscape changed. Frontier models grew more capable, with companies racing to build systems that can autonomously write code, conduct research, and operate across the internet with minimal human supervision. Concerns about AI-enabled threats, from sophisticated cyberattacks to the potential for models to assist in designing biological or chemical weapons, have moved from theoretical discussions into active policy debates in Washington and allied capitals. The EU’s AI Act entered its enforcement phase, and several U.S. states pursued their own regulatory frameworks, creating a patchwork that industry leaders have publicly said they want federal policy to supersede.
Against that backdrop, the administration’s shift from deregulation to potential pre-release vetting suggests a recognition that the risks have outpaced the laissez-faire approach. Whether the motivation is national security, global competitiveness, or both remains unclear from the public record.
Critical questions without answers
The proposal is still more outline than policy, and several fundamental questions remain unresolved.
No one outside the administration knows what “proving” a model is safe would look like in practice. Would companies need to pass standardized evaluations? Submit to government-run red-team exercises? Clear a formal certification process? The CAISI agreements with Google DeepMind, Microsoft, and xAI describe “pre-deployment evaluations,” but neither NIST nor the companies have detailed the criteria or thresholds those evaluations use.
The structure of the proposed AI working group is equally undefined. It is unclear whether it would sit within the Commerce Department, the White House Office of Science and Technology Policy, or a new interagency body. The enforcement mechanism is an open question: would companies face penalties for releasing models without approval, or would the review function as a strong recommendation without legal teeth?
Then there is the question of scope. The Biden-era order used computational thresholds and dual-use criteria to define which models were covered; its reporting requirement applied to models trained using more than 10^26 computational operations. The new proposal might follow a similar approach, focus narrowly on national-security-relevant capabilities, or adopt different triggers entirely, such as model performance on specific safety benchmarks. Each choice would determine whether the regime targets only a handful of the most powerful systems or sweeps in a broader range of commercial AI products.
None of the three companies that signed CAISI agreements have publicly commented on whether they support mandatory pre-release vetting as opposed to the voluntary cooperation they have already agreed to. That silence is itself significant. Strong industry opposition could push the administration toward a narrower framework, while quiet acceptance could embolden a more stringent approach. The gap between voluntary testing partnerships and a government mandate with real consequences is wide, and how companies respond will likely shape the final order.
A practical bottleneck looms
Even if the executive order is signed, implementation raises a question the policy discussion has not yet addressed: can the federal government actually do this at scale?
NIST’s existing agreements cover three companies. The frontier AI field includes additional major players, among them OpenAI, Anthropic, and Meta, plus a growing number of well-funded startups. Any binding pre-release review framework would need to evaluate models at the pace companies produce them, which has accelerated sharply over the past year. If the government cannot staff and resource the review process adequately, the system risks becoming a bottleneck that delays American AI development while competitors in other countries face no equivalent constraint.
For AI developers, a pre-release requirement could mean longer timelines between finishing a model and releasing it commercially, adding a new step to the deployment pipeline that does not currently exist. For the broader technology sector, it raises questions about whether the U.S. regulatory environment will remain attractive compared to jurisdictions with lighter oversight.
What the policy arc tells us
The trajectory is unmistakable. The Biden administration established a framework for AI safety reporting. The Trump administration removed it. And now the same administration is exploring something potentially more aggressive in its place. The CAISI agreements show that even during the 16-month gap between the rescission and the current discussions, federal agencies quietly continued building testing relationships with AI companies. The infrastructure for pre-release review did not vanish when the old executive order was rescinded; it shifted from a mandate to a partnership model that could now serve as the foundation for a more formal regime.
Until draft language appears or the White House issues a formal announcement, the specifics remain provisional. But the direction is clear: the federal government is moving back toward a gatekeeping role over the most capable AI systems. The question is no longer whether Washington will reassert oversight, but how demanding that oversight will be, and whether the government can build the capacity to enforce it before the next generation of frontier models is ready to ship.
*This article was researched with the help of AI, with human editors creating the final content.