Morning Overview

AI experts weigh how the technology could help or threaten humanity

The rescission of a major U.S. executive order on artificial intelligence, combined with a growing body of international safety research, has sharpened a debate among AI experts: whether the technology’s benefits in medicine, climate science, and productivity will outweigh its potential to disrupt labor markets, amplify misinformation, and introduce new security threats. That tension now plays out across a fragmented policy environment where federal mandates have been rolled back while voluntary frameworks and global declarations attempt to fill the gap. The stakes are not abstract. They affect how AI systems are built, tested, and deployed in ways that touch ordinary people, from automated hiring tools to medical diagnostics.

What is verified so far

The strongest confirmed anchor in the U.S. approach to AI risk is the Artificial Intelligence Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology in January 2023 and available directly from NIST as a detailed risk management reference. The framework defines four core functions (govern, map, measure, and manage) and describes the characteristics of trustworthy AI, giving organizations a structured method for identifying and reducing harm before systems reach the public. Because it is voluntary rather than enforceable, AI RMF 1.0 operates as a reference standard, not a regulation. That distinction matters because the binding federal directive that once accompanied it no longer exists.

Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and signed in October 2023, laid out the federal government’s most detailed approach to AI opportunity and risk. According to the American Presidency Project’s archival full-text record, the order directed agencies to produce safety and security guidance, established evaluation and reporting requirements, and assigned responsibilities to specific agencies. It required red-teaming of advanced models and set accountability timelines across multiple departments, aiming to embed AI considerations in everything from critical infrastructure protection to civil rights enforcement.

That architecture, however, no longer has legal force. The order was rescinded on January 20, 2025, as confirmed by the official NIST information page devoted to EO 14110, which now lists the rescission date prominently. The same page notes that while the order is no longer in effect, some of the technical work it initiated—such as standards development and evaluation methodologies—continues under NIST’s broader AI portfolio.

Internationally, the AI Safety Summit held at Bletchley Park in November 2023 produced the Bletchley Declaration, an official text published under the UK’s Open Government Licence. The licence terms are spelled out on the National Archives’ open data page, which sets the conditions for reusing government documents. The declaration called on governments and AI developers to cooperate on identifying and mitigating risks from frontier AI systems, emphasizing transparency, evaluation, and information sharing across borders.

Separately, research tied to the forthcoming International AI Safety Report has been catalogued as an arXiv preprint under the DOI 10.48550/arXiv.2509.14233. That record establishes the existence of a coordinated research effort on AI safety scenarios and methodologies, even though the complete empirical findings have not yet been publicly released.

What these documents share is a common vocabulary of risk. AI RMF 1.0 treats risk as something that can be mapped and measured through structured organizational processes. EO 14110 treated it as something that required federal enforcement and interagency coordination. The Bletchley Declaration treats it as a global challenge demanding diplomatic and technical cooperation among states and leading AI firms. All three recognize that AI systems can produce both significant benefits and serious dangers, but they differ sharply on who should be responsible for managing those dangers and how binding that responsibility should be.

What remains uncertain

The rescission of EO 14110 created a gap that no publicly available policy document has yet filled. No official statement from the current U.S. administration explains what, if anything, will replace the order’s safety evaluation and reporting requirements. NIST’s Computer Security Resource Center continues to host extensive cybersecurity and AI guidance, including technical reports and draft standards, but the enforcement mechanisms that EO 14110 established are no longer in effect. Whether agencies will continue voluntary compliance with the order’s directives, or whether new binding guidance will emerge, is not confirmed by any institutional source reviewed for this analysis.

The Bletchley Declaration’s real-world impact also remains unclear. No institutional records examined for this article document measurable implementation progress by signatory nations since the 2023 summit. Governments have not, for example, published standardized reporting on how they are assessing frontier models or coordinating incident response in line with the declaration’s language. Media coverage has described the declaration as a diplomatic milestone, but the absence of follow-up metrics from participating governments makes it difficult to assess whether the commitments translated into enforceable domestic policy or remained aspirational statements of intent.

The arXiv preprint associated with the International AI Safety Report offers a citation trail but not a complete dataset. It signals the topics researchers consider salient—such as model evaluation, systemic risk, and governance options—but the full report’s empirical findings on specific AI threat scenarios, including how effectively current mitigation strategies reduce harm, are not yet available for independent review. Readers and policymakers relying on that research should treat preliminary summaries with caution until the complete findings are published and, ideally, peer-reviewed or otherwise independently validated.

A broader uncertainty concerns the practical effect of voluntary frameworks when binding rules are withdrawn. AI RMF 1.0 provides a clear methodology for organizations that choose to adopt it, but compliance is optional. Companies building and deploying AI systems face no federal penalty for ignoring its recommendations. In the absence of statutory requirements, it is unclear whether market pressure, reputational risk, procurement conditions, or international standards will substitute for domestic regulation. The available evidence does not yet resolve whether voluntary adherence will be widespread enough to meaningfully reduce systemic risk.

There is also limited clarity on how different layers of governance will interact. State and local authorities in the United States, as well as regulators in other countries, may develop their own AI rules that reference or diverge from federal frameworks like AI RMF 1.0. Without a replacement for EO 14110, there is a risk of inconsistent expectations across jurisdictions, potentially creating compliance complexity for developers and uneven protection for the public. No comprehensive mapping of these emerging rules is yet reflected in the primary sources cited here.

How to read the evidence

Not all sources carry equal weight in this debate, and distinguishing between primary evidence and contextual material is essential for anyone trying to form an informed view. AI RMF 1.0 is a primary source: it is the actual framework document, not a summary or interpretation of it. The full text of EO 14110, archived by the American Presidency Project, is likewise a primary source that allows readers to verify specific directives and timelines rather than relying on secondhand accounts. The NIST hub page confirming the order’s rescission date is a primary institutional record that establishes the current legal status of that directive.

The Bletchley Declaration, accessible through the UK government’s publication channels, is a primary diplomatic document, while the Open Government Licence page is a meta-level legal instrument that governs reuse. Readers seeking the declaration’s specific commitments should therefore follow links from official summit materials or government portals to reach the text itself, and treat the licence page as a background reference rather than the substantive policy statement.

The arXiv preprint linked to the International AI Safety Report sits in a different category. Preprints have not undergone formal peer review, which means their conclusions carry less evidentiary weight than finalized studies or consensus assessments. They are valuable for understanding what questions researchers are asking and what methods they are testing, but they should not be treated as definitive answers. When preprints influence policy discussions, it is important to check whether later versions, replications, or critical commentaries have altered the picture.

For readers, policymakers, and practitioners, a careful hierarchy of evidence can help navigate the current AI governance landscape. Primary legal and technical documents establish what has actually been decided or proposed. Institutional notices, such as NIST’s confirmation of EO 14110’s rescission, clarify which instruments are still in force. Declarations like Bletchley indicate political will but may lack enforcement detail. Preprints and working papers signal emerging research but require cautious interpretation. In an environment where a major U.S. executive order has been withdrawn and international commitments remain largely untested, understanding these distinctions is essential for assessing how seriously governments and organizations are confronting the risks—and opportunities—of advanced AI systems.

*This article was researched with the help of AI, with human editors creating the final content.