On May 14, 2024, American and Chinese officials sat across from each other in Geneva for what both governments confirmed was their first formal dialogue on artificial intelligence safety. The session covered AI-related technological risks and global governance, and both sides described it as the start of an ongoing conversation, not a one-off meeting. The agenda touched on some of the most unsettling possibilities in modern technology: AI systems that behave in ways their creators did not anticipate, weapons that select targets without a human pulling the trigger, and the growing risk that terrorist groups or criminal networks could weaponize AI for cyberattacks or even biological strikes.
Now, more than two years later, the question is whether that initial handshake has led to anything durable, or whether it was a diplomatic gesture that quietly stalled.
What both governments have confirmed
The Geneva meeting is well-documented on both sides. China’s State Council published an account describing the first intergovernmental AI dialogue between the two countries, noting that the agenda addressed technological risks and global governance. On the U.S. side, an archived White House statement identified Tarun Chhabra of the National Security Council and Seth Center of the State Department as the American delegation leads. Both records confirm the date, the location, and the fact that the two largest AI powers treated the session as significant enough to announce publicly.
The specific risk categories that dominated the discussion trace to policy documents each government had already developed. The National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF 1.0) gives technical precision to the concept of unexpected model behavior, outlining how AI systems can produce outputs that diverge sharply from their designers’ intentions and how organizations should test for such failures. NIST’s framework is the standard reference across U.S. federal agencies when officials talk about AI safety and reliability.
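For readers who want a concrete sense of what "testing for such failures" can look like in practice, the sketch below shows a toy behavioral check in Python: a handful of prompts run against a system under test, with outputs flagged when they match patterns the designers ruled out. Everything here, from the TestCase structure to the fabricated-citation pattern, is an illustrative assumption for this article, not code drawn from the NIST framework itself.

```python
# Toy behavioral divergence check, loosely in the spirit of the AI RMF's
# guidance to test for outputs that diverge from designer intent.
# All names and patterns here are illustrative assumptions, not NIST artifacts.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    prompt: str               # input sent to the system under test
    forbidden: list[str]      # regex patterns the output must never match

def run_checks(generate: Callable[[str], str], cases: list[TestCase]) -> list[str]:
    """Run each prompt and report any output that violates its constraints."""
    failures = []
    for case in cases:
        output = generate(case.prompt)
        for pattern in case.forbidden:
            if re.search(pattern, output, re.IGNORECASE):
                failures.append(f"{case.prompt!r} produced output matching {pattern!r}")
    return failures

if __name__ == "__main__":
    # Stub standing in for a real model; an actual harness would call the
    # deployed system and run many adversarial prompts, not one.
    stub = lambda prompt: "See Smith v. Jones, 123 F.3d 456 (1997)."
    cases = [
        # Flag outputs that invent federal-reporter citations on demand.
        TestCase("Cite a case supporting my position.", [r"\d+\s+F\.(2d|3d)\s+\d+"]),
    ]
    failures = run_checks(stub, cases)
    print(failures or "no divergences flagged")
```

Real evaluation suites are far larger and probe subtler failures, but the basic loop, defined expectations, automated probing, and flagged deviations, is the shape the framework's testing guidance points toward.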
On the military front, the U.S. State Department’s Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy lays out norms for keeping humans in control of lethal systems. It calls for rigorous testing, mechanisms to detect unintended consequences, and the ability to disengage autonomous weapons when they deviate from intended behavior. That declaration, issued during the Biden administration, remains one of the most detailed U.S. attempts to set guardrails around autonomous weapons in the absence of a binding international treaty.
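The declaration's detect-and-disengage idea can be pictured as a watchdog that sits between an autonomous system and any irreversible action, cutting the system off entirely once it strays outside a human-defined envelope. The sketch below is a deliberately simplified illustration of that principle; the thresholds, class names, and logic are assumptions made for this article, since the declaration sets norms rather than implementations.

```python
# Simplified "detect and disengage" watchdog. Thresholds and structure are
# illustrative assumptions; the Political Declaration sets norms, not code.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_id: str
    confidence: float      # system's confidence that the target is valid
    human_approved: bool   # whether a human operator signed off

class DisengageWatchdog:
    """Halts the system permanently once behavior leaves the approved envelope."""

    def __init__(self, min_confidence: float = 0.99):
        self.min_confidence = min_confidence
        self.engaged = True

    def authorize(self, action: ProposedAction) -> bool:
        # Any deviation from intended behavior disengages the whole system,
        # rather than letting it continue in a degraded state.
        if not self.engaged:
            return False
        if not action.human_approved or action.confidence < self.min_confidence:
            self.engaged = False
            return False
        return True

if __name__ == "__main__":
    watchdog = DisengageWatchdog()
    ok = watchdog.authorize(ProposedAction("T-041", confidence=0.97, human_approved=True))
    print("authorized" if ok else "disengaged")  # prints "disengaged": confidence below envelope
```

The design choice worth noticing is that the watchdog fails closed: one out-of-envelope action shuts the system down rather than merely skipping that action, which mirrors the declaration's emphasis on the ability to disengage systems that demonstrate unintended behavior.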
The threat from non-state actors has been mapped most thoroughly by the Brookings Institution, which published a policy paper detailing how groups outside government control could exploit AI for cyberattacks, biological or chemical weapons development, deepfake-driven disinformation, and manipulation of military attribution systems. The same paper proposed that Washington and Beijing establish AI “hotlines” and “regular exchanges” during non-crisis periods to build trust and reduce the chance of miscalculation. That recommendation aligns with the spirit of the Geneva session, though neither government has confirmed adopting it.
What remains unclear as of mid-2026
No publicly available government document from either capital specifies a fixed schedule for follow-up meetings after the May 2024 session. The phrase “regular meetings” in public commentary draws partly from the Brookings proposal rather than from a confirmed bilateral commitment. Whether Washington and Beijing have held additional rounds of talks since Geneva, and on what terms, has not been confirmed in any primary source reviewed for this report.
The political landscape has shifted considerably since that Geneva afternoon. The November 2024 U.S. presidential election brought a change in administration, and the Trump White House has taken a markedly different posture on both AI regulation and China policy. Ongoing tensions over semiconductor export controls and trade disputes have complicated the broader diplomatic relationship, raising questions about whether a channel built under one administration can survive the transition to another with different priorities.
Details about who sat on the Chinese side of the table remain thin in English-language sources. The U.S. delegation's leadership is documented, but Beijing has not publicly identified its lead negotiators or clarified which ministries or agencies held decision-making authority during the talks. That gap matters: without knowing who was in the room, it is hard to assess how seriously China's bureaucratic apparatus treated the dialogue, or whether the officials present had the standing to make commitments.
Specific threat scenarios that analysts consider most urgent, such as AI-assisted bioweapon design or deepfake attacks that trick one nation into misattributing a military strike, appear in think-tank analyses but have not been confirmed as topics raised during the Geneva session itself. No participant has gone on record describing what was discussed in the room, which means there is a meaningful gap between what outside experts believe should be on the agenda and what diplomats actually addressed.
Why these risks are not abstract
For readers who do not follow AI policy closely, the three risk categories at the center of this dialogue have real-world stakes that are growing fast.
Unexpected model behavior is not a theoretical concern. Large language models and other AI systems have already demonstrated the ability to produce outputs their developers did not predict, from fabricating legal citations to generating instructions for dangerous activities when guardrails are bypassed. As these systems are integrated into critical infrastructure, financial markets, and military decision-making tools, the consequences of a model behaving unpredictably become far more severe than an embarrassing chatbot response.
Autonomous weapons are no longer confined to science fiction or Pentagon research labs. Multiple countries are developing or deploying systems with increasing levels of autonomy in target selection and engagement. The core worry is not that a robot army will go rogue overnight, but that the speed of autonomous systems could compress decision-making timelines in a crisis to the point where human oversight becomes functionally impossible.
Non-state actor exploitation of AI is perhaps the fastest-moving threat. Open-source AI models have lowered the barrier to entry for sophisticated cyberattacks, and researchers have demonstrated that AI tools can accelerate the process of identifying biological agents with pandemic potential. Criminal organizations and terrorist groups do not need to build frontier AI systems from scratch; they only need to misuse what is already publicly available.
Where the diplomatic channel stands
Read carefully, the available evidence makes a few things clear and leaves several others open. The Geneva meeting proved that Washington and Beijing are willing to discuss AI risks at the government-to-government level. The U.S. has articulated detailed positions on AI safety, reliability, and responsible military use through frameworks like NIST's AI RMF and the State Department's Political Declaration, giving American negotiators a structured vocabulary for defining unacceptable risk. China has pursued its own AI governance initiatives, including interim safety measures and proposals at international forums, though analysts on both sides are still mapping how far those overlap with U.S. frameworks.
What is missing is evidence that the initial contact has matured into something institutional. There are no confirmed follow-up sessions, no joint communiqués outlining shared principles, and no public indication that either the hotline concept or the regular-exchange model proposed by Brookings has been adopted. The bilateral AI channel, as of mid-2026, remains closer to a proof of intent than a functioning regime.
That does not make it insignificant. In arms control and technology diplomacy, the hardest step is often the first conversation. The Geneva session established that both governments recognize AI risks as a legitimate subject for bilateral engagement, a baseline that did not exist before May 2024. Whether that baseline holds through a period of strained U.S.-China relations, shifting domestic politics, and rapidly advancing AI capabilities will determine whether the dialogue becomes a lasting institution or a footnote in the early history of AI governance.
*This article was researched with the help of AI, with human editors creating the final content.