US and China are exploring official AI talks ahead of the Trump-Xi summit on May 14, including risks from autonomous weapons and weaponized open-source AI models
When President Trump lands in Beijing on May 14 for his first summit with Chinese President Xi Jinping since taking office, artificial intelligence will be near the top of the agenda. During a March 25, 2026, White House briefing, Press Secretary Karoline Leavitt stated that the two-day visit will include preparatory discussions on AI risks that both governments consider urgent. Leavitt referenced the prospect of autonomous weapons systems operating beyond human control and the growing threat of open-source AI models being weaponized for cyberattacks, while pointing reporters to administration policy resources available through the trumpcard portal and related sites.
The Associated Press independently confirmed the May 14 to 15 dates and reported that the trip had been rescheduled after a delay linked to the U.S. military engagement in Iran. A reciprocal White House visit for Xi was discussed, the AP added, though no date has been set.
A policy blueprint already exists in NSTM-4
The administration is not starting from scratch. The Office of Science and Technology Policy has published National Science and Technology Memorandum-4, listed on the agency's official information resources page. According to the memorandum's publicly available text, NSTM-4 addresses U.S. AI policy and international engagement on emerging technology risks, calling for coordination with other major AI-producing nations on standards that could limit dangerous applications.
The memorandum’s framework places what it describes as catastrophic misuse scenarios — including loss of human control over weapons systems and large-scale disruption of critical infrastructure — at the top of its risk hierarchy. It identifies export controls, testing and evaluation regimes, and incident-reporting mechanisms as tools that could move from voluntary norms to binding agreements. The memo does not name China directly, but its repeated references to engagement with “leading AI states” leave little ambiguity about the intended counterpart.
Important caveat: The specific provisions described above are drawn from the memorandum as listed on the OSTP resources page. No direct excerpts or page references from NSTM-4 have been published in the reporting reviewed for this article. Readers who want to verify the characterizations should consult the full memorandum through the OSTP link above.
That document gives U.S. negotiators a reference point they have not always had. During the November 2023 Biden-Xi summit in San Francisco, the two leaders agreed in principle to discuss AI risks in military contexts, but no formal follow-up mechanism was established. The intervening two and a half years produced no joint working group, no shared definitions of autonomous weapons thresholds, and no mutual vulnerability-disclosure process for open-source AI models. NSTM-4 is the first executive-branch attempt to fill that gap with a structured policy framework ahead of direct talks.
What Beijing has and has not signaled
China has its own AI governance track record, though it has been largely silent about the May summit specifically. Beijing published its Global AI Governance Initiative in October 2023, calling for international cooperation on AI safety and opposing “drawing ideological lines” in technology regulation. Domestically, China’s Interim Measures for the Management of Generative AI Services, which took effect in August 2023, imposed content-moderation and registration requirements on companies deploying large language models.
But Chinese officials have not made any on-the-record statements about AI cooperation tied to the Trump-Xi meeting, at least not in English-language institutional sources available at the time of writing. That silence creates a lopsided picture. Washington is publicly signaling eagerness for AI safety talks; Beijing has not matched that signal. Past U.S.-China technology dialogues have stalled when one side treated the conversation as a venue for tightening export controls while the other sought recognition of its own regulatory model.
The asymmetry matters for expectations. If Beijing enters the talks focused on civilian AI standards, research collaboration, or transparency principles, there may be limited room for the hard security commitments on autonomous weapons that U.S. defense planners hope to secure. Conversely, if Washington pushes too aggressively on military AI restrictions, Beijing could frame the effort as an attempt to constrain China's defense modernization rather than a genuine safety dialogue.
The compressed timeline raises the stakes
The Iran-related delay shrank the preparation window from months to weeks. Diplomatic teams on both sides now face a tight schedule to define which AI topics will appear on the formal agenda and which will be deferred to lower-level channels. For defense contractors embedding machine learning into surveillance and targeting systems, and for technology companies building on open-source AI frameworks, the difference between a summit that produces concrete language and one that yields only vague affirmations is significant.
Even a modest joint statement would carry weight. An acknowledgment of shared risks, a commitment to regular technical dialogues, or an endorsement of basic norms around human oversight of lethal autonomous systems would represent the first formal U.S.-China agreement on military AI. It would also set a precedent that other nations, including European allies who have pushed for AI governance through the Bletchley Declaration and the EU AI Act, could build on.
A summit that produces nothing specific, on the other hand, would reinforce a growing concern among arms-control experts: that AI is becoming another domain of unconstrained great-power competition, much as nuclear weapons were before the first arms-limitation treaties of the 1960s and 1970s.
Where the verified evidence stands before May 14
Readers should know what is confirmed and what is not. The summit dates, the existence of NSTM-4, and the White House briefing are all verified through primary sources. The specific mention of autonomous weapons and open-source cyber threats as agenda items draws on interpretations of the briefing and the broader NSTM-4 policy language, not on a published negotiating text or joint statement from both governments. No quantified risk assessment from OSTP or its Chinese equivalent has been released detailing specific open-source AI vulnerabilities or autonomous weapons incident probabilities.
No direct quotes from Leavitt’s briefing, from Chinese officials, or from independent analysts have been included in this article because the available sourcing consists of summary-level reporting and policy documents rather than verbatim transcripts or interview records. What is clear is that the administration has placed AI safety on the diplomatic calendar with the world’s other leading AI power and built a policy scaffold in NSTM-4 to support real negotiations. The gap between that scaffold and an enforceable agreement is where previous U.S.-China technology talks have broken down. May 14 will test whether both sides can cross it.
*This article was researched with the help of AI, with human editors creating the final content.*