Chinese AI lab DeepSeek released preview versions of its latest system, DeepSeek V4, in late May 2026, making the model’s weights freely available to developers worldwide. The company called it an open-source release, a move that lets anyone download, modify, and deploy the model without licensing fees or approval from a U.S. provider.
But according to analysts at the Council on Foreign Relations and reporting from the Associated Press, V4 does not match the performance of the strongest American AI systems, including OpenAI’s GPT-4o, Anthropic’s Claude 4, and Google’s Gemini 2.5 Pro. That gap raises a pointed question: does DeepSeek’s rapid output signal that China is closing in on U.S. labs, or does it reveal that the distance at the very top is proving harder to cross than expected?
What DeepSeek V4 actually is
V4 is the successor to DeepSeek V3 and the reasoning-focused R1, both of which drew global attention in early 2025 when they demonstrated that a Chinese startup could produce competitive AI on a fraction of the budget American labs typically spend. V3, in particular, rattled investors because it suggested U.S. companies might be overspending on training infrastructure.
With V4, DeepSeek is pushing further. The company released model weights under an open framework, meaning outside researchers can run the system on their own hardware and fine-tune it for specific tasks. DeepSeek has not, however, published full training data, detailed architecture documentation, or the kind of reproducibility package that would satisfy a strict open-source definition. The AI industry has debated this distinction for years: releasing weights is valuable, but it is not the same as releasing everything needed to rebuild a model from scratch.
Concrete technical specifications, including parameter count, mixture-of-experts configuration, and training compute, have not been independently verified in available reporting. DeepSeek’s own documentation remains limited, and no peer-reviewed paper accompanies the release.
Why analysts say it trails U.S. frontier systems
The Council on Foreign Relations published an analysis arguing that V4 represents a new phase in the bilateral AI rivalry but stops short of frontier performance. As the CFR authors wrote, “the release matters more as a geopolitical event than as a pure technical breakthrough,” because it lowers the barrier for governments and companies outside the United States to access capable AI, even if that AI is not best-in-class.
No independent benchmark results comparing V4 head-to-head against GPT-4o, Claude 4, or Gemini 2.5 Pro have been published as of early June 2026. OpenAI, Anthropic, and Google have not issued public statements responding to the V4 launch. That means the “falls short” assessment, while consistent with the broader pattern of U.S. leadership at the AI frontier, rests on institutional analysis rather than raw performance data that outside experts can independently verify.
Readers should treat current capability assessments as informed estimates, not definitive rankings. Independent evaluations from groups like LMSYS or academic benchmarking efforts could change the picture once they test V4 at scale.
The chip question that shadows the release
Earlier this year, Reuters reported, citing an official, that DeepSeek had trained an AI model using Nvidia’s most advanced chips, despite U.S. export restrictions designed to keep that hardware out of Chinese hands. The Financial Times has separately examined supply-chain routes that allow restricted semiconductors to reach Chinese labs through intermediaries or pre-ban stockpiles.
It is not confirmed whether V4 specifically was trained on banned Nvidia chips or whether the Reuters report referred to earlier DeepSeek models. But the broader implication is the same: if a Chinese lab can train a capable model on restricted hardware and then release it openly, the fruits of that workaround become available to anyone with an internet connection. Export controls can target chips before they ship. They cannot recall a model once it is posted online.
How DeepSeek obtained advanced hardware, whether through intermediaries, stockpiles accumulated before restrictions tightened, or other channels, has not been fully detailed. Nvidia has not publicly commented on the sourcing question.
What the open-weights strategy means for U.S. leverage
Washington’s AI strategy has relied on two pillars: keeping the most powerful chips out of Chinese data centers and maintaining a lead in proprietary model performance. DeepSeek’s open-weights approach pressures both.
By giving away V4, DeepSeek offers governments and startups worldwide a capable alternative to American providers. Countries wary of dependence on OpenAI or Google Cloud now have another option, even if it is not the strongest available. That diversification chips away at the commercial and diplomatic leverage U.S. firms hold as the default suppliers of advanced AI.
At the same time, the gap between open and closed systems still carries weight. If V4 lags meaningfully behind the best proprietary models, the United States retains a qualitative edge in the most demanding applications, from drug discovery to complex military planning. The strategic question is whether that edge erodes faster from China’s technical progress or from the spread of “good enough” open models that reduce the premium on absolute top-tier performance.
No official U.S. government response to the V4 release has appeared in available reporting. Whether the Commerce Department or White House views the launch as reason to tighten chip export rules or adjust AI regulation remains unclear. But the policy challenge is now plainly visible: controlling hardware is one thing, and controlling software that replicates freely across borders is something else entirely.
What independent benchmarks and policy responses will settle
DeepSeek V4 is not the model that dethroned American AI labs. As the CFR analysis concluded, V4 “signals a new phase” in the rivalry rather than a reversal of U.S. technical leadership. But it is the latest signal that China’s AI ecosystem can produce and distribute increasingly capable systems at a pace that complicates U.S. strategy. Each release narrows the window in which export controls and proprietary advantages can hold.
The next markers to watch: independent benchmark results that put hard numbers on V4’s strengths and weaknesses, any U.S. policy response targeting open-weight releases from foreign labs, and whether DeepSeek follows up with a full technical paper that clarifies what V4 actually is under the hood. Until those arrive, the story of this release is less about who has the best model today and more about how fast the floor is rising for everyone else.
*This article was researched with the help of AI, with human editors creating the final content.