The Chinese AI lab that shook global markets in January 2025 is back with a new flagship. On April 24, 2026, DeepSeek released preview versions of its V4 model in two configurations, “pro” and “flash,” claiming meaningful gains in reasoning, knowledge recall, and the ability to handle multi-step tasks autonomously. The company also confirmed a strategic hardware shift: V4 leans on chips built by Huawei rather than Nvidia, a move that directly challenges U.S. export controls designed to slow China’s AI progress.
But early outside assessments suggest the hype may be outrunning the results. Bloomberg reported that V4 does not appear to close the technical gap with top American systems, raising questions about whether DeepSeek’s latest release is a genuine leap or a well-timed marketing push.
What DeepSeek actually released
V4 arrives in two tiers. The “pro” variant targets demanding workloads like complex coding, multi-document research, and structured data analysis. The “flash” variant prioritizes speed and cost, aimed at high-volume applications such as customer support bots, lightweight agents, and rapid content generation.
DeepSeek says both versions improve on its previous V3 model across three dimensions: factual knowledge, logical reasoning, and “agentic” capabilities, meaning the system’s ability to plan and execute chains of tasks (writing code, querying databases, synthesizing findings) without constant human prompting. These claims come directly from the company and have not been confirmed by independent benchmarks as of late April 2026.
The hardware behind V4 may matter as much as the software. DeepSeek confirmed it has shifted toward Huawei-manufactured processors, reducing its reliance on Nvidia GPUs. That is a significant move. Washington’s export controls have restricted Chinese companies’ access to Nvidia’s most advanced chips since late 2022, and DeepSeek’s pivot to domestically produced silicon is a direct attempt to build AI infrastructure beyond the reach of those restrictions.
For Chinese AI development broadly, the stakes are high. If Huawei’s chips can support competitive large-scale training, it weakens one of Washington’s primary leverage points over China’s tech sector. If they cannot, DeepSeek and its peers face a persistent hardware ceiling that no amount of software optimization can fully overcome.
The gap with U.S. rivals
The central question around V4 is whether it narrows the distance between DeepSeek and leading American models such as OpenAI’s GPT-5 and Google DeepMind’s Gemini 2.5. Bloomberg’s early reporting suggests it does not. According to that assessment, V4’s improvements look incremental rather than transformative, with U.S. models maintaining their lead on advanced reasoning tasks and standardized benchmarks.
That pattern is familiar. When DeepSeek’s R1 reasoning model launched in January 2025, it triggered a sharp selloff in U.S. tech stocks and prompted urgent questions about whether China had leapfrogged American AI labs. As independent testing caught up with the initial excitement, the picture grew more nuanced: R1 was impressive for its efficiency and cost, but it did not surpass the best U.S. systems on most measures. V4 appears to be following a similar arc, with bold claims from the company meeting skepticism from outside observers.
Neither OpenAI nor Google DeepMind has commented publicly on V4 as of this writing, and no head-to-head benchmark comparisons from independent researchers have been published. The competitive picture will only sharpen once third-party testers put V4 through real-world stress tests on long-context reasoning, code execution, factual accuracy, and safety guardrails.
The Huawei chip question
DeepSeek’s hardware shift raises a separate set of unknowns. The company has not disclosed which specific Huawei processors it is using, what proportion of V4’s training was conducted on those chips, or how the switch affects training efficiency and inference speed compared to Nvidia hardware.
These details matter. Nvidia’s CUDA software ecosystem has become the default platform for large-scale machine learning, backed by mature libraries and a deep pool of engineering talent. Moving away from CUDA introduces compatibility challenges, potential performance trade-offs, and uncertainty about whether Huawei’s chips can handle the massive, iterative training runs that frontier AI models require.
For organizations already invested in Nvidia-optimized code, adopting a model built on a different hardware stack could require substantial re-engineering. That friction could erode some of the cost savings DeepSeek advertises at the model level, a factor enterprise buyers will need to weigh carefully.
Pricing and practical considerations
DeepSeek has marketed itself as a budget-friendly option relative to U.S. AI providers, a reputation built largely on the pricing of its earlier R1 and V3 models, which were widely reported to undercut American competitors. V4 appears to continue that strategy, but the full pricing picture remains incomplete: exact costs for API access, enterprise licensing, and usage tiers have not been disclosed, and key variables such as rate limits, priority access, data residency guarantees, and volume discounts are still unclear.
For developers and businesses evaluating V4, the sticker price per token is only part of the calculation. Total cost of ownership includes integration effort, reliability, uptime commitments, and technical support. Without a complete pricing schedule and transparent service-level agreements, it is difficult to determine whether V4’s apparent affordability holds up over the life of a real project.
What independent benchmarks and Huawei’s chips will decide
Two things will determine whether V4 is remembered as a turning point or a footnote. The first is independent benchmarking. Results from standardized evaluation suites, such as MMLU for knowledge and reasoning, HumanEval for coding, and domain-specific safety tests, will reveal whether DeepSeek’s claims hold up under scrutiny. Enterprise buyers should wait for those results before making adoption decisions that are costly to reverse.
The second is the Huawei hardware trajectory. V4 is an early test case for whether Chinese-made chips can support competitive AI at scale. If DeepSeek demonstrates strong performance on Huawei silicon, it validates a path that other Chinese AI companies will likely follow, accelerating the development of a parallel compute ecosystem outside U.S. control. If the chips prove to be a bottleneck, it reinforces the strategic value of Washington’s export restrictions.
What V4 makes unmistakably clear is that decisions about which chips power which models are no longer just engineering choices. They are moves in a broader contest over who controls the infrastructure of advanced AI, and every new release from DeepSeek sharpens the terms of that competition.
More from Morning Overview
*This article was researched with the help of AI, with human editors creating the final content.*