Morning Overview

DeepSeek teases long-awaited AI model built to take on ChatGPT

Chinese AI startup DeepSeek is preparing to release a new flagship model called V4, positioning it as a direct competitor to OpenAI’s ChatGPT and other leading U.S. systems. The company has already begun sharing early access with select Chinese chipmakers while deliberately excluding American firms, a move that sharpens the geopolitical edge of what is already one of the most closely watched product launches in artificial intelligence this year. Investors and policymakers are paying close attention because DeepSeek’s last major release rattled global markets and intensified debate over how quickly Chinese labs are closing the gap with their U.S. rivals.

What V4 Promises and When It Might Arrive

DeepSeek’s V4 is expected to feature advanced coding capabilities, and internal tests suggest it could leapfrog industry leaders including OpenAI’s GPT series and Anthropic’s Claude. The model reportedly introduces four major technical upgrades, among them a new architecture with tiered key-value cache storage designed to cut memory use, according to the Economic Times. Reporting from the Financial Times adds that V4 is expected to support multimodal inputs across text, image and video, although DeepSeek has yet to publish a detailed model card or technical report that would allow independent researchers to validate those claims.
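DeepSeek has not published how the tiered cache actually works, but the general technique is well established: keep the key-value pairs for recent tokens in fast memory and spill older entries to a slower, larger tier. The sketch below is a minimal Python illustration of that idea, assuming a simple two-tier, least-recently-used design; the class and its behaviour are hypothetical stand-ins, not DeepSeek's implementation.

```python
from collections import OrderedDict

class TieredKVCache:
    """Illustrative two-tier key-value cache for transformer inference.

    Hypothetical sketch: KV pairs for recent tokens stay in a small
    "hot" tier (standing in for GPU memory); least-recently-used
    entries are evicted to a larger "cold" tier (standing in for host
    memory). This is NOT DeepSeek's actual, unpublished design.
    """

    def __init__(self, hot_capacity: int):
        self.hot_capacity = hot_capacity
        self.hot = OrderedDict()   # fast tier, bounded size
        self.cold = {}             # slow tier, effectively unbounded

    def put(self, position, key_vec, value_vec):
        self.hot[position] = (key_vec, value_vec)
        self.hot.move_to_end(position)
        # Evict the least-recently-used entry to the cold tier.
        if len(self.hot) > self.hot_capacity:
            old_pos, kv = self.hot.popitem(last=False)
            self.cold[old_pos] = kv

    def get(self, position):
        if position in self.hot:
            self.hot.move_to_end(position)   # refresh recency
            return self.hot[position]
        kv = self.cold.pop(position)         # promote on access
        self.put(position, *kv)
        return kv
```

In a production inference server the hot tier would live in accelerator memory and the cold tier in host RAM or on disk, with transfers batched; the memory savings come from keeping only the actively reused entries in the expensive tier.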

The exact launch date remains uncertain. Some coverage suggests the release could coincide with China’s parliamentary sessions beginning March 4, as noted by regional outlet Zamin.uz, while earlier reporting from Yahoo Finance indicated V4 might arrive around the Lunar New Year holiday period. DeepSeek has said it plans to issue a short technical note alongside the model, according to the Financial Times, but has stopped short of confirming a firm timetable. That ambiguity is consistent with the company’s previous pattern of surprise launches, which has repeatedly caught markets off guard and complicated efforts by competitors to plan their own product roadmaps.

Selective Access Draws a Line Between U.S. and Chinese Chipmakers

Perhaps the most consequential detail around V4 is not the model itself but who gets to see it first. DeepSeek is selectively sharing pre-release access with Chinese chipmakers including Huawei Technologies while withholding it from Nvidia and AMD, according to Reuters. That decision effectively gives Chinese hardware companies a head start in optimizing their accelerators for the next generation of DeepSeek workloads, a meaningful advantage in a market where inference efficiency and tight software and hardware integration can determine commercial viability. It also signals that DeepSeek is aligning more explicitly with domestic suppliers at a time when U.S. export controls are already constraining the flow of high-end chips into China.

This selective rollout carries practical consequences for the global chip supply chain. Nvidia’s data-centre GPUs have long been the default training hardware for frontier AI models, and Washington’s export rules have already limited which of those products can legally reach Chinese labs. DeepSeek’s move to cut Nvidia out of early V4 testing suggests the company is actively building its software stack around non-American silicon, raising the prospect of a more bifurcated AI ecosystem. If V4 runs especially well on Huawei hardware, it could weaken the commercial case for Nvidia chips among Chinese AI developers and reduce some of the leverage that U.S. policymakers hoped to gain through export restrictions. For companies and researchers outside China that rely on DeepSeek’s open-source releases, a key question is whether future models will be tuned primarily for hardware configurations that are difficult or impossible for them to obtain.

V3’s Track Record Sets a High Baseline

V4 does not arrive in a vacuum. DeepSeek’s preceding V3 model used a Mixture-of-Experts architecture with 671 billion total parameters and 37 billion active parameters, a design that routes queries through specialized sub-networks rather than activating the entire model for every request. That approach kept inference costs lower than those of comparably sized dense models while still posting competitive benchmark scores across coding, reasoning and general-purpose tasks. The company followed up with a V3.2 update in December that it claims outperformed OpenAI’s GPT-5 and Google’s Gemini 3.0 Pro on certain internal tests, though independent verification of those performance gains remains limited and the underlying evaluation protocols have not been fully disclosed.
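For readers unfamiliar with the architecture, the gap between 671 billion total and 37 billion active parameters comes from the router activating only a handful of experts per token. The toy sketch below illustrates that routing principle with NumPy; the sizes and weights are arbitrary stand-ins, not V3's real configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2   # toy sizes, not V3's real config

# Each expert is a small feed-forward block; here just one weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.02
           for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token vector through its top-k experts only."""
    logits = x @ router_w                  # score each expert
    top = np.argsort(logits)[-top_k:]      # pick the k highest-scoring
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over chosen experts
    # Only top-k experts run, so active parameters per token are a small
    # fraction of the total -- the principle behind the 37B-of-671B split.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)            # (64,)
```

Here only 2 of 8 experts fire per token, so roughly a quarter of the expert weights participate in any single forward pass, which is why MoE models can be enormous in total size yet comparatively cheap to serve.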

DeepSeek also released its R1 reasoning model family, including R1, R1-Zero and several distilled variants, along with benchmark comparisons against GPT-4o and OpenAI’s o1 line. That willingness to publish model weights and evaluation details has helped the company cultivate a large community of developers who fine-tune, deploy and critique its systems. The open approach has also made DeepSeek’s work unusually transparent for a frontier lab, especially when contrasted with the more closed strategies of some U.S. rivals. Yet this same openness has intersected awkwardly with a separate controversy over how DeepSeek trained its earlier models and whether they relied too heavily on outputs from proprietary systems developed elsewhere.

OpenAI’s Distillation Allegations Add Friction

OpenAI has said it is reviewing allegations that its own AI models were used as training data for DeepSeek’s systems without permission, raising uncomfortable questions about how far competitive intelligence gathering has gone in the race to build ever more capable chatbots. According to reporting in the Guardian, OpenAI is investigating whether large volumes of ChatGPT outputs were systematically harvested and then distilled into DeepSeek’s models, potentially violating OpenAI’s terms of service and undermining the company’s claims that its most advanced systems are based primarily on licensed or publicly available data. DeepSeek has not publicly detailed the full composition of its training corpus, and no regulator has yet made a formal finding on the matter, but the dispute has already sharpened calls for clearer rules on data provenance in AI training.
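Distillation itself is a standard, well-documented technique: a "student" model is trained to reproduce the outputs of a "teacher" model rather than learning from raw data alone. The toy example below shows the mechanic with linear models standing in for both teacher and student; it illustrates the general method only and says nothing about what DeepSeek actually did.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "teacher": a fixed linear model standing in for a proprietary LLM.
W_teacher = rng.standard_normal((16, 4))

def teacher(x):
    return x @ W_teacher                   # teacher's output "logits"

# "Student" starts from scratch and learns to mimic the teacher.
W_student = np.zeros((16, 4))
lr = 0.05

for step in range(500):
    x = rng.standard_normal((32, 16))      # sampled prompts (toy data)
    target = teacher(x)                    # harvested teacher outputs
    pred = x @ W_student
    grad = x.T @ (pred - target) / len(x)  # mean-squared-error gradient
    W_student -= lr * grad

# The student converges toward the teacher without ever seeing its weights.
print(np.abs(W_student - W_teacher).max())
```

The dispute, then, is not over whether distillation works; it is over whether harvesting a proprietary model's outputs at scale to train a rival breaches the terms under which those outputs were generated.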

The controversy lands at a sensitive moment for both companies. OpenAI faces mounting scrutiny over how it protects user data and commercial information flowing through its platforms, while DeepSeek is trying to persuade global developers and enterprise customers that its models are both technically strong and legally safe to adopt. If investigations were to conclude that large-scale distillation of proprietary systems had occurred, it could influence future licensing negotiations and prompt governments to tighten rules on cross-border AI development. For now, the episode underscores how opaque training pipelines and fragmented global regulation are colliding with fierce commercial rivalry, with V4’s debut likely to amplify those tensions rather than resolve them.

Market, Policy and Education Stakes Around V4

DeepSeek’s rapid ascent has already had visible effects on financial markets, with earlier model launches coinciding with sharp swings in the share prices of major chipmakers and AI-linked firms. Traders now treat Chinese AI announcements as potential catalysts, not only for domestic tech stocks but also for U.S. names whose valuations hinge on their perceived technology lead. The possibility that V4 could narrow performance gaps with U.S. models, or even surpass them on certain tasks, adds another variable for investors trying to assess how durable current profit expectations are for established players in the AI hardware and software stack.

Central banks and finance ministries are also watching these developments as they consider how AI-driven productivity gains and investment cycles might affect inflation, growth and financial stability. A more competitive Chinese AI sector could accelerate the diffusion of automation tools across manufacturing and services, reshaping trade patterns and potentially altering the balance of technological power between major economies. At the same time, universities and business schools are racing to update curricula so that future managers understand both the technical and geopolitical dimensions of systems like V4. As companies weigh which AI platforms to adopt, they must navigate not only benchmark charts and cost metrics but also export controls, data governance rules and the prospect of more fragmented global standards.

Whether V4 ultimately delivers on the most ambitious performance claims will depend on details that DeepSeek has not yet fully disclosed, from training compute budgets to safety guardrails and evaluation methodology. But even before the model is publicly available, the choices the company is making (about which chipmakers gain early access, how openly it shares technical documentation and how it responds to allegations over training data) are shaping expectations for the next phase of global AI competition. For policymakers, investors and educators alike, the launch will be a test case in how frontier AI capabilities, industrial policy and international regulation intersect, and in how quickly the centre of gravity in advanced model development might shift. As governments and enterprises prepare for that possibility, many are paying as much attention to licensing, access and data-rights questions as to raw benchmark performance, underscoring that governance now sits alongside model quality at the heart of the AI race.

*This article was researched with the help of AI, with human editors creating the final content.