Morning Overview

Alibaba deploys 10,000 homegrown chips in new China Telecom data center

When Alibaba Cloud’s T-Head semiconductor unit began designing the Zhenwu AI processor, the goal was to break a dependency that had haunted every major Chinese technology company: reliance on Nvidia for the chips that power artificial intelligence. In April 2026, that ambition became physical infrastructure. Alibaba and China Telecom announced the opening of a data center in southern China running entirely on 10,000 Zhenwu processors, the first large-scale commercial deployment of the chip line and a milestone that moves Alibaba from internal silicon experiments to shared production infrastructure alongside a state-owned telecommunications giant.

“This is the first time a domestic AI chip has been deployed at this scale in a commercial data center co-operated with a carrier,” said a person familiar with Alibaba’s cloud division, speaking on condition of anonymity because the commercial terms have not been made public. No named Alibaba or China Telecom executive has given an on-the-record statement about the facility’s technical specifications or business model, a gap that limits independent assessment of the project’s significance.

The deployment lands at a moment when China’s largest technology firms are racing to build AI infrastructure free of American hardware. U.S. export controls, first imposed by the Bureau of Industry and Security in October 2022 and tightened multiple times since, have restricted Chinese access to Nvidia’s most powerful data center GPUs. Alibaba’s answer is to field its own silicon at production scale.

Inside the deployment

The Zhenwu chips were designed by T-Head (Pingtou Ge), the same Alibaba division that developed the Yitian 710 server CPU deployed across Alibaba Cloud starting in 2022. Where the Yitian 710 targeted general cloud computing, the Zhenwu line is purpose-built for the two most compute-hungry tasks in modern AI: training large models and running inference on them once trained.

Alibaba has described the facility as containing no foreign hardware in its core computing stack, though that characterization has not been independently verified. What is clear is the scale. At 10,000 processors in a single site, the deployment dwarfs previous publicly known rollouts of Chinese-designed AI accelerators. Huawei has shipped its Ascend 910B chips to several domestic customers, but no single confirmed cluster of comparable size has been announced by Huawei or any other Chinese chipmaker.

The choice of partner matters as much as the chip count. China Telecom is one of the country’s three dominant state-owned carriers, operating one of the largest data center networks on the mainland. By placing Zhenwu processors inside a China Telecom facility rather than keeping them within Alibaba Cloud’s own infrastructure, Alibaba is signaling that it views the chips as ready for third-party workloads, not just internal ones. That positions the Zhenwu line as a potential merchant product, available to government agencies, research institutes, and enterprises that need domestically sourced AI compute.

The performance question

No independent benchmark data for the Zhenwu chip has been published, and that absence is the single largest obstacle to evaluating the deployment. For context, Nvidia’s H100, the GPU most commonly restricted under U.S. export rules, delivers up to 3,958 teraflops of peak FP8 performance (with sparsity) and roughly 3.35 terabytes per second of memory bandwidth from its HBM3 subsystem in the SXM configuration, according to Nvidia’s own published specifications. The newer H200 pushes memory bandwidth higher still. Huawei’s Ascend 910B, the most prominent Chinese alternative, has been estimated by industry analysts to deliver training throughput in the range of 60 to 80 percent of the H100 on certain workloads, though Huawei has not published full independent benchmarks either.
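To make the analyst estimates concrete, the implied throughput range can be computed directly from the figures cited above. This is back-of-the-envelope arithmetic only: the H100 number is Nvidia's published peak, and the 60 to 80 percent band is an unverified analyst estimate, not measured data.

```python
# Illustrative arithmetic using the figures cited in this article.
# These are vendor-published peaks and analyst estimates, not benchmarks.

H100_FP8_TFLOPS = 3958  # Nvidia H100 SXM peak FP8 throughput, with sparsity

# Analysts estimate the Ascend 910B at 60-80% of H100 training
# throughput on certain workloads.
low_frac, high_frac = 0.60, 0.80
low_tflops = H100_FP8_TFLOPS * low_frac
high_tflops = H100_FP8_TFLOPS * high_frac

print(f"Implied Ascend 910B range: {low_tflops:.0f}-{high_tflops:.0f} TFLOPS")
# -> Implied Ascend 910B range: 2375-3166 TFLOPS
```

Any comparable figure for the Zhenwu chip would require the peak specifications Alibaba has so far declined to publish.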

Where the Zhenwu falls on that spectrum is unknown. Alibaba has not disclosed the chip’s process node, transistor count, memory type, or peak throughput. Without third-party testing, there is no reliable way to compare its training speed, power efficiency, or inference latency against either Nvidia’s or Huawei’s offerings. “We simply do not have the data to say whether Zhenwu is competitive at the frontier or optimized for a narrower set of mid-range workloads,” noted one semiconductor analyst who tracks Chinese chip development and who asked not to be named because of client confidentiality agreements. Readers should treat Alibaba’s own performance characterizations as marketing-grade claims until benchmarks appear.

What remains unknown

The manufacturing supply chain is another open question. Alibaba designs the Zhenwu in-house, but fabrication almost certainly depends on an external foundry. Whether that foundry is SMIC or another domestic manufacturer, or whether Alibaba has secured capacity at TSMC or Samsung, has not been confirmed. This distinction carries real strategic weight: U.S. export controls target not only finished chips but also the advanced lithography equipment used to produce them. If the Zhenwu requires process nodes that only non-Chinese foundries can deliver, the program’s long-term scalability could face the same bottlenecks that have constrained other domestic semiconductor efforts.

The commercial terms of the China Telecom partnership are also opaque. No public statements from China Telecom executives have surfaced in available reporting as of May 2026. It is unclear whether the carrier co-funded construction, whether it will operate the data center independently, or whether Alibaba retains operational control. Without those details, it is difficult to judge whether this is a genuine market transaction or a government-coordinated demonstration project designed to showcase domestic chip capability.

The competitive landscape

Alibaba is not the only Chinese tech company building its own AI chips, but it appears to be the first to reach this scale of deployment. Huawei’s Ascend series is the most prominent domestic alternative to Nvidia and has been adopted by several Chinese cloud providers and research labs. Baidu has developed its Kunlun chip line for use within its own AI platform. Yet neither company has publicly confirmed a single-site cluster of 10,000 homegrown AI processors shared with an outside partner.

The deployment also has implications for Alibaba’s own AI products. The company’s Qwen family of large language models, which has gained traction among Chinese developers, could be a natural workload for Zhenwu-powered clusters. If Alibaba can train and serve Qwen models efficiently on its own hardware, it would reduce the company’s exposure to supply disruptions and potentially lower the cost of running AI services for its cloud customers.

Whether this triggers a wave of similar announcements from rivals like Tencent or ByteDance remains to be seen. Both companies have invested in chip research, but neither has disclosed a deployment at comparable scale. Alibaba’s move may accelerate those timelines, particularly if Beijing offers policy incentives for using domestically designed accelerators.

Why it matters beyond China

For global semiconductor markets, the deployment is a data point in a larger question: how quickly can China build a self-sustaining AI chip ecosystem under sustained U.S. pressure? Washington’s export controls were designed to slow China’s progress in frontier AI by restricting access to the most advanced processors and the equipment needed to make them. A facility running 10,000 Chinese-designed chips on production workloads suggests that strategy is not stopping development, even if it may be slowing it.

The performance gap remains the central unknown. A data center fully powered by domestic chips is a potent symbol of technological self-reliance, but symbolism and capability are not the same thing. Until independent benchmarks emerge, analysts cannot say whether this installation meaningfully narrows the distance to Nvidia’s best hardware or whether it primarily handles workloads that do not require cutting-edge performance. The facility could be both a real production asset for certain customers and a showcase intended to reassure regulators and investors.

What to watch next

Several milestones in the coming weeks and months will determine how consequential this data center becomes. Independent benchmarks, if and when they appear, will clarify whether Zhenwu-based clusters can support frontier-scale model training or are better suited to mid-range workloads and inference tasks. Any disclosures about the chip’s manufacturing node and foundry partners will reveal how insulated the supply chain truly is from future sanctions.

Market adoption will provide another test. If major Chinese internet platforms or state-linked research institutes publicly commit to running critical workloads on Zhenwu clusters, that would signal confidence in the technology beyond Alibaba’s own ecosystem. A lack of visible flagship customers, on the other hand, could suggest the deployment remains more of a strategic hedge than a preferred production platform.

Policy signals from both sides of the Pacific will also shape demand. Should Chinese regulators begin incentivizing the use of domestically designed accelerators, or should U.S. authorities further tighten export rules, facilities like this one could see rapid uptake. In that scenario, Alibaba’s early bet on its homegrown Zhenwu hardware may prove to be less an isolated experiment and more the template for China’s next phase of AI infrastructure.


*This article was researched with the help of AI, with human editors creating the final content.*