
Chinese artificial intelligence ambitions are colliding with United States export controls in a way that is reshaping where, and how, the most advanced models are trained. Instead of slowing down, Alibaba and ByteDance are redirecting their heaviest AI workloads to offshore data centers that can still tap Nvidia’s most powerful chips. The result is a new, legally gray geography of AI development that keeps cutting-edge hardware in play while testing the limits of Washington’s tech restrictions.

By shifting training for systems like Alibaba’s Qwen and ByteDance’s Doubao away from mainland China, these companies are effectively routing around the chokepoint that U.S. regulators tried to build. I see this as less a story about evasion and more about adaptation, with Chinese groups using overseas infrastructure, complex leasing deals, and regional partnerships to keep pace in the global AI race.

How U.S. export controls created an offshore AI workaround

Washington’s export controls on advanced semiconductors were designed to keep the most capable Nvidia accelerators out of Chinese data centers, but they did not fully anticipate how quickly training workloads could be moved abroad. Chinese companies responded by looking for jurisdictions where Nvidia’s top-tier GPUs could still be purchased and deployed, then structuring their AI development so that the most compute-intensive phases happen outside the mainland. In practice, that means the letter of the export rules is observed, while the strategic effect is blunted.

Reports describe how U.S. chip restrictions have pushed Chinese technology giants to train their most advanced artificial intelligence models using Nvidia accelerators located in offshore clusters rather than domestic facilities. The same pattern appears in financial analysis noting that Chinese tech companies are training their AI systems abroad to access Nvidia hardware that cannot be sold directly into China, including high-end products such as the H200. Together, these accounts show how export controls have shifted the geography of AI training without cutting Chinese firms off from the underlying chips.

Alibaba and ByteDance lead the offshore pivot

Among Chinese players, Alibaba and ByteDance have become the most visible examples of this offshore strategy, in part because their consumer scale demands enormous training runs. Both companies are racing to build large language models that can compete with systems from OpenAI, Google, and others, and that race requires dense clusters of Nvidia GPUs. When those clusters could no longer be built in mainland China using the latest chips, the logical move was to follow the hardware to friendlier jurisdictions.

Detailed reporting describes how Alibaba and ByteDance shift development to Southeast Asia in order to sidestep U.S. curbs while still tapping Nvidia chips. Another account notes that large Chinese technology groups, Alibaba among them, are training AI models abroad using Nvidia products like the H20 chip. A separate analysis emphasizes that Alibaba and ByteDance are not the only firms pursuing this route, but they are the clearest illustration of how Chinese champions are reorganizing their AI pipelines around offshore compute.

Qwen, Doubao and the need for massive Nvidia clusters

The decision to move training offshore is not just about policy; it is about the technical demands of models like Qwen and Doubao. Alibaba’s Qwen family of large language models and ByteDance’s Doubao system are designed to handle complex reasoning, multilingual dialogue, and code generation at a scale that rivals Western flagships. Training such systems requires tens of thousands of high-performance GPUs, fast interconnects, and reliable power, all of which are easier to assemble where Nvidia’s most advanced chips can be freely sold.
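To see why clusters of that size are needed, a common rule of thumb estimates the compute for a dense transformer training run at roughly 6 × parameters × tokens. The sketch below applies that rule with purely illustrative numbers; the model size, token count, and per-GPU throughput are assumptions for the exercise, not disclosed specifications for Qwen, Doubao, or any particular Nvidia part.

```python
# Back-of-envelope estimate of the GPU fleet needed for a large LLM training run.
# All figures are illustrative assumptions, not disclosed specs.

params = 200e9   # assumed model size: 200B parameters
tokens = 10e12   # assumed training corpus: 10T tokens

# Rule of thumb for dense transformers: total training compute ~ 6 * N * D FLOPs
flops_needed = 6 * params * tokens

gpu_flops = 4e14          # assumed sustained throughput per accelerator (~0.4 PFLOP/s)
seconds_per_day = 86400

gpu_days = flops_needed / (gpu_flops * seconds_per_day)
print(f"Total compute: {flops_needed:.2e} FLOPs")
print(f"≈ {gpu_days:,.0f} GPU-days, or ~{gpu_days / 30:,.0f} GPUs running for a month")
```

Under these assumptions the run works out to hundreds of thousands of GPU-days, i.e. on the order of ten thousand accelerators busy for a month, which is why access to dense clusters of top-tier chips matters more than any individual purchase.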

According to detailed coverage, Chinese technology giants are increasingly training their most advanced artificial intelligence models using Nvidia accelerators located in offshore clusters, a description that aligns with the compute needs of systems like Qwen and Doubao. Another report states that their Qwen and Doubao large language models are allegedly being trained using Nvidia chips in overseas data centers, underscoring how central these offshore clusters have become to the companies’ AI roadmaps.

Why Southeast Asia is becoming China’s AI back office

Southeast Asia has emerged as the primary staging ground for this offshore training because it offers a mix of regulatory flexibility, infrastructure investment, and geographic proximity to China. Data center operators in the region have been eager to attract hyperscale tenants, and local governments see AI infrastructure as a way to climb the value chain. For Chinese firms, that combination makes it possible to lease capacity rather than build from scratch, while keeping latency and logistical complexity manageable.

One detailed account explains that Southeast Asia is where Alibaba and ByteDance are shifting development, with top Chinese companies training AI models there to sidestep U.S. curbs while maintaining that the structure is legally compliant. Another report notes that top Chinese companies are training their AI models in these regional hubs, reinforcing the idea that Southeast Asia is becoming a kind of AI back office for mainland groups that cannot directly import the latest Nvidia hardware.

Leasing offshore data centers to stay “legally compliant”

What makes this strategy particularly difficult for regulators to counter is that it relies on leasing rather than ownership. Instead of buying and importing Nvidia GPUs, Alibaba and ByteDance can rent capacity from foreign data center operators that have already acquired the chips under local rules. From a legal standpoint, the hardware never crosses into China, even though the models being trained are designed and controlled by Chinese companies.

Reporting on these arrangements notes that Alibaba and ByteDance are using Southeast Asian data center leases to access Nvidia accelerators while arguing that the arrangement is legally compliant under current export control rules. Another analysis of offshore clusters explains that accelerators located in offshore clusters are being used by Chinese technology giants to train their most advanced models, which fits the pattern of capacity leasing rather than direct chip purchases by mainland entities.

Nvidia’s constrained role and the H20 and H200 workaround

Nvidia sits at the center of this story, constrained by U.S. policy yet still supplying the chips that power Chinese AI development, albeit indirectly. When Washington tightened export rules, Nvidia responded with modified products such as the H20 that were designed to comply with performance thresholds while still serving the Chinese market. At the same time, more capable parts like the H200 remained off limits for direct sale into China, which is one reason Chinese firms have turned to offshore training.

Analysts tracking the sector point out that Nvidia cannot sell H200 chips to China under current rules, a restriction that has encouraged Chinese tech companies to train AI models abroad where those chips can still be deployed. At the same time, coverage of Chinese AI training abroad notes that using Nvidia chips such as the H20 remains central to the strategy of large Chinese technology companies that are training AI models overseas. The combination of restricted and tailored products has created a patchwork in which Nvidia’s most advanced hardware still shapes Chinese AI progress, but through more complex routes.

How offshore training reshapes the global AI map

The shift of Chinese AI training to offshore hubs is quietly redrawing the global map of where the most powerful models are built. Instead of a binary split between U.S. and Chinese data centers, there is now a third zone of intense activity in Southeast Asia, where local operators host clusters that serve foreign clients. This redistribution of compute has implications for everything from regional energy demand to the bargaining power of governments that host or regulate these facilities.

Reports that describe offshore clusters serving Chinese technology giants, and that detail how top Chinese companies are training AI models in Southeast Asia, show how this new geography is taking shape. In effect, the region is becoming a neutral ground where U.S. chipmakers, Chinese AI developers, and local infrastructure providers intersect, each responding to the incentives and constraints created by export controls.

The limits of current export controls

For policymakers in Washington, the offshore training trend exposes the limits of a strategy that focuses on where chips are sold rather than how they are used. As long as Nvidia accelerators can be purchased by entities outside China, and as long as Chinese firms can lease access to those accelerators, the practical impact of the restrictions will be partial. Tightening the rules further would mean reaching deeper into third-country transactions, a step that would raise diplomatic and commercial risks.

Accounts that describe Chinese tech companies training AI models abroad to tap Nvidia chips, and that detail how ByteDance and Alibaba are not the only firms pursuing this route, underline how widespread the workaround has become. The more entrenched these offshore patterns are, the harder it will be for any single government to unwind them without reshaping the broader global market for AI compute.

More from MorningOverview