Japan is racing to build its own artificial intelligence backbone, and the effort now stretches from a mid-sized Osaka-based cloud provider openly publishing its supercomputer blueprints to multibillion-dollar commitments from some of the world’s largest technology companies. Sakura Internet, SoftBank, Microsoft, and Nvidia have each staked significant resources on the premise that Japan can and should train advanced AI models on domestic soil rather than routing sensitive workloads through data centers overseas.
The push has accelerated sharply since early 2025. SoftBank broke ground on what it calls the most powerful AI supercomputer in Asia, a system built on Nvidia’s Grace Blackwell chips and housed in Hokkaido. Microsoft committed roughly $2.9 billion to expand its Azure cloud and AI infrastructure across Japan, including new data center capacity in the Tokyo and Osaka regions. And Nvidia CEO Jensen Huang has made Japan a recurring stop on his global tour, signing cooperation agreements with Japanese firms and government agencies aimed at expanding GPU access for domestic researchers and startups.
Sakura Internet’s open blueprint
Among these players, Sakura Internet stands out for a different reason: transparency. The company’s research division has published two detailed technical papers on arXiv describing the design, configuration, and real-world performance of its AI-focused high-performance computing system, known as SAKURAONE.
The first paper documents the cluster’s architecture and how it behaves under actual large language model (LLM) training conditions. The system uses an Ethernet-based interconnect rather than a proprietary networking fabric, a deliberate choice that favors openness and interoperability. For companies training LLMs, the interconnect between GPU nodes is often the performance bottleneck. Ethernet hardware is widely available and well understood, which means other organizations could replicate or adapt Sakura’s design without licensing specialized technology.
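To make the bottleneck concrete, the sketch below measures all-reduce bandwidth across GPU nodes, the collective operation that dominates communication in distributed LLM training. It is a generic probe assuming a PyTorch/NCCL stack launched with torchrun (NCCL runs over standard Ethernet and RoCE as well as InfiniBand); it is not code from the SAKURAONE papers, and the message size and iteration counts are arbitrary.

```python
# Minimal all-reduce bandwidth probe for an Ethernet (RoCE) GPU fabric.
# Illustrative sketch only; launch with torchrun so the NCCL process
# group can read its rendezvous settings from the environment.
import os
import time
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    rank = dist.get_rank()

    # 256 MiB of fp32, a message size typical of gradient buckets.
    tensor = torch.ones(256 * 1024 * 1024 // 4, device="cuda")

    # Warm up so one-time setup cost doesn't skew the measurement.
    for _ in range(5):
        dist.all_reduce(tensor)
    torch.cuda.synchronize()

    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(tensor)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    if rank == 0:
        gb = tensor.numel() * 4 / 1e9
        n = dist.get_world_size()
        # NCCL's standard bus-bandwidth formula for ring all-reduce.
        busbw = 2 * (n - 1) / n * gb * iters / elapsed
        print(f"approx bus bandwidth: {busbw:.1f} GB/s")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

On a healthy fabric the measured bus bandwidth should sit near the NICs’ line rate; a persistent gap is exactly the kind of interconnect overhead that workload-level reporting like Sakura’s makes visible.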
A companion paper places the system in a global performance context, referencing benchmarks such as the TOP500 and ISC rankings and describing the company’s broader strategy of building AI compute capacity through private-sector investment. Both documents are freely accessible author manuscripts, not marketing materials, and they describe hardware configurations, networking choices, and measured workload behavior in language aimed at engineers and computer scientists.
The single-tenant architecture is a notable design decision. By dedicating the full cluster to one training job at a time, Sakura can measure workload dynamics without interference from competing processes. This yields cleaner performance data and more predictable training runs, both of which matter when developing LLMs, where small inefficiencies compound across billions of parameters and days of continuous computation.
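A back-of-the-envelope calculation shows why those small inefficiencies matter. The cluster size, run length, and utilization shortfall below are illustrative assumptions, not figures from Sakura’s papers:

```python
# How a small utilization shortfall compounds over a long dedicated run.
# All inputs are hypothetical placeholders.
HOURS_PER_DAY = 24

def wasted_gpu_hours(gpus: int, days: int, utilization_loss: float) -> float:
    """GPU-hours lost to a given fractional utilization shortfall."""
    return gpus * days * HOURS_PER_DAY * utilization_loss

# A hypothetical 1,000-GPU cluster running one month at 3% below target:
print(wasted_gpu_hours(gpus=1000, days=30, utilization_loss=0.03))
# -> 21600.0 GPU-hours, i.e. nearly a full day of the entire cluster's time.
```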
Rather than limiting disclosure to theoretical peak performance, the researchers describe observed GPU utilization, communication overhead, and job behavior under realistic training scenarios. That kind of workload-level visibility is uncommon in an industry where many providers advertise aggregate FLOPS but reveal little about how systems perform during long, memory-intensive runs.
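As an illustration of what workload-level visibility involves, the loop below samples per-GPU utilization and memory through NVIDIA’s NVML bindings (the nvidia-ml-py package). It is a generic monitoring sketch, not the instrumentation described in the papers:

```python
# Periodically sample per-GPU utilization and memory via NVML.
# Generic sketch; requires the nvidia-ml-py package (import name pynvml).
import time
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]

try:
    while True:
        for i, h in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(h)  # % over last window
            mem = pynvml.nvmlDeviceGetMemoryInfo(h)         # bytes
            print(f"gpu{i}: sm={util.gpu}% mem_io={util.memory}% "
                  f"used={mem.used / 2**30:.1f} GiB")
        time.sleep(10)  # coarse sampling is fine for multi-day runs
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```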
Why Japan is building at home
Several forces are converging to make domestic AI infrastructure a national priority. Japan’s Ministry of Economy, Trade and Industry has channeled billions of yen into subsidies for AI-related compute, and the government has identified sovereign AI capacity as a strategic concern. Data residency regulations make it complicated for Japanese enterprises handling sensitive information to rely entirely on foreign cloud providers. And tightening U.S. export controls on advanced semiconductors have introduced uncertainty about long-term access to cutting-edge chips, giving Japanese firms an incentive to lock in GPU supply and build infrastructure now.
SoftBank’s approach reflects the scale end of the spectrum. Its planned Izumi supercomputer, backed by Nvidia’s latest accelerators, is designed to serve as a national-scale AI training resource. The project aligns with SoftBank CEO Masayoshi Son’s stated ambition to make Japan a global AI hub, and it draws on the company’s deep ties with Nvidia, which supplies the GPU architecture underpinning most frontier AI training worldwide.
Microsoft’s investment, announced in 2024, targets cloud and AI infrastructure expansion that would give Japanese enterprises access to Azure-based AI services running on local hardware. The commitment signals that even hyperscale cloud providers see value in placing compute physically inside Japan rather than asking customers to connect to distant regions.
Sakura Internet occupies a different niche. The company secured a significant contract from Japan’s National Institute of Information and Communications Technology (NICT) for government cloud and AI infrastructure, positioning it as a domestic alternative for public-sector workloads. Its decision to publish detailed technical findings on arXiv reflects a transparency-first philosophy that contrasts with the closed approaches taken by many hyperscale providers in the United States and China.
What the technical papers show, and what they don’t
The SAKURAONE papers are the strongest primary-source evidence available for understanding what Sakura Internet has actually built. They confirm that the company has moved beyond announcements into operational deployment of a large-scale AI training system with enough capacity to handle demanding LLM workloads.
But the papers have clear limits. They do not name specific GPU models, quantities, or supply agreements. They reference TOP500 and ISC benchmarks without detailing whether SAKURAONE has formally appeared on those lists or was designed with that goal in mind. Exact capital expenditure figures, funding rounds, and government subsidy amounts are absent from the manuscripts. Readers should treat any claims about specific dollar amounts or partnership terms with caution unless those figures appear in audited financial disclosures or official corporate announcements.
The papers also say little about higher-level offerings that enterprises now expect: turnkey training platforms, fine-tuning services, model hosting environments, or developer tooling. SAKURAONE is evidence of hardware capability, not a complete indicator of commercial readiness.
Where Japan’s AI infrastructure race goes from here
For organizations tracking the global distribution of AI compute power, the activity in Japan matters because it demonstrates that private-sector actors outside the United States and China are investing heavily in domestic infrastructure capable of supporting frontier model development. The practical question for any company considering training AI models in Japan is whether systems like Sakura’s and SoftBank’s, or Microsoft’s locally hosted Azure capacity, can deliver competitive price-performance, reliability, and support compared with established international cloud regions.
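One way to frame that price-performance question concretely is cost per unit of delivered, rather than peak, compute. The prices and utilization figures below are entirely hypothetical; no provider’s actual rates are implied:

```python
# Effective cost of delivered compute once real utilization is factored in.
# All inputs are hypothetical placeholders, not quoted rates.
def cost_per_effective_pflop_hour(gpu_hour_price_usd: float,
                                  peak_pflops_per_gpu: float,
                                  observed_utilization: float) -> float:
    """Dollars per petaFLOP-hour of *delivered* compute."""
    return gpu_hour_price_usd / (peak_pflops_per_gpu * observed_utilization)

# Two hypothetical offerings: cheaper sticker price but lower utilization,
# versus a pricier region with a better-tuned fabric.
print(cost_per_effective_pflop_hour(4.00, 1.0, 0.35))  # -> ~11.43 $/PF-hour
print(cost_per_effective_pflop_hour(5.00, 1.0, 0.50))  # -> 10.00 $/PF-hour
```

The comparison shows why the lower sticker price can still lose once observed utilization is factored in, and why the workload-level data Sakura publishes is directly relevant to buyers.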
Answering that requires more than the technical metrics disclosed in research papers. Pricing, service-level agreements, ecosystem integration, and the regulatory environment around chip procurement will all shape whether Japan’s AI infrastructure buildout translates into a self-sustaining ecosystem or remains dependent on foreign technology at critical points in the supply chain.
As of spring 2026, the verified record shows multiple Japanese and international players committing real capital and engineering resources to the effort. Sakura Internet’s open, Ethernet-based cluster and its unusual willingness to publish operational data offer one concrete data point. SoftBank’s supercomputer ambitions and Microsoft’s regional expansion offer others. Together, they sketch the outline of a domestic AI infrastructure layer that did not exist two years ago. How far and how fast it scales will depend on decisions still being made in boardrooms, government ministries, and chip fabrication plants around the world.
*This article was researched with the help of AI, with human editors creating the final content.