AI in Asia · Greater China

Huawei Supernode Becomes China's Top AI Asset

DeepSeek V4 confirms Huawei's Supernode and Ascend 950 can train frontier-class models, validating two years of sovereign-stack work.

Updated Apr 26, 2026 · 7 min read

Huawei's Supernode Just Quietly Became China's Most Important AI Infrastructure Asset

The DeepSeek V4 launch on Friday confirmed something China watchers have been tracking for two years: Huawei's Supernode interconnect, paired with the Ascend 950 accelerator, is now genuinely capable of training a frontier-class large language model. That is the moment the Chinese AI infrastructure story stopped being about catching up and started being about building a parallel stack. For Greater China observers, the technical achievement matters less than the strategic implication, which is that DeepSeek now has a credible domestic alternative to Nvidia.

What Supernode Actually Is

Huawei's Supernode is a high-bandwidth interconnect technology designed to knit together large clusters of Ascend accelerators into a single training fabric. The architecture trades some per-chip raw performance for system-level throughput, which is exactly what large language model training needs. According to Fortune, DeepSeek used Supernode-connected Ascend 950 clusters to train V4-Pro, supplemented by Cambricon silicon. The detail matters because it is the first publicly confirmed end-to-end frontier-class training run on Chinese-designed silicon.

The Supernode story is part of a longer Huawei strategy. The company has been quietly investing in interconnect, networking, and accelerator software for years, and the Ascend 950 is now the public face of that work. Huawei's pitch is no longer that Ascend matches Nvidia chip-for-chip. The pitch is that Ascend plus Supernode plus Mindspore software gives Chinese customers a complete, sovereign, and price-competitive stack.

"What Huawei has done with Supernode is build the infrastructure equivalent of the Apple stack: tightly integrated, optimised end to end, and only fully usable if you commit to the whole thing."

Dan Wang, technology analyst and senior fellow, Paul Tsai China Center, Yale Law School

Why This Lands Differently In Greater China Than Elsewhere

For a Chinese state-owned enterprise, a Belt and Road customer, or a Hong Kong financial institution evaluating local AI deployment, the V4-Supernode combination changes the procurement calculus. Six months ago the only realistic path to frontier-class LLM training in China involved either smuggled Nvidia silicon or extreme capital efficiency. Today the path includes a domestic-silicon option that has been demonstrated to work at frontier scale by a known reference customer.

Hong Kong is particularly interesting in this picture. The city's universities and financial institutions have been navigating a delicate position between US export controls and Chinese sovereign-stack ambitions. The Hong Kong University of Science and Technology and University of Hong Kong both run substantial AI research programmes that have historically depended on Nvidia silicon. The DeepSeek V4 demonstration gives those institutions a credible alternative they can evaluate for sensitive workloads without having to navigate US licensing.

By The Numbers

1.6 trillion parameters in DeepSeek V4-Pro, the model now confirmed to have been trained on Huawei Ascend silicon.

1,000,000-token context window in V4-Pro, achieved without using Nvidia HBM memory.

80% combined HBM market share held by Samsung and SK Hynix, the supply chain V4 explicitly bypassed.

$54.6 billion estimated 2026 HBM market size from Bank of America, up 58% year-on-year.
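As a quick sanity check on the Bank of America figures cited above, the stated 58% year-on-year growth implies a 2025 HBM market of roughly $34.6 billion. A minimal illustration of that arithmetic:

```python
# Illustrative back-calculation from the figures cited above:
# a $54.6bn estimated 2026 HBM market, up 58% year-on-year.
size_2026 = 54.6           # USD billions, estimated 2026 HBM market
yoy_growth = 0.58          # 58% year-on-year growth

# Implied 2025 market size: 2026 size divided by the growth factor.
implied_2025 = size_2026 / (1 + yoy_growth)
print(f"Implied 2025 HBM market: ${implied_2025:.1f}bn")
```

The result, roughly $34.6 billion, is an inference from the cited numbers, not a figure Bank of America has published.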

Where Alibaba And Tencent Sit Now

Both Alibaba and Tencent have invested heavily in their own silicon and cloud AI capabilities. Alibaba's Pingtouge division produces the Hanguang inference accelerator and has an active foundation-model programme around the Qwen family. Tencent operates a parallel effort around Hunyuan and the Yuanbao platform. The DeepSeek V4 demonstration creates pressure for both companies to either match the Huawei-DeepSeek combination or commit publicly to a Western-aligned stack.

Reports earlier this month that Alibaba and Tencent are in advanced talks to invest in DeepSeek at a $20 billion valuation suggest the two giants are choosing to align with the Huawei-DeepSeek axis rather than compete with it. That alignment, if it closes, would consolidate the Chinese sovereign-stack story into a single unified front, with Huawei providing infrastructure, DeepSeek providing the open-weights models, and Alibaba and Tencent providing distribution and consumer reach.

If Alibaba and Tencent close the DeepSeek investment, the Chinese sovereign AI stack ceases to be a thesis and becomes a fact. That is a structural change for Greater China procurement.

Bill Bishop, founder, Sinocism newsletter

What This Does Not Solve

The Huawei-DeepSeek combination does not solve all of China's AI infrastructure problems. The Supernode-Ascend stack remains capacity-constrained, and demand from Chinese state-owned enterprises and Belt and Road customers will exceed supply throughout 2026. The software ecosystem around Mindspore is less mature than CUDA, which means porting third-party models onto the stack still requires meaningful engineering effort. And the export reach of Ascend silicon remains limited by the same US licensing concerns that constrain Nvidia exports to China.

For Hong Kong, Macau, and Taiwan AI users, the practical implication is that hybrid stacks will dominate procurement decisions for the next 12 to 18 months. Mainland-facing workloads will use Huawei-DeepSeek where possible. Outward-facing workloads, including cross-border services and US-licensed software, will continue to use Nvidia where available. Procurement teams will need to develop genuine multi-stack competence, which most have not had to do before.
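The multi-stack procurement logic described above can be sketched as a simple decision rule. This is a hypothetical illustration of the reasoning, not any institution's actual policy; the workload flags and stack names are assumptions introduced for the example.

```python
# Hypothetical sketch of the hybrid-stack procurement logic described
# above. The flags and stack labels are illustrative assumptions.
def choose_stack(workload: dict) -> str:
    """Pick a training stack for a Greater China workload."""
    if workload.get("us_export_licensed_software"):
        # Outward-facing work with US-licensed dependencies stays on Nvidia.
        return "nvidia"
    if workload.get("mainland_facing") and workload.get("domestic_capacity"):
        # Mainland-facing workloads use the sovereign stack where
        # Ascend capacity is actually available.
        return "huawei-deepseek"
    # Default while Ascend supply remains constrained.
    return "nvidia"

print(choose_stack({"mainland_facing": True, "domestic_capacity": True}))
print(choose_stack({"us_export_licensed_software": True}))
```

The point of the sketch is that the decision is workload-level, not institution-level, which is exactly why procurement teams need multi-stack competence.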

Greater China AI stack:

| Component | Sovereign Option | Maturity | 2026 Outlook |
|---|---|---|---|
| Silicon | Huawei Ascend 950 | Production | Capacity expansion |
| Interconnect | Huawei Supernode | Demonstrated at frontier scale | Wider deployment |
| Software | Huawei Mindspore | Improving | Closing CUDA gap |
| Foundation models | DeepSeek V4, Qwen, Hunyuan | Frontier-class | Open-weights race |
| Distribution | Alibaba Cloud, Tencent Cloud | Mature | Tighter alignment |

For deeper context, see our coverage of the Alibaba and Tencent talks to invest in DeepSeek, the DeepSeek V4 launch details, and the TSMC profit-jump signal.

The AIinASIA View: The technical confirmation that Supernode and Ascend can train a frontier-class model is the most consequential Greater China AI infrastructure development of 2026 to date. It validates two years of state-directed investment, gives DeepSeek a defensible moat against Western frontier labs in regulated Chinese sectors, and creates a real procurement path for Hong Kong and Macau institutions navigating US export controls. The timing also matters. With Alibaba and Tencent reportedly close to investing in DeepSeek, the Chinese sovereign AI stack is consolidating into a single unified front faster than most Western analysts expected. We expect a wave of state-owned-enterprise procurement announcements built on the Huawei-DeepSeek combination through Q3 2026.

Frequently Asked Questions

Does Supernode mean China no longer needs Nvidia for AI training?

Not entirely. Supernode and Ascend together can train frontier-class models, as DeepSeek V4 demonstrates, but the supply of Ascend silicon and the maturity of the Mindspore software ecosystem still mean Nvidia is preferred by many Chinese commercial AI users where it is licensable.

How does Supernode compare to Nvidia NVLink and InfiniBand?

The architectures differ on raw bandwidth and protocol design, but at the system level both deliver enough throughput to train frontier models. Supernode is more tightly integrated with Ascend silicon, while NVLink and InfiniBand offer broader vendor compatibility.

Will Hong Kong universities adopt the Huawei-DeepSeek stack?

Most Hong Kong universities will run hybrid stacks. Sensitive workloads that would otherwise require US export licences will increasingly be evaluated on the Huawei-DeepSeek combination, while general research workloads will continue to use Nvidia where available.

What does this mean for Taiwanese chip suppliers?

TSMC remains essential to Nvidia's supply chain and will not be displaced by domestic Chinese silicon any time soon. The longer-term question is whether Chinese foundries can close enough of the gap to make Ascend production fully sovereign, which is a multi-year story.

Could Western companies use the same Huawei-DeepSeek stack?

Technically yes, since DeepSeek's weights are open and Mindspore software is publicly available. Practically Western companies face regulatory and reputational constraints that limit adoption to specific use cases, with the most likely candidates being academic research and developing-market deployments.