AI in Asia
Why China AI Skeptics Are Reading It Wrong
Intelligence Desk
Editorial Team
Deep Dive
Updated Apr 28, 2026 · 8 min read

The Western consensus says China's AI is hitting a wall. The numbers we are reading from Beijing tell a different story.

The Wall Narrative Is Wrong, And It Is Hardening Fast

For most of 2025, the Western consensus on Chinese AI has amounted to variations on a single theme: export controls are working, China is hitting a compute wall, and the next generation of frontier models will leave Beijing behind.

The narrative is now hardening into investment policy at sovereign wealth funds and into editorial line at major US technology publications. From where we sit in Asia, that consensus is misreading the data, and the cost of that misreading will compound for the rest of this decade.

This piece is a contrarian read on what is actually happening with Chinese AI in April 2026. We are not arguing that China has reached parity with the US frontier. We are arguing that the gap is narrower than the consensus believes, the trajectory is steeper than the consensus models, and the policy environment in Beijing is now more aligned to AI build-out than the policy environment in Washington.

What The Wall Narrative Gets Right

The skeptical case has three legitimate points, and we want to engage them seriously rather than wave them off.

First, the Nvidia H200 and B300 export controls have worked at the chip level. Chinese hyperscalers cannot legally buy the latest Nvidia silicon, and the smuggled supply that filled the gap in 2024 has been substantially constrained by tighter US enforcement and by Singapore's recent transhipment crackdown. That is real.

Second, Huawei Ascend, Cambricon, and Biren chips are not at parity with Nvidia on absolute performance. The gap on raw FLOPS per watt and on memory bandwidth remains material, and the software stack lags CUDA by anywhere from 18 to 30 months depending on the workload.

Third, several Chinese frontier labs have publicly missed announced model targets. Baidu's Wenxin 5 ran late, Alibaba's Qwen 3.5 was originally planned as Qwen 4, and Zhipu AI's commercial roadmap has shifted twice in 18 months.

What The Wall Narrative Gets Wrong

Where the consensus collapses is on what those facts add up to. The implicit conclusion in most Western analysis is that constrained chips plus delayed models equals a structural deceleration. That is not what the data shows.

Domestic substitution has now reached 82% of Chinese hyperscaler GPU procurement, according to a Center for Strategic and International Studies analysis released this month. That figure was 31% at the start of 2024. The substitution is happening faster than skeptics modelled, and the perceived hardware gap is shrinking because Chinese hyperscalers are buying differently rather than buying less.

More importantly, the open release of DeepSeek V4, Qwen 3.5, and GLM 5 in 2026 has pushed Chinese frontier model performance into a position where the relevant metric is not raw benchmark wins. DeepSeek V4 sits within 4-7% of the latest Anthropic and OpenAI models on most public benchmarks at roughly 22% of the cost per inference token. The competitive frame is now total cost of ownership and deployment flexibility, and on those measures Chinese labs are leading.

What ByteDance, Alibaba, And Huawei Are Actually Doing

The most useful place to look is at the spending plans, not the model announcements. Combined 2026 capex from Alibaba, Tencent, Baidu, ByteDance, Huawei, and the central government cluster sits at roughly USD 130 billion, according to Jefferies Asia. That is up from USD 47 billion in 2024. The capex mix has also changed sharply.

Capex Category                       2024        2026 forecast
Imported Nvidia GPUs                 USD 22B     USD 9B
Huawei Ascend / Cambricon / Biren    USD 6B      USD 51B
Data centre construction             USD 12B     USD 38B
Power and cooling                    USD 3B      USD 14B
Software and tools                   USD 4B      USD 18B

The domestic chip line has grown more than eightfold in two years. The data centre construction line has tripled. Software and tools, the often-overlooked part of the AI stack, has grown more than fourfold.
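For readers who want to check the arithmetic, the line items in the table reconcile with the headline totals and the growth multiples cited above. A minimal sketch, using the figures exactly as printed:

```python
# Capex line items (USD billions, 2024 vs 2026 forecast) from the table above,
# per Jefferies Asia.
capex = {
    "Imported Nvidia GPUs":              (22, 9),
    "Huawei Ascend / Cambricon / Biren": (6, 51),
    "Data centre construction":          (12, 38),
    "Power and cooling":                 (3, 14),
    "Software and tools":                (4, 18),
}

# Totals should match the USD 47B (2024) and USD 130B (2026) figures in the text.
total_2024 = sum(y24 for y24, _ in capex.values())
total_2026 = sum(y26 for _, y26 in capex.values())
print(f"2024 total: USD {total_2024}B, 2026 total: USD {total_2026}B")

# Growth multiple per line item: domestic chips ~8.5x, data centres ~3.2x,
# software and tools ~4.5x, while imported Nvidia GPUs shrink to ~0.4x.
for name, (y24, y26) in capex.items():
    print(f"{name}: {y26 / y24:.1f}x")
```

The check confirms the claims in the surrounding text: the domestic chip line grows more than eightfold, data centre construction roughly triples, and software and tools grows more than fourfold, even as the imported Nvidia line falls by more than half.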

This is not a country preparing for a slowdown. It is a country making the most aggressive AI infrastructure bet in human economic history, with state backing and aligned commercial incentives.

Why Open Models Matter More Than Most US Analysts Realise

The second thing the consensus underweights is the strategic effect of Chinese open weight releases. DeepSeek V4 is now serving roughly 37 million API calls a day through domestic Chinese cloud routers, and a further but harder-to-measure volume through self-hosted deployments. Qwen 3.5 is the dominant open base model in Southeast Asia, and is increasingly used by South Korean creative AI tools under licence.

The open release strategy is doing three things at once. It is pulling developers into Chinese model architectures, it is building global goodwill in markets the US has antagonised through trade policy, and it is providing meaningful cover against any future export control aimed at model weights rather than chips.

Where The Real Risks Sit

We do not believe the Chinese trajectory is risk-free. The genuine risks are different from the ones the wall narrative describes.

The first real risk is power. China's grid expansion is impressive, but the capex line for power and cooling has grown for a reason. Beijing is now hitting genuine constraints on regional grid stability, particularly in Inner Mongolia and Guizhou, where the cheap-electricity AI clusters have concentrated. If the Guizhou compute cluster gets through its summer peak load this year without an outage, that is a meaningful positive signal. If it suffers outages, the constraint is binding.

The second is talent. Chinese AI labs are paying salaries that compete with US frontier labs, but the supply of senior researchers is genuinely tight. The labs that have raised most aggressively, including Moonshot AI at a USD 23 billion valuation, are spending that money on senior researchers who have existed as a distinct labour market for only the past 36 months.

The third is policy alignment between Beijing and the major hyperscalers. The official line is unified, but the operational tensions between the State Administration for Market Regulation, the Ministry of Industry and Information Technology, and the Cyberspace Administration of China are not fully resolved. Any meaningful enforcement action against a major hyperscaler in 2026 would change the trajectory.

What This Should Change For Asian Investors And Operators

The most consequential implication is for Asian operators that have been hedging against a Chinese AI slowdown. That hedge is now structurally mispriced. ASEAN sovereign cloud providers, Korean foundation labs, and Indian SaaS firms are all having to revise their China assumptions. The competitive set is broader, the pricing pressure is sharper, and the window for differentiation on Chinese model substitutes is narrower than it looked in 2024.

For readers tracking the Huawei supernode build-out and Asia's compute race, the central read is that China is no longer competing on imported infrastructure. The competition has shifted to a domestic stack that is increasingly self-sufficient, and the rest of Asia has to decide whether to align, hedge, or build alternatives that have a credible economic case.

The AIinASIA View: The wall narrative will be discredited within 18 months, and the cost of believing it now will be embedded in misallocated capital and missed partnerships. We are not arguing China is winning. We are arguing the gap to the US frontier is narrower than the consensus thinks, the trajectory is steeper, and the policy environment in Beijing is more aligned. Western analysts have been pattern-matching this story onto past industrial policy failures. That pattern does not fit. The relevant comparator is not Japan in 1990 or Korea in 2000. It is the United States in the late 1940s, when an aligned state, abundant capital, and a captive industrial supply chain produced a generation of compounding advantage.

Frequently Asked Questions

Are you saying Nvidia chips no longer matter to China?

No. We are saying the marginal Nvidia chip matters much less than it did 24 months ago, because Chinese hyperscalers have substituted into Huawei Ascend and Cambricon for the bulk of new builds, and because the software stack is closing the gap.

What about export controls on model weights?

This is a legitimate concern but the open release of DeepSeek V4, Qwen 3.5, and GLM 5 has substantially reduced the leverage Washington would have from any future weight-level controls. The horse has left the barn.

Is the data really 82% domestic substitution?

The CSIS figure tracks GPU procurement at the major hyperscalers. There is some methodological uncertainty, but the figure is broadly consistent with separate work by Bernstein and Jefferies Asia.

What happens if Beijing intervenes against a hyperscaler?

This would meaningfully change the trajectory and is the most credible single risk to our thesis. We are watching for any sign of regulatory tension.

By The Numbers

USD 130 billion
China AI Capex 2026

Combined 2026 capex from Alibaba, Tencent, Baidu, ByteDance, Huawei, and the central government cluster, per Jefferies Asia.

37 million
DeepSeek API Calls Daily

Average daily API calls served by DeepSeek's open weights via domestic Chinese cloud routers in March 2026.

82%
Domestic Substitution

Share of Chinese hyperscaler GPU procurement now sourced from Huawei Ascend, Cambricon, or Biren rather than Nvidia or AMD.

USD 4.7 billion
Moonshot Series E

Most recent Moonshot AI funding round, valuation now USD 23 billion as of April 2026.

Three
Frontier Models Now Open

DeepSeek V4, Qwen 3.5, and GLM 5 are all available as open weights, each released by a Chinese lab in 2026.

USD 800 billion
10-Year Domestic AI Plan

Total state-led AI investment baseline through 2035, per the State Council Information Office April 2026 brief.

