The Wall Narrative Is Wrong, And It Is Hardening Fast
For most of 2025, the Western consensus on Chinese AI has been variations on a single theme. Export controls are working, China is hitting a compute wall, and the next generation of frontier models will leave Beijing behind.
The narrative is now hardening into investment policy at sovereign wealth funds and into the editorial line at major US technology publications. From where we sit in Asia, that consensus is misreading the data, and the cost of that misreading will compound for the rest of this decade.
This piece is a contrarian read on what is actually happening with Chinese AI in April 2026. We are not arguing that China has reached parity with the US frontier. We are arguing that the gap is narrower than the consensus believes, the trajectory is steeper than the consensus models, and the policy environment in Beijing is now more aligned to AI build-out than the policy environment in Washington.
What The Wall Narrative Gets Right
The skeptical case has three legitimate points, and we want to engage them seriously rather than wave them off.
First, the Nvidia H200 and B300 export controls have worked at the chip level. Chinese hyperscalers cannot legally buy the latest Nvidia silicon, and the smuggled supply that filled the gap in 2024 has been substantially constrained by tighter US enforcement and by Singapore's recent transhipment crackdown. That is real.
Second, Huawei Ascend, Cambricon, and Biren chips are not at parity with Nvidia on absolute performance. The gap on raw FLOPS per watt and on memory bandwidth remains material, and the software stack lags CUDA by anywhere from 18 to 30 months depending on the workload.
Third, several Chinese frontier labs have publicly missed announced model targets. Baidu's Wenxin 5 ran late, Alibaba's Qwen 3.5 was originally planned as Qwen 4, and Zhipu AI's commercial roadmap has shifted twice in 18 months.
What The Wall Narrative Gets Wrong
Where the consensus collapses is on what those facts add up to. The implicit conclusion in most Western analysis is that constrained chips plus delayed models equals a structural deceleration. That is not what the data shows.
Domestic substitution has now reached 82% of Chinese hyperscaler GPU procurement, according to a Center for Strategic and International Studies analysis released this month. That figure was 31% at the start of 2024. The substitution is happening faster than skeptics modelled, and the perceived hardware gap is shrinking because Chinese hyperscalers are buying differently rather than buying less.
More importantly, the open release of DeepSeek V4, Qwen 3.5, and GLM 5 in 2026 has pushed Chinese frontier model performance into a position where the relevant metric is no longer raw benchmark wins. DeepSeek V4 sits within 4-7% of the latest Anthropic and OpenAI models on most public benchmarks at roughly 22% of the cost per inference token. The competitive frame is now total cost of ownership and deployment flexibility, and on those measures Chinese labs are leading.
What ByteDance, Alibaba, And Huawei Are Actually Doing
The most useful place to look is at the spending plans, not the model announcements. Combined 2026 capex from Alibaba, Tencent, Baidu, ByteDance, Huawei, and the central government cluster sits at roughly USD 130 billion, according to Jefferies Asia. That is up from USD 47 billion in 2024. The capex mix has also changed sharply.
| Capex Category | 2024 | 2026 forecast |
|---|---|---|
| Imported Nvidia GPUs | USD 22B | USD 9B |
| Huawei Ascend / Cambricon / Biren | USD 6B | USD 51B |
| Data centre construction | USD 12B | USD 38B |
| Power and cooling | USD 3B | USD 14B |
| Software and tools | USD 4B | USD 18B |
The domestic chip line has grown more than eightfold in two years. The data centre construction line has more than tripled. Software and tools, the often-overlooked part of the AI stack, has grown more than fourfold.
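For readers who want to check the arithmetic, a quick sketch using only the figures quoted in the table above. Note that the category lines sum exactly to the USD 47 billion and USD 130 billion headline totals cited earlier; the labels and numbers are the article's, not independent data.

```python
# Growth multiples implied by the Jefferies Asia capex figures quoted
# in the table above (USD billions, 2024 vs 2026 forecast).
capex = {
    "Imported Nvidia GPUs": (22, 9),
    "Huawei Ascend / Cambricon / Biren": (6, 51),
    "Data centre construction": (12, 38),
    "Power and cooling": (3, 14),
    "Software and tools": (4, 18),
}

# Per-category growth multiple, 2026 forecast over 2024 actual.
for category, (y2024, y2026) in capex.items():
    print(f"{category}: {y2026 / y2024:.1f}x")

# The table lines sum to the headline totals cited in the text.
total_2024 = sum(a for a, _ in capex.values())   # 47
total_2026 = sum(b for _, b in capex.values())   # 130
print(f"Total: USD {total_2024}B -> USD {total_2026}B")
```

The domestic chip multiple comes out at 8.5x and data centre construction at roughly 3.2x, consistent with the "eightfold" and "tripled" characterisations.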
This is not a country preparing for a slowdown. It is a country making the most aggressive AI infrastructure bet in human economic history, with state backing and aligned commercial incentives.
Why Open Models Matter More Than Most US Analysts Realise
The second thing the consensus underweights is the strategic effect of Chinese open weight releases. DeepSeek V4 is now serving roughly 37 million API calls a day through domestic Chinese cloud routers, and a further but harder-to-measure volume through self-hosted deployments. Qwen 3.5 is the dominant open base model in Southeast Asia, and is increasingly used by South Korean creative AI tools under licence.
The open release strategy is doing three things at once. It is pulling developers into Chinese model architectures, it is building global goodwill in markets the US has antagonised through trade policy, and it is providing meaningful cover against any future export control aimed at model weights rather than chips.
Where The Real Risks Sit
We do not believe the Chinese trajectory is risk-free. The genuine risks are different from the ones the wall narrative describes.
The first real risk is power. China's grid expansion is impressive, but the AI capex lines for cooling and substations have grown for a reason. Beijing is now hitting genuine constraints on regional grid stability, particularly in Inner Mongolia and Guizhou, where the cheap-electricity AI clusters have concentrated. If the Guizhou compute cluster gets through its summer peak load this year without an outage, that is a meaningful signal that the constraint is manageable. If it does not, the constraint is real.
The second is talent. Chinese AI labs are paying salaries that compete with US frontier labs, but the supply of senior researchers is genuinely tight. The labs that have raised most aggressively, including Moonshot AI at a USD 23 billion valuation, are bidding into a senior-researcher labour market that has only existed for the past 36 months.
The third is policy alignment between Beijing and the major hyperscalers. The official line is unified, but the operational tensions between the State Administration for Market Regulation, the Ministry of Industry and Information Technology, and the Cyberspace Administration of China are not fully resolved. Any meaningful enforcement action against a major hyperscaler in 2026 would change the trajectory.
What This Should Change For Asian Investors And Operators
The most consequential implication is for Asian operators that have been hedging against a Chinese AI slowdown. That hedge is now structurally mispriced. ASEAN sovereign cloud providers, Korean foundation labs, and Indian SaaS firms are all having to revise their China assumptions. The competitive set is broader, the pricing pressure is sharper, and the window for differentiation on Chinese model substitutes is narrower than it looked in 2024.
For readers tracking the Huawei supernode build-out and Asia's compute race, the central read is that China is no longer competing on imported infrastructure. The competition has shifted to a domestic stack that is increasingly self-sufficient, and the rest of Asia has to decide whether to align, hedge, or build alternatives that have a credible economic case.
Frequently Asked Questions
Are you saying Nvidia chips no longer matter to China?
No. We are saying the marginal Nvidia chip matters much less than it did 24 months ago, because Chinese hyperscalers have substituted into Huawei Ascend and Cambricon for the bulk of new builds, and because the software stack is closing the gap.
What about export controls on model weights?
This is a legitimate concern but the open release of DeepSeek V4, Qwen 3.5, and GLM 5 has substantially reduced the leverage Washington would have from any future weight-level controls. The horse has left the barn.
Is the data really 82% domestic substitution?
Broadly, yes. The CSIS figure tracks GPU procurement at the major hyperscalers. There is some methodological uncertainty, but it is consistent with separate work by Bernstein and Jefferies Asia.
What happens if Beijing intervenes against a hyperscaler?
This would meaningfully change the trajectory and is the most credible single risk to our thesis. We are watching for any sign of regulatory tension.