AI in ASIA
Thursday, 19 March 2026

3Before9

3 must-know AI stories before your 9am coffee

Who should pay attention

Enterprise AI buyers in Asia-Pacific | Semiconductor and memory investors | Cloud infrastructure strategists

What changes next

If Nvidia's China H200 restart holds and volume caps are raised, Chinese cloud providers will accelerate training workloads that have been on hold since mid-2025, intensifying competitive pressure on US AI labs.

Chips, memory, and cloud: Asia's AI infrastructure moment is accelerating fast

Three stories dominated the AI news Asia beat on 19 March 2026, and they share a single throughline: the physical infrastructure underpinning the global AI build-out is being negotiated, manufactured, and deployed with Asia-Pacific firmly at the centre. Nvidia is back in China. Micron is printing money. And Singapore just gained another serious cloud contender.

By The Numbers

- US$23.86 billion: Micron's fiscal Q2 2026 revenue, nearly three times the year-ago figure and well above the US$20.07 billion analyst consensus
- 160%+: Year-on-year growth in Micron's cloud memory revenue, which reached US$7.75 billion in a single quarter
- 479%: Nebius's year-on-year revenue growth in 2025, the year before it opened its Asia-Pacific headquarters in Singapore
- US$20 billion+: Nebius's current contract backlog, including multi-year infrastructure deals with Microsoft and Meta
- ~10 months: The length of Nvidia's freeze on advanced chip supplies to China before the H200 restart announced at GTC 2026

Nvidia Breaks the China Chip Freeze

At [Nvidia's GTC 2026 conference](http://www.techmeme.com/260314/p12#a260314p12), CEO Jensen Huang confirmed the company is restarting manufacturing of its H200 processors for shipment to China. The announcement ends a roughly ten-month freeze on advanced chip supplies to the world's second-largest economy, a period during which Chinese AI developers were left scrambling for domestic alternatives or older, less capable silicon. Huang confirmed that Nvidia has received purchase orders from multiple Chinese customers and that the supply chain is actively ramping.

The H200 shipments will be subject to a 25 per cent US duty and a government inspection regime. Officials are reportedly considering a cap of 75,000 chips per customer and up to one million processors in total. Nvidia's top-tier Blackwell and Rubin architectures remain firmly off-limits for Chinese buyers under current export control rules.

"China was once responsible for roughly a quarter of Nvidia's total revenue. Its return as a buyer reshapes the competitive calculus for the entire Asia-Pacific semiconductor supply chain."

The implications radiate outward across the region. Korean and Taiwanese chip packaging and memory suppliers stand to benefit from renewed H200 production volume. Chinese cloud providers, many of which have been rationing compute capacity since mid-2025, finally gain access to training-grade silicon. The geopolitical détente, if it holds, could also shift the calculus for US-listed chipmakers weighing their long-term exposure to Chinese enterprise customers.

This is not a full normalisation: the chip caps, the duty burden, and the inspection regime all signal that Washington retains meaningful leverage. But for the Asian semiconductor ecosystem, even a partial reopening of the China market is significant news.

For broader context on how Chinese AI firms have been navigating the supply crunch, our coverage of [how Chinese AI models have come to lead global token rankings](/news/chinese-ai-models-now-lead-global-token-rankings) despite hardware constraints is worth your time.

[Image: Nvidia H200 chips destined for the Asia-Pacific market, restarting after a ten-month freeze.]

Micron's Memory Supercycle Is Printing Money

If the Nvidia story is about geopolitics, the Micron story is about structural economics. The company's fiscal Q2 2026 results were, by any measure, extraordinary. Revenue of US$23.86 billion came in at roughly three times the year-ago figure and beat analyst expectations by nearly US$4 billion. Earnings per share of US$12.20 demolished the US$9.31 consensus. Cloud memory revenue alone surged more than 160 per cent to US$7.75 billion, driven by insatiable demand for the high-bandwidth memory used inside AI accelerators.

Micron's entire 2026 HBM production capacity is already sold out, and CEO Sanjay Mehrotra has pointed to structural supply constraints that show no sign of easing before 2028 at the earliest.

Micron guided Q3 revenue to roughly US$33.5 billion, implying year-on-year growth above 200 per cent. That is not a typo. The AI memory market has entered a supercycle that is compressing what would normally be years of incremental growth into individual quarters.
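For readers who want to sanity-check the growth arithmetic, here is a quick back-of-the-envelope sketch. The year-ago figures below are implied by the multiples stated above, not drawn from Micron's filings:

```python
# Back-of-the-envelope check on the growth figures quoted above.
# All dollar amounts are in US$ billions; the year-ago numbers are
# implied by the reported multiples, not taken from Micron's filings.

q2_2026_revenue = 23.86   # reported fiscal Q2 2026 revenue
q2_multiple = 3.0         # "roughly three times the year-ago figure"
implied_q2_2025 = q2_2026_revenue / q2_multiple

q3_2026_guidance = 33.5   # fiscal Q3 2026 revenue guidance
min_yoy_growth = 2.0      # "above 200 per cent" year-on-year growth
max_implied_q3_2025 = q3_2026_guidance / (1 + min_yoy_growth)

print(f"Implied fiscal Q2 2025 revenue: ~US${implied_q2_2025:.1f}B")
print(f"Fiscal Q3 2025 revenue implied to be below ~US${max_implied_q3_2025:.1f}B")
```

In other words, the guidance only holds if Micron's year-ago Q3 revenue was under roughly US$11 billion, which is consistent with the reported Q2 multiple.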

What This Means for Asia's Memory Giants

South Korea's SK Hynix and Samsung are locked in a three-way race with Micron for HBM market share, and both are aggressively expanding fab capacity across Korea, Japan and China. With HBM shortages expected to persist until 2028 or 2029, Asian memory manufacturers and their equipment suppliers are looking at a prolonged earnings tailwind that will reshape balance sheets across the sector.

- SK Hynix currently holds the leading position in HBM3E supply to Nvidia
- Samsung is investing heavily to close the gap and secure a larger share of the Nvidia supply chain
- Equipment suppliers in Japan and South Korea are benefiting from accelerated fab build-out timelines
- The memory shortage is compounding the AI chip access problem for smaller players in Southeast Asia and India that lack the procurement scale of hyperscalers

This is precisely the kind of structural shift that makes AI infrastructure such a dominant theme across Asia-Pacific markets in 2026. The memory supercycle intersects with the data centre construction boom, the sovereign AI ambitions of multiple governments, and the race to attract hyperscaler investment, all of which we track regularly in our [3 Before 9: March 18, 2026](/news/3-before-9-2026-03-18) and [3 Before 9: March 17, 2026](/news/3-before-9-2026-03-17) briefings.

The Asia-Pacific Picture: Nebius Plants Its Flag in Singapore

The third story of the day is the most locally significant for Southeast Asian enterprise buyers. Nebius, the Nasdaq-listed AI cloud provider, has appointed John Haarer as general manager for Asia-Pacific and Japan and established its regional headquarters in Singapore. Haarer brings experience from Cloudflare and Twilio, both companies with strong Asia expansion track records, and will lead commercial growth across Singapore, Japan, South Korea and India.

Nebius's [Asia-Pacific expansion announcement](https://nebius.com/newsroom/nebius-expands-into-asia-pacific-region-to-support-rapid-global-growth) comes off the back of 479 per cent year-on-year revenue growth in 2025, a contract backlog exceeding US$20 billion, and multi-year infrastructure deals with both Microsoft and Meta. The company is targeting annualised run-rate revenue of US$7 billion to US$9 billion by the end of 2026.

The move underscores Singapore's continued rise as the preferred regional hub for AI infrastructure. The city-state's regulatory clarity, data connectivity, and access to regional talent make it the default choice for cloud and compute providers establishing an Asia-Pacific presence. For enterprise buyers across the region, Nebius's arrival introduces a well-funded, GPU-native competitor into a market currently dominated by AWS, Azure and Google Cloud. More competition means more pricing pressure, and that is unambiguously good for buyers.

The broader Southeast Asian AI infrastructure story also touches on regulatory developments worth watching. [Vietnam's enforcement of Southeast Asia's first AI law](/news/vietnam-enforces-southeast-asias-first-ai-law) is already shaping how cloud providers think about data residency and compliance across the region. Nebius will need to navigate that landscape as it expands beyond Singapore into markets with more complex regulatory environments.

The Week in AI News Asia: A Pattern Emerges

Taken together, these three stories reveal something important about the current moment in AI news Asia. The infrastructure layer (chips, memory, cloud compute) is the battleground that will determine which companies and which countries lead the next phase of AI development. The software and model layer gets most of the headlines, but it runs on physical hardware that is in genuinely short supply.

| Story | Key Player | Asia-Pacific Impact |
| --- | --- | --- |
| Nvidia H200 China restart | Nvidia / Chinese cloud providers | Korean and Taiwanese supply chain benefits; Chinese AI developers regain access to training silicon |
| Micron Q2 supercycle results | Micron / SK Hynix / Samsung | Prolonged earnings tailwind for Asian memory manufacturers; HBM shortage persists through 2028+ |
| Nebius Singapore HQ launch | Nebius / Singapore ecosystem | New GPU-native competitor enters Asia-Pacific cloud market; pricing pressure on incumbents |

Asia is not merely a consumer of AI infrastructure: it is one of its primary manufacturers and, increasingly, one of its most sophisticated deployers. The AI news Asia story in 2026 is a story about leverage, supply chains, and the geopolitics of compute. If you are following this space, our [3 Before 9: March 13, 2026](/news/3-before-9-2026-03-13) briefing covers several adjacent developments worth pairing with today's stories.

Frequently Asked Questions

Why did Nvidia stop selling H200 chips to China in the first place?

Nvidia paused H200 shipments to China in mid-2025 following tightened US export controls on advanced AI chips. The controls were designed to limit China's access to hardware capable of training frontier AI models. The restart announced at GTC 2026 suggests a partial relaxation, though caps on total volume and individual customer purchases remain in place, and Nvidia's newest Blackwell and Rubin architectures are still banned for Chinese buyers.

What is high-bandwidth memory (HBM) and why does it matter for AI?

High-bandwidth memory is a type of DRAM that stacks multiple chips vertically to dramatically increase data transfer speeds. It is essential for AI accelerators like Nvidia's H100 and H200 because large language model training and inference require moving enormous amounts of data between the processor and memory at very high speed. Micron, SK Hynix and Samsung are the three producers capable of manufacturing HBM at scale, which is why the current shortage has such outsized consequences for the global AI industry.

Why is Singapore the preferred location for AI cloud companies expanding into Asia-Pacific?

Singapore combines political stability, a strong rule-of-law environment, world-class subsea cable connectivity, and proximity to the fast-growing markets of Southeast Asia. Its government has also been proactive in attracting data centre and AI infrastructure investment. Nebius is the latest in a long line of cloud providers, including AWS, Google and Microsoft, that have chosen Singapore as their Asia-Pacific regional base, though rising power and land costs are prompting some to look at secondary locations in Malaysia and Indonesia.
The AIinASIA View: The confluence of Nvidia's China restart, Micron's memory supercycle, and Nebius's Singapore expansion in a single news cycle is not coincidence. It is evidence that 2026 is the year Asia-Pacific moves from being a participant in the AI infrastructure race to being one of its defining arenas. Any business in the region that is not actively thinking about compute access, memory costs, and cloud provider selection is already behind.

The AI infrastructure story is moving faster than most boardrooms can track. So which of these three developments will most directly affect how your organisation plans its AI strategy in the next 12 months? Drop your take in the comments below.

That's today's 3-Before-9.

Explore more at AIinASIA.com or share signals with us.
