Frequently Asked Questions
Why did Nvidia stop selling H200 chips to China in the first place?
Nvidia paused H200 shipments to China in mid-2025 following tightened US export controls on advanced AI chips. The controls were designed to limit China's access to hardware capable of training frontier AI models. The restart announced at GTC 2026 suggests a partial relaxation, though caps on total volume and individual customer purchases remain in place, and Nvidia's newest Blackwell and Rubin architectures are still banned for Chinese buyers.

What is high-bandwidth memory (HBM) and why does it matter for AI?
High-bandwidth memory is a type of DRAM that stacks multiple chips vertically to dramatically increase data transfer speeds. It is essential for AI accelerators like Nvidia's H100 and H200 because large language model training and inference require moving enormous amounts of data between the processor and memory at very high speed. Micron, SK Hynix and Samsung are the three producers capable of manufacturing HBM at scale, which is why the current shortage has such outsized consequences for the global AI industry.

Why is Singapore the preferred location for AI cloud companies expanding into Asia-Pacific?
Singapore combines political stability, a strong rule-of-law environment, world-class subsea cable connectivity, and proximity to the fast-growing markets of Southeast Asia. Its government has also been proactive in attracting data centre and AI infrastructure investment. Nebius is the latest in a long line of cloud providers, including AWS, Google and Microsoft, that have chosen Singapore as their Asia-Pacific regional base, though rising power and land costs are prompting some to look at secondary locations in Malaysia and Indonesia.
The AIinASIA View: The confluence of Nvidia's China restart, Micron's memory supercycle, and Nebius's Singapore expansion in a single news cycle is not a coincidence; it is evidence that 2026 is the year Asia-Pacific moves from being a participant in the AI infrastructure race to being one of its defining arenas. Any business in the region that is not actively thinking about compute access, memory costs, and cloud provider selection is already behind. The AI infrastructure story is moving faster than most boardrooms can track, so which of these three developments will most directly affect how your organisation plans its AI strategy in the next 12 months? Drop your take in the comments below.