Hong Kong's AI Summit Reveals Asia's Competing Visions for Governance
The 2026 World Internet Conference (WIC) Asia-Pacific Summit wrapped up in Hong Kong this week with a clear message: Asia is no longer waiting for the West to define what responsible AI looks like. Under the theme "Digital and Intelligent Empowerment for Innovative Development," the summit surfaced sharp tensions between innovation ambition and regulatory caution, and revealed how differently Asian nations are approaching the same challenge.
Intelligent Agents Are the New Frontier, and the New Liability
The most technically substantive sessions centred on AI agents: autonomous systems that can book flights, execute financial transactions, draft contracts, and interact with third-party APIs without human approval of individual actions. China's Cyberspace Administration (CAC) Director General Gao Lin used the summit's dedicated AI security forum to announce plans for mandatory national standards governing safe intelligent agent applications, building on the CAC's AI security governance frameworks, versions 1.0 (2024) and 2.0 (2025).
As covered in our analysis of Singapore's agentic AI governance framework, the governance gap between what agents can do and what regulators have sanctioned is widening fast. The WIC summit was the first major regional forum to address this directly.
Tiered Oversight and Sandbox Experiments
Gao outlined plans for tiered management of large language models alongside the agent standards. Under China's comprehensive AI regulation, in force since 1 January 2026, LLMs above 10 billion parameters require algorithm registration. The new direction extends this to sandbox regulation: controlled environments where new agent capabilities can be tested before broader deployment.
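For teams tracking this internally, the threshold rule lends itself to a simple automated check. The sketch below is purely illustrative: the function name and structure are our own, not part of any CAC specification, and the only grounded detail is the 10-billion-parameter registration threshold described above.

```python
# Hypothetical compliance-tool sketch. Only the 10B-parameter threshold
# comes from the reported rule; all names here are illustrative.

REGISTRATION_THRESHOLD = 10_000_000_000  # models above 10B parameters


def requires_registration(param_count: int) -> bool:
    """Return True if a model of this size would fall under the
    algorithm-registration requirement ("above 10 billion parameters")."""
    return param_count > REGISTRATION_THRESHOLD


# Example: a 7B model would not trigger registration; a 70B model would.
print(requires_registration(7_000_000_000))
print(requires_registration(70_000_000_000))
```

Note the strict inequality: the reported rule covers models *above* 10 billion parameters, so a model at exactly the threshold would not be captured under this reading.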
Gao also signalled ambitions for international cooperation, framing China's evolving AI security architecture as a potential contribution to global standards rather than purely domestic regulation. That framing matters. As Taiwan's AI Basic Act debates a principles-based approach and South Korea works through its AI Basic Act, China is positioning its prescriptive, compliance-heavy model as an alternative template, one that some developing economies may find administratively familiar.
By The Numbers
- China's AI security governance framework v1.0 was released in 2024; v2.0 in 2025, covering risk assessment and tiered management for large language models
- LLMs above 10 billion parameters require CAC algorithm registration since 1 January 2026 under China's comprehensive AI regulation
- The WIC Asia-Pacific Summit drew representatives from government, industry, and academia across 12+ countries
- SuperAI 2026, Asia's largest AI conference, returns to Singapore's Marina Bay Sands on June 10-11, with confirmed participation from OpenAI, Google, and AWS
- China's AI industry is targeting 12.6 trillion yuan in output by 2030, with intelligent agents identified as a key driver
> "Intelligent agents are reshaping industries across Asia-Pacific at a pace that demands proactive governance, not reactive patchwork."

> "AI's rapid development means the ideological attributes of large language models must be carefully defined. We must advance AI security frameworks systematically."
The Governance Divergence Is Becoming a Feature, Not a Bug
What the WIC summit made vivid is that Asian AI governance is not converging on a single model. China is building a highly prescriptive, state-supervised system. Singapore is developing a risk-tiered, principles-based framework that emphasises business flexibility. Taiwan and South Korea are borrowing selectively from the EU AI Act. Vietnam's AI law, just 30 days in, is still being interpreted by companies on the ground.
This divergence creates real friction for multinationals operating across the region. A model compliant in Singapore may require significant modification for China's filing requirements. Training data sourcing rules differ substantially between jurisdictions.
| Country | Governance Approach | Key Focus |
|---|---|---|
| China | Prescriptive, state-supervised | Mandatory LLM filing, agent sandboxes, ideological compliance |
| Singapore | Risk-tiered, principles-based | Business flexibility, model governance frameworks |
| Taiwan | Principles-based with sector rules | Balancing innovation with civil liberties |
| South Korea | EU-influenced with local amendments | High-risk system classification, liability rules |
| Vietnam | Comprehensive AI law (new in 2026) | Data localisation, public sector AI deployment rules |
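For multinationals, the practical consequence of this divergence is that compliance obligations become a union across target markets, not a single checklist. The sketch below encodes the table above as a lookup structure; the jurisdiction codes, field names, and requirement labels are our own illustrative assumptions, not a legal reference.

```python
# Illustrative sketch: the governance table as a compliance lookup.
# All identifiers and requirement labels here are assumptions for
# illustration, not legal or regulatory terms of art.
from dataclasses import dataclass, field


@dataclass
class JurisdictionProfile:
    approach: str
    key_requirements: list[str] = field(default_factory=list)


PROFILES = {
    "CN": JurisdictionProfile(
        approach="prescriptive, state-supervised",
        key_requirements=["LLM algorithm filing", "agent sandbox rules"],
    ),
    "SG": JurisdictionProfile(
        approach="risk-tiered, principles-based",
        key_requirements=["model governance framework"],
    ),
    "VN": JurisdictionProfile(
        approach="comprehensive AI law (2026)",
        key_requirements=["data localisation"],
    ),
}


def requirements_for(markets: list[str]) -> set[str]:
    """Union of key requirements across the markets a deployment targets."""
    return {req for m in markets for req in PROFILES[m].key_requirements}


# A deployment targeting both China and Singapore inherits both regimes.
print(sorted(requirements_for(["CN", "SG"])))
```

The point of the union operation is the friction described above: adding one market to a deployment can add an entire compliance regime, not a marginal line item.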
What Comes Next
The WIC summit is not a binding forum; no agreements were signed and no international frameworks were ratified. But as a read on where the region's most influential governments are placing their policy bets, it was instructive. China's push toward mandatory agent standards could set a de facto compliance floor for any company seeking market access. For the rest of Asia, the summit served as a reminder that the AI governance conversation is accelerating and that waiting for global consensus is not a viable strategy.
Alongside the Hong Kong summit, Asia's calendar is filling up. India's IndiaAI Kosh GPU programme continues to expand compute access, while China's own vertical AI strategy is drawing attention from policymakers across the region.
Frequently Asked Questions
What was the WIC Asia-Pacific Summit 2026 about?
The 2026 World Internet Conference Asia-Pacific Summit took place in Hong Kong under the theme "Digital and Intelligent Empowerment for Innovative Development." It brought together government officials, technology companies, and academics to discuss AI governance, agent security standards, digital finance, and pathways for international cooperation on AI regulation.
What are China's new mandatory AI agent standards?
China's CAC Director General Gao Lin announced plans for mandatory national standards governing intelligent agent applications, building on the existing AI security governance frameworks and introducing sandbox regulation for agentic AI systems before broader deployment.
How does China's AI governance differ from Singapore's approach?
China follows a prescriptive, state-supervised model with mandatory algorithm filings and compliance requirements for large language models. Singapore takes a principles-based, risk-tiered approach that prioritises business flexibility while addressing high-risk AI applications. Both models are gaining followers among other Asian nations.
Why does AI governance divergence matter for businesses operating in Asia?
Companies face different regulatory requirements across jurisdictions. A model compliant in Singapore may need modification for China's filing requirements. Training data rules, high-risk application definitions, and agent oversight requirements all differ between countries, creating real compliance complexity for regional operations.
When is the next major AI conference in Asia?
SuperAI 2026, described as Asia's largest AI conference, is scheduled for June 10-11, 2026, at Marina Bay Sands in Singapore, with confirmed participation from OpenAI, Google, AWS, Cerebras, and Samsung Next.
Is your organisation already mapping its AI compliance obligations jurisdiction by jurisdiction across Asia? Drop your take in the comments below.