Sam Altman Wants to Tax His Own AI. Asia Should
Updated Apr 26, 2026


OpenAI's CEO published a blueprint to tax, regulate, and redistribute AI wealth.

I have been covering AI in Asia for long enough to recognise a power move when I see one. And Sam Altman's 13-page blueprint, "Industrial Policy for the Intelligence Age: Ideas to Keep People First," is one of the most brazen, fascinating, and genuinely important power moves the tech industry has produced. Here is the pitch: the CEO of the world's most valuable AI company is telling governments exactly how to tax, regulate, and redistribute the wealth his own technology will generate. No tech titan has ever done this. The question every policymaker in Asia should be asking is not whether Altman means it, but whether they can afford to ignore it.

The Audacity of Self-Regulation (and Why I Think It Matters Anyway)

Let me be direct about what Sam Altman has done here. He has published a detailed policy document proposing a national public wealth fund seeded by AI companies, new taxes on automated labour, a four-day workweek pilot at full pay, and containment playbooks for rogue superintelligence. He released it through OpenAI, a company valued at over $300 billion, backed by SoftBank's $40 billion bridge loan, and currently racing toward artificial general intelligence at a pace that makes regulators visibly nervous.

The sceptics have already sharpened their knives, and I understand why. In 2023, Altman sat before the U.S. Senate and called for AI licensing. By 2025, he had pivoted to "light-touch" legislation and a 10-year moratorium on state AI regulations. Gary Marcus, author of Taming Silicon Valley, put it bluntly:

"In 2023, Altman said creators deserve control over how their creations are used. By 2025, he was dismissing IP theft claims entirely. That's a far cry from consistency."

Fair point. But here is what I think the critics are missing: the document itself is substantive. Dismissing it because Altman's track record on regulation is inconsistent would be like refusing to read a fire safety manual because the author once played with matches. The proposals exist. They are detailed. And for Asia, where the AI governance conversation is happening right now, they represent the most concrete framework any major AI company has put on the table.

Four Proposals That Should Make Asian Policymakers Uncomfortable

The blueprint rests on four pillars, and each one carries implications for the region I cover every day.

First, the public wealth fund. OpenAI proposes giving every citizen a direct stake in AI-driven economic growth through a nationally managed fund, seeded in part by AI companies themselves. The fund would "invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI." This is, in essence, a sovereign wealth mechanism for the intelligence age. Singapore already runs one of the world's most sophisticated sovereign wealth ecosystems through GIC and Temasek, but neither has a mandate specifically tied to AI-generated returns. I think that gap will become harder to justify as AI's share of GDP accelerates.

Second, the tax shift. Altman wants to move the tax base away from payroll and towards corporate income and capital gains: "taxes related to automated labour." The logic is uncomfortable but sound. As AI hollows out wage-based employment, the revenue streams funding social safety nets will evaporate. For Asian economies where manufacturing and services employment remain the backbone (the Philippines, Vietnam, and Indonesia especially), this is not an abstract policy question. It is an existential one.

Third, the four-day workweek. The blueprint frames 32-hour workweeks at full pay as an "efficiency dividend," converting AI productivity gains into time for workers rather than profit for shareholders. I find this the most politically interesting proposal. Japan has been experimenting with shorter workweeks for years. South Korea is wrestling with overwork culture. If a Silicon Valley CEO is willing to advocate for this publicly, Asian labour ministers should be asking why they are not.

Fourth, the containment framework. The blueprint proposes safety nets that trigger automatically when AI-driven job displacement crosses defined thresholds, plus a "Right to AI" that positions access as foundational infrastructure. This is where Altman's framing gets closest to what Asia actually needs: not just rules about what AI cannot do, but guarantees about what citizens deserve from it.

Asia's Governance Patchwork Has a Wealth-Sized Hole

I have written extensively about AI regulation across this region, and the pattern is clear: Asian governments are focused on safety, labelling, and algorithmic transparency. They are not focused on who gets rich.

South Korea flipped the enforcement switch on its AI Basic Act in January 2026, the most comprehensive binding AI law in Asia, complete with extraterritorial reach. China continues its state-controlled governance model, mandating pre-approval of algorithms and strict content labelling. Singapore favours voluntary, sector-specific frameworks. Japan promotes innovation through soft law with no penalties. India remains in guidelines-only territory.

Country     | Regulatory Style              | Wealth Redistribution Mechanism      | Altman Alignment
South Korea | Binding comprehensive statute | No specific AI wealth fund           | Closest on governance rigour
China       | State-controlled, centralised | Algorithm tax via state enterprises  | Diverges on market approach
Singapore   | Voluntary, sector-specific    | SkillsFuture reskilling investment   | Aligned on innovation-first ethos
Japan       | Soft law, innovation-first    | No dedicated AI redistribution       | Aligned on light regulation
India       | Guidelines-based, flexible    | Digital public infrastructure focus  | Partially aligned on access framing

What none of these frameworks address is the question Altman is forcing into the open: when AI generates trillions in value, who gets the cheque?

Singapore's Budget 2026 allocated resources for AI upskilling, but a sovereign AI wealth fund remains uncharted territory. South Korea's AI Basic Act governs risk and transparency but says nothing about redistributing AI profits. China's approach redistributes through state-owned enterprise channels, but that is a feature of its political system, not a transferable model. India's digital public infrastructure ambitions are impressive but do not yet extend to AI-specific wealth mechanisms.

OpenAI's expansion into Asia, from its disaster response work in Bangkok to its education partnerships across the region, means Altman is not theorising about Asian markets. He is actively building inside them. A blueprint that shapes U.S. regulation will shape the terms on which American AI companies operate throughout the Asia-Pacific, and Asian governments that have not articulated their own position on AI wealth will find themselves playing by someone else's rules.

The Uncomfortable Truth About Motive

I want to be honest about something: I do not think Sam Altman published this document out of pure altruism. The Brookings Institution noted the contradiction plainly: Altman "called for AI regulation in a 2023 congressional hearing" but by 2025 "said everything was fine in his sector and there was no need for regulation." Brad Smith of Microsoft and Lisa Su of AMD both supported NIST standards for AI at the same hearing where Altman rejected them, saying: "I don't think we need it."

The April 2026 blueprint represents yet another pivot. Critics, including experts at TechPolicy Press, frame it as corporate strategy designed to position OpenAI as the responsible actor in the room while the company sprints toward superintelligence.

But here is where I part ways with the pure sceptics: motive matters less than mechanism. Even if Altman's blueprint is partly self-serving, even if it is designed to give OpenAI a seat at the regulatory table, the specific proposals are more detailed and more actionable than anything coming out of Brussels, Washington, or for that matter, any Asian capital. A flawed conversation starter is still better than no conversation at all. And right now, on the question of how AI wealth should be distributed, most of Asia is not even in the room.

As Altman himself puts it: "We want to put these things into the conversation. Some will be good. Some will be bad. But we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness."

On that point, at least, I think Altman is right.

The AIinASIA View: Altman's blueprint is part vision, part positioning, and entirely without precedent. Whether you trust his motives or not, the document forces a conversation Asia cannot afford to ignore. South Korea is legislating. China is enforcing. Singapore is testing. But not one Asian government has published a framework for redistributing AI-generated wealth. The question is no longer whether AI needs governance; it is who writes the rules, and who profits from them. For Asia's policymakers, the answer had better not be "Silicon Valley, by default."

Closing Thoughts

I have covered enough AI policy summits, regulatory launches, and corporate pledges across Asia to know the difference between a document that changes the conversation and one that simply fills a news cycle. Altman's blueprint, for all its contradictions and strategic convenience, belongs in the first category. It is the most detailed self-regulation framework any tech CEO has ever published, and it lands at the exact moment Asia's governments are deciding what their own AI rulebooks will look like. My advice to every policymaker, business leader, and AI practitioner reading this from Singapore to Seoul to Mumbai: read the 13 pages. Disagree with them. Improve on them. But do not ignore them. The wealth question is coming whether Asia is ready or not. Drop your take in the comments below.