

OpenAI's New ChatGPT Image Policy: Is AI Moderation Becoming Too Lax?

OpenAI dramatically loosens ChatGPT's image generation policies, allowing public figures and racial traits while sparking global debate on AI content control.

Intelligence Desk • 8 min read

AI Snapshot

The TL;DR: what matters, fast.

OpenAI ended blanket bans on generating images of public figures and racial traits in ChatGPT

700 million images generated since March 2025 upgrade with 130 million active users participating

New 'precise moderation' focuses on preventing harm rather than wholesale content blocking


OpenAI's Bold Policy Shift Sparks Global Debate on AI Content Control

OpenAI has dramatically revised ChatGPT's image generation policies, ending blanket bans on creating images of public figures and allowing requests based on physical and racial traits. The move represents a fundamental shift from restrictive censorship to what the company calls "nuanced moderation" focused on preventing real-world harm rather than wholesale content blocking.

The policy change allows users to generate images featuring recognisable personalities and includes previously controversial symbols when used in educational contexts. This marks a significant departure from the cautious approach that characterised early AI image generation tools, signalling a broader industry trend towards more flexible content moderation frameworks.

What the Numbers Tell Us About ChatGPT's Visual Revolution

Since implementing the late-March image upgrade, user engagement with ChatGPT's visual capabilities has exploded beyond all projections. The platform now processes an unprecedented volume of image generation requests, with users embracing the expanded creative possibilities despite ongoing debates about appropriate AI content boundaries.

The surge in usage reflects growing confidence in AI-generated visual content across educational, creative, and professional contexts. However, it also raises important questions about the balance between creative freedom and responsible AI deployment, particularly in sensitive cultural and political contexts across Asia.

By The Numbers

  • ChatGPT users generated 700 million images since the March 2025 upgrade, with 130 million active users participating
  • The platform processes 2.5 billion prompts daily across 6.16 billion monthly visits, maintaining 80%+ market share in AI search
  • Weekly active users have reached 800 million in 2026, contributing to OpenAI's projected $29.4 billion revenue by year-end
  • OpenAI currently generates $10 billion in annual recurring revenue, demonstrating the commercial viability of relaxed content policies
  • Asia-Pacific users now comprise 40% of image generation requests, with particular growth in educational and cultural content creation

The Philosophy Behind Precise Moderation

"We are shifting from blanket refusals in sensitive areas to a more precise approach which focuses on preventing real-world harm. The goal is to embrace humility, recognising how much we don't know and positioning ourselves to adapt as we learn."
Joanne Jang, Model Behaviour Lead, OpenAI

The new approach allows previously prohibited requests such as generating images with specific racial features, including prompts like "a different Asian," whilst maintaining safeguards against misuse. This represents a calculated risk that prioritises user agency over precautionary restrictions, aligning with broader conversations about AI censorship and creative freedom.

OpenAI's policy evolution reflects growing industry confidence in AI systems' ability to distinguish between legitimate creative use and potentially harmful applications. The company argues that overly restrictive policies often stifle legitimate educational, artistic, and cultural expression without meaningfully preventing determined bad actors from finding workarounds.

Industry Impact and Competitive Dynamics

The policy shift has immediate implications for competitors and users considering alternatives like Claude's growing user base. As AI image generation becomes increasingly sophisticated, with tools like Sora adding reusable characters and video stitching capabilities, content moderation policies become key differentiators in attracting and retaining users.

"Our GPUs are melting. ChatGPT added one million users in the last hour. Please chill while we apply rate limits."
Sam Altman, CEO, OpenAI

The dramatic user surge following the policy announcement demonstrates pent-up demand for more flexible AI image generation. However, it also highlights infrastructure challenges as companies balance ambitious feature rollouts with system capacity limitations.

| Content Category | Previous Policy | Current Policy | Key Changes |
| --- | --- | --- | --- |
| Public Figures | Complete ban | Contextual approval | Educational and creative use allowed |
| Racial Features | Blanket refusal | Specific requests permitted | Cultural representation enabled |
| Historical Symbols | Universal prohibition | Educational context only | Academic and historical study supported |
| Political Content | Restrictive guidelines | Nuanced evaluation | Case-by-case assessment implemented |

Navigating the Challenges Ahead

The relaxed policies create new challenges for content authenticity and misinformation prevention. As detecting AI-generated content becomes increasingly difficult, the responsibility shifts from generation-stage restrictions to post-creation verification and labelling systems.

Educational institutions and media organisations are adapting their guidelines to accommodate AI-generated visual content whilst maintaining editorial integrity. The policy change particularly impacts Asia-Pacific markets, where cultural sensitivity around representation requires careful balance between creative freedom and respectful portrayal.

Key considerations for organisations adopting these tools include:

  • Developing internal guidelines for appropriate AI image use in professional contexts
  • Implementing verification processes for AI-generated content in public communications
  • Training staff on the capabilities and limitations of relaxed content moderation policies
  • Establishing clear attribution standards for AI-assisted creative work
  • Creating feedback mechanisms to identify and address problematic generated content

Regional Implications for Asian Markets

The policy changes hold particular significance for Asian users, where cultural nuances around representation and historical context require sensitive handling. OpenAI's decision to allow specific racial feature requests could enhance representation in educational materials and creative projects, whilst also raising concerns about potential stereotyping or misuse.

Regional competitors are closely watching OpenAI's approach, with some Chinese AI companies claiming superior capabilities whilst maintaining different content moderation philosophies. The success of OpenAI's nuanced approach could influence regulatory discussions across the region about appropriate AI governance frameworks.

How does the new policy affect educational use of AI images?

Educational contexts now enjoy broader permissions for generating historical figures, cultural representations, and previously restricted symbols. This enables more comprehensive visual learning materials whilst maintaining safeguards against inappropriate content creation in academic settings.

What safeguards prevent misuse of the relaxed image generation policies?

OpenAI maintains contextual evaluation systems, user reporting mechanisms, and iterative policy refinement based on real-world usage patterns. The company emphasises that precise moderation, rather than blanket restrictions, better addresses genuine harmful content whilst preserving legitimate use cases.

Will other AI companies adopt similar content moderation approaches?

Industry trends suggest movement towards more nuanced policies, but implementation varies significantly. Companies must balance user demand for creative freedom with regulatory compliance, brand safety concerns, and technical capabilities for contextual content evaluation across different markets.

How might regulators respond to these policy changes?

Regulatory responses will likely vary by jurisdiction, with some welcoming the shift towards nuanced moderation whilst others may push for stricter oversight. The policy's success in preventing harmful content whilst enabling legitimate use will significantly influence future regulatory frameworks for AI-generated content.

What impact will this have on AI image generation competition?

The policy change intensifies competitive pressure on rivals to offer similarly flexible content generation capabilities. Companies maintaining restrictive policies may lose users to platforms offering greater creative freedom, potentially accelerating industry-wide policy liberalisation where technically and legally feasible.

The AIinASIA View: OpenAI's policy shift represents a calculated bet that users can handle greater creative freedom responsibly. We believe this approach, whilst risky, better serves the diverse needs of Asian markets where cultural representation and educational content require nuanced handling. However, the company must demonstrate that its "precise moderation" can effectively prevent misuse whilst preserving legitimate use cases. The policy's success will likely determine whether competitors follow suit or maintain more restrictive approaches. For users, this creates both opportunities and responsibilities to use AI image generation thoughtfully and ethically.

The implications of OpenAI's policy evolution extend far beyond technical specifications, touching fundamental questions about creativity, representation, and responsible AI deployment. As these tools become integral to educational, professional, and creative workflows, the balance between freedom and safety will continue evolving based on real-world outcomes and user feedback.

What's your experience with AI image generation under these new policies? Have you found the relaxed restrictions beneficial for your creative or educational projects, or do you have concerns about potential misuse? Drop your take in the comments below.



This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the AI Policy Tracker learning path.

Latest Comments (3)

Chen Ming (@chenming) • 8 February 2026

actually, this "nuanced approach" is something Chinese AI companies have been doing for a while now. Baidu's Ernie Bot, Tencent's Hunyuan, they've all had to navigate these lines for public figures and symbols. the education context for controversial symbols is pretty standard here too. it's not a new concept, just new for OpenAI.

Arjun Mehta (@arjunm) • 1 June 2025

"nuanced moderation" actually means more complex filters. wonder what the actual pipeline for checking "educational context" looks like. probably just keyword spotting.

Harry Wilson (@harryw) • 4 May 2025

It's interesting how OpenAI is framing this as a move towards "nuanced moderation." From a computational ethics standpoint, this just seems like shifting the burden of contextual interpretation onto the user, or worse, onto the model itself. How do they actually define "strictly educational contexts" algorithmically for "controversial symbols"? That seems like a massive edge case problem.
