AI in ASIA

OpenAI's New ChatGPT Image Policy: Is AI Moderation Becoming Too Lax?

ChatGPT now generates previously banned images of public figures and symbols. Is this freedom overdue or dangerously permissive?

Anonymous · 1 min read

- ChatGPT can now generate images of public figures, previously disallowed.
- Requests related to physical and racial traits are now accepted.
- Controversial symbols are permitted in strictly educational contexts.
- OpenAI argues for nuanced moderation rather than blanket censorship.
- The move aligns with an industry trend towards relaxed content moderation policies.

ChatGPT’s new visual prowess

Moving beyond blanket bans

“We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility—recognising how much we don’t know, and positioning ourselves to adapt as we learn.”

Exactly what’s allowed now?

OpenAI's latest policy update for ChatGPT's image generation capabilities marks a significant shift in its approach to content moderation. Previously, the AI rigidly disallowed the creation of images featuring public figures or those based on sensitive physical and racial traits. Now, the company is loosening these restrictions, allowing users to generate such images under specific conditions. This move comes as the debate around AI and censorship continues to evolve, with many arguing for a more nuanced approach than blanket bans.

The new policy also permits the inclusion of previously controversial symbols, provided they are used strictly within educational contexts. This nuanced approach aims to balance freedom of expression with the prevention of misuse, a challenge that many AI companies are grappling with. OpenAI states that this shift is driven by a desire to move away from "blanket refusals" towards a "more precise approach focused on preventing real-world harm."
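The context-gating described above can be pictured as a simple allow/refuse decision that depends on both the symbol and the surrounding request. The sketch below is a deliberately naive illustration of that idea using keyword spotting; the word lists, function name, and logic are all hypothetical and bear no relation to OpenAI's actual moderation pipeline.

```python
# Toy illustration of context-gated prompt moderation.
# SENSITIVE_SYMBOLS and EDUCATIONAL_CUES are invented examples,
# not OpenAI's actual lists or implementation.

SENSITIVE_SYMBOLS = {"swastika", "hammer and sickle"}
EDUCATIONAL_CUES = {"history", "textbook", "museum", "lesson", "documentary"}

def gate_prompt(prompt: str) -> str:
    """Return a moderation decision for an image-generation prompt."""
    text = prompt.lower()
    has_symbol = any(s in text for s in SENSITIVE_SYMBOLS)
    has_context = any(c in text for c in EDUCATIONAL_CUES)
    if not has_symbol:
        return "allow"
    # A sensitive symbol passes only when an educational cue is present.
    return "allow_with_context" if has_context else "refuse"

print(gate_prompt("a museum exhibit on the history of the swastika"))
print(gate_prompt("a swastika banner"))
```

In practice this kind of keyword spotting is far too brittle for production use (as commenters below note), which is part of why defining "strictly educational contexts" algorithmically is such a hard problem.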

Handling the hottest topics

This policy adjustment isn't happening in a vacuum. It aligns with a broader industry trend in which major tech players are re-evaluating their content moderation strategies. As AI models grow more sophisticated (OpenAI's Sora, for instance, now supports reusable "characters" and video stitching), the complexities of managing generated content multiply. The challenge lies in developing moderation systems flexible enough to allow legitimate use cases while still effectively curbing harmful content. Spotting AI-generated video is becoming increasingly difficult, which makes moderation at the generation stage crucial.

A strategic shift or political move?

Some critics argue that this relaxation of rules could lead to an increase in misuse, such as the spread of misinformation or the creation of deepfakes. However, OpenAI counters that its goal is to "embrace humility—recognising how much we don’t know, and positioning ourselves to adapt as we learn." This suggests an iterative approach to policy-making, where guidelines will be refined based on real-world usage and feedback. The company is likely seeking to avoid the pitfalls of overly restrictive policies that stifle creativity and legitimate use, while also trying to navigate the complex ethical landscape of AI with empathy.

What’s next for AI moderation?

The implications of OpenAI's new policy are far-reaching. It could influence how other AI developers, including Chinese labs releasing free models that claim to rival GPT-5, approach their own content moderation challenges. As AI capabilities continue to advance, the debate over what constitutes acceptable content and how it should be regulated will only intensify. This move by OpenAI signals a growing recognition that effective AI moderation requires a delicate balance between strict controls and flexible guidelines, constantly adapting to new challenges and societal norms.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


Latest Comments (3)

Chen Ming (@chenming) · 8 February 2026

actually, this "nuanced approach" is something Chinese AI companies have been doing for a while now. Baidu's Ernie Bot, Tencent's Hunyuan, they've all had to navigate these lines for public figures and symbols. the education context for controversial symbols is pretty standard here too. it's not a new concept, just new for OpenAI.

Arjun Mehta (@arjunm) · 1 June 2025

"nuanced moderation" actually means more complex filters. wonder what the actual pipeline for checking "educational context" looks like. probably just keyword spotting.

Harry Wilson (@harryw) · 4 May 2025

It's interesting how OpenAI is framing this as a move towards "nuanced moderation." From a computational ethics standpoint, this just seems like shifting the burden of contextual interpretation onto the user, or worse, onto the model itself. How do they actually define "strictly educational contexts" algorithmically for "controversial symbols"? That seems like a massive edge case problem.
