

    OpenAI's New ChatGPT Image Policy: Is AI Moderation Becoming Too Lax?

    ChatGPT now generates previously banned images of public figures and symbols. Is this freedom overdue or dangerously permissive?

Anonymous
1 min read · 30 March 2025

ChatGPT can now generate images of public figures, previously disallowed.
Requests related to physical and racial traits are now accepted.
Controversial symbols are permitted in strictly educational contexts.
OpenAI argues for nuanced moderation rather than blanket censorship.
The move aligns with industry trends towards relaxed content moderation policies.

Is AI Moderation Becoming Too Lax?

ChatGPT’s new visual prowess

OpenAI's latest policy update for ChatGPT's image generation marks a significant shift in its approach to content moderation. Previously, the model rigidly refused to create images featuring public figures or images based on sensitive physical and racial traits. Now the company is loosening those restrictions, allowing users to generate such images under specific conditions. The change comes as the debate around AI and censorship continues to evolve, with many arguing for a more nuanced approach than blanket bans.

Moving beyond blanket bans

“We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility—recognising how much we don’t know, and positioning ourselves to adapt as we learn.”

Exactly what’s allowed now?

The new policy also permits previously controversial symbols, provided they are used strictly within educational contexts. This nuanced approach aims to balance freedom of expression with the prevention of misuse, a challenge many AI companies are grappling with. OpenAI says the shift is driven by a desire to move away from "blanket refusals" towards a "more precise approach focused on preventing real-world harm."

    Handling the hottest topics

This policy adjustment isn't happening in a vacuum. It aligns with a broader industry trend in which major tech players are re-evaluating their content moderation strategies. As AI models grow more sophisticated (OpenAI has added reusable 'characters' and video stitching to Sora, for example), the complexities of managing generated content multiply. The challenge lies in developing moderation systems flexible enough to allow legitimate use cases while still effectively curbing harmful content. Spotting AI-generated video, for instance, is becoming increasingly difficult, making moderation at the generation stage crucial.

    A strategic shift or political move?

    Some critics argue that this relaxation of rules could lead to an increase in misuse, such as the spread of misinformation or the creation of deepfakes. However, OpenAI counters that its goal is to "embrace humility—recognising how much we don’t know, and positioning ourselves to adapt as we learn." This suggests an iterative approach to policy-making, where guidelines will be refined based on real-world usage and feedback. The company is likely seeking to avoid the pitfalls of overly restrictive policies that stifle creativity and legitimate use, while also trying to navigate the complex ethical landscape of AI with empathy.

    What’s next for AI moderation?

The implications of OpenAI's new policy are far-reaching. It could influence how other AI developers, such as the Chinese labs whose free models claim to beat GPT-5, approach their own content moderation challenges. As AI capabilities continue to advance, the debate over what constitutes acceptable content and how it should be regulated will only intensify. OpenAI's move signals a growing recognition that effective AI moderation requires a delicate balance between strict controls and flexible guidelines, constantly adapting to new challenges and societal norms.



    Latest Comments (3)

Shota Takahashi @shota_t
11 November 2025

It’s interesting, isn’t it? I tried asking for images of a certain famous *geinin* (comedian) back home, and it actually showed him. Before, it was always “I cannot fulfill that request.” Perhaps this new looseness is a good thing for creativity, but I do worry about the ramifications, especially with deepfakes becoming so believable. It's a tricky wicket, this whole AI moderation business.

Dimas Wijaya @dimas_w_dev
22 June 2025

Wah, this is really interesting! In Indonesia, censorship and the public portrayal of figures are highly sensitive issues. If ChatGPT can be this flexible, it could be a double-edged sword. On one hand, freedom of expression is valued; on the other, there's a worry it will be misused for hoaxes or harmful content, especially with elections that so often run hot. It's important that OpenAI has clear safeguards in place.

Roberto Aquino @roberto_a
6 April 2025

Naku, this is quite a pickle, isn't it? My main concern here is not just *what* images it can now generate, but the *how* of it all. Did OpenAI provide a proper framework, or is this a bit of a free-for-all? I mean, if they're relying purely on user flags for objectionable content now, that seems a bit… reactive. We've seen how quickly things can get out of hand online. It feels like they're playing catch-up instead of being proactive. Is this truly about freedom or just offloading the moderation burden? It almost feels a bit like a ‘bahala na’ (“come what may”) approach, which, in the context of powerful AI, is a tad worrying.
