
    Microsoft's AI Image Generator Raises Concerns over Violent and Sexual Content

    Microsoft's AI image generator, Copilot Designer, has been found to produce violent and sexual content, despite the company's responsible AI principles.

    Anonymous
2 min read · 6 March 2024

    AI Snapshot

    The TL;DR: what matters, fast.

    Microsoft's Copilot Designer AI generates violent, sexual, and copyrighted content, prompting internal concerns.

    Engineer Shane Jones reported these issues and felt Microsoft did not adequately address them, facing pressure to remove public posts.

Copilot Designer produced harmful images, including demons, sexualised women, depictions of underage drug use, and Disney characters in offensive contexts.

    Who should pay attention: AI developers | Content moderators | Ethics committees | Regulators

    What changes next: Scrutiny of AI content moderation practices is likely to intensify.

    Microsoft's AI image generator, Copilot Designer, has been found to produce violent and sexual content, raising concerns over responsible AI practices. A Microsoft engineer has escalated the matter to the FTC and Microsoft's board after the company failed to take appropriate action. The tool also generates images that potentially violate copyright laws, including those of Disney characters.

    Engineer Raises Concerns over Microsoft's AI Image Generator

Microsoft engineer Shane Jones has been testing Microsoft's AI image generator, Copilot Designer, and has been disturbed by the violent and sexual content it produces. Despite raising his concerns with Microsoft, Jones feels the company has not taken appropriate action to address the issue. This raises broader questions about responsible AI in the tech industry, including why pro-social AI is being called the new ESG.

    Violent and Sexual Content Produced by Copilot Designer

Copilot Designer has been found to generate images depicting demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualised images of women in violent tableaus, and underage drinking and drug use. These images go against Microsoft's responsible AI principles and have the potential to cause harm. Such issues highlight the ongoing debate around building AI with empathy for humans and ethical AI development.


    Copyright Violations by Copilot Designer AI Tool

In addition to the violent and sexual content, Copilot Designer also generates images that potentially violate copyright laws. The tool has produced images of Disney characters, such as Elsa from "Frozen," Mickey Mouse, and Star Wars characters, in inappropriate and offensive contexts. This is not an isolated incident, as similar concerns have been raised elsewhere, for example when Warner Bros took Midjourney to court over AI-generated superheroes.

    Was The Engineer Silenced?

    A Microsoft AI engineer, Shane Jones, raised alarms about potential security flaws that could allow the creation of explicit and violent deepfake images, including notable instances involving singer Taylor Swift. Despite his attempts to address these vulnerabilities internally and his communication with OpenAI, Jones claimed that Microsoft's legal department pressured him to remove his public posts detailing these concerns. Microsoft responded, stating that the reported techniques did not bypass their safety filters and that they have robust systems in place to address such concerns. Microsoft emphasised its commitment to investigating and remediating any issues raised by employees. For more information on deepfake concerns, a comprehensive report by the National Institute of Standards and Technology (NIST) on mitigating deepfake risks provides further insights.

    Conclusion

Microsoft's AI image generator, Copilot Designer, has raised serious concerns over the violent and sexual content it produces, as well as potential copyright violations. The company's failure to take appropriate action highlights the need for greater responsibility and accountability in the development and deployment of AI tools. This echoes sentiments seen in other regions, such as how Taiwan's AI law is quietly redefining what "responsible innovation" means.

    Do you think AI companies should be held accountable for the content produced by their tools, even if it was generated autonomously? Let us know in the comments below!



    Latest Comments (4)

    Nicholas Chong@nickchong_dev
    AI
    22 October 2025

    Wah, only just saw this article about Microsoft's AI. Everyone's quick to point fingers, but isn't this more of a reflection of the internet's content it's trained on? Bit like blaming the mirror for the reflection, innit? We need to tackle the root problem of harmful content online, not just the AI that processes it.

    Julien Simon@julien_s_ai
    AI
    22 May 2024

It's really concerning, isn't it? It feels like a story that keeps repeating itself, this difficulty in keeping AI under control, even with the best intentions. It's reminiscent of the early days of certain social networks, where the scale of the excesses was underestimated. It's quite a challenge for developers, and for us users too.

    Luis Torres@luis_t_ph
    AI
    17 April 2024

    Ay, this is a real head-scratcher. If even Copilot, with all of Microsoft's resources and supposed "responsible AI" principles, is spitting out dodgy images, what hope do smaller devs have? Makes you wonder how truly robust these content filters are, eh?

    Crystal Tan@crystaltan
    AI
    27 March 2024

    Wah, this is really kiasu to hear. I was hoping these AI tools would be better vetted, especially from a big company like Microsoft. It's a bit worrying to think about the kind of stuff it could generate if not properly reined in. Definitely needs more robust safety guardrails. Seriously, folks need to be more careful.
