AI in ASIA
Business

Microsoft's AI Image Generator Raises Concerns over Violent and Sexual Content

Microsoft's AI image generator, Copilot Designer, has been found to produce violent and sexual content, despite the company's responsible AI principles.

Intelligence Desk · 2 min read

AI Snapshot

The TL;DR: what matters, fast.

Microsoft's Copilot Designer AI generates violent, sexual, and copyrighted content, prompting internal concerns.

Engineer Shane Jones reported these issues and felt Microsoft did not adequately address them, facing pressure to remove public posts.

Copilot Designer has produced harmful images, including demons, sexualised women, underage drug use, and Disney characters in offensive contexts.

Who should pay attention: AI developers | Content moderators | Ethics committees | Regulators

What changes next: Scrutiny of AI content moderation practices is likely to intensify.

Microsoft's AI image generator, Copilot Designer, has been found to produce violent and sexual content, raising concerns over responsible AI practices. A Microsoft engineer has escalated the matter to the US Federal Trade Commission (FTC) and Microsoft's board, saying the company failed to take appropriate action on his reports. The tool also generates images that potentially violate copyright, including depictions of Disney characters.

Engineer Raises Concerns over Microsoft's AI Image Generator

Microsoft engineer Shane Jones has been testing Microsoft's AI image generator, Copilot Designer, and has been disturbed by the violent and sexual content it produces. Despite raising his concerns with Microsoft, Jones feels the company has not taken appropriate action to address the issue. The episode raises broader questions about corporate accountability in AI, a theme explored in Why ProSocial AI Is The New ESG.

Violent and Sexual Content Produced by Copilot Designer

Copilot Designer has been found to generate images depicting demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualised images of women in violent tableaus, and underage drinking and drug use. These images run counter to Microsoft's responsible AI principles and have the potential to cause harm. Such issues highlight the ongoing debate around ethical AI development, explored further in AI with Empathy for Humans.

In addition to the violent and sexual content, Copilot Designer generates images that potentially violate copyright. The tool has produced images of Disney characters, such as Elsa from "Frozen," Mickey Mouse, and Star Wars characters, in inappropriate and offensive contexts. This is not an isolated incident: similar concerns surfaced when Warner Bros took Midjourney to court over AI and superheroes.

Was The Engineer Silenced?

Jones raised alarms about potential security flaws that could allow the creation of explicit and violent deepfake images, including notable instances involving singer Taylor Swift. Despite his attempts to address these vulnerabilities internally, and his communication with OpenAI, Jones claimed that Microsoft's legal department pressured him to remove his public posts detailing the concerns. Microsoft responded that the reported techniques did not bypass its safety filters and that it has robust systems in place to address such issues, emphasising its commitment to investigating and remediating any concerns raised by employees. For further background on deepfake risks, see the National Institute of Standards and Technology (NIST) report on mitigating them.

Conclusion

Microsoft's AI image generator, Copilot Designer, has raised serious concerns over the violent and sexual content it produces, as well as potential copyright violations. The company's apparent failure to act underscores the need for greater responsibility and accountability in the development and deployment of AI tools. Similar pressures are reshaping policy elsewhere, as in how Taiwan's AI Law Is Quietly Redefining What "Responsible Innovation" Means.

Do you think AI companies should be held accountable for the content produced by their tools, even if it was generated autonomously? Let us know in the comments below!
