News
Voice From the Grave: Netflix’s AI Clone of Murdered Influencer Sparks Outrage
Netflix’s use of AI to recreate murder victim Gabby Petito’s voice in American Murder: Gabby Petito has ignited a firestorm of controversy.
Published 1 month ago by AIinAsia
TL;DR – What You Need to Know in 30 Seconds
- Gabby Petito’s Voice Cloned by AI: Netflix used generative AI to replicate the murdered influencer’s voice, narrating her own texts and journals in American Murder: Gabby Petito.
- Massive Public Outcry: Viewers call the move “deeply unsettling” and say it violates a murder victim’s memory without her explicit consent.
- Family Approved – But Is That Enough? Gabby’s parents supposedly supported the decision, but critics argue that the emotional authenticity just isn’t there and it sets an alarming precedent.
- Growing Trend in AI ‘Docu-Fiction’: Netflix has toyed with AI imagery before. With the rising popularity of AI tools, this might just be the beginning of posthumous voice cloning in media.
Netflix’s AI Voice Recreation of Gabby Petito Sparks Outrage: The True Crime Documentary Stirs Ethical Concerns
In a world where true crime has become almost as popular as cat videos, Netflix’s latest release, American Murder: Gabby Petito, has turned heads in a way nobody quite expected. The streaming giant, never one to shy away from sensational storytelling, has gone high-tech – or, depending on your perspective, crossed a line – by using generative AI to recreate the voice of Gabby Petito, a 22-year-old social media influencer tragically murdered in August 2021.
But has this avant-garde approach led to a meaningful tribute, or merely morphed a devastating real-life tragedy into a tech sideshow? Let’s delve into the unfolding drama, hear what critics are saying, and weigh up whether Netflix has jumped the shark with its AI voice cloning.
The Case That Shocked Social Media
Gabby Petito’s story captured headlines in 2021. According to the FBI, the young influencer was murdered by her fiancé, Brian Laundrie, during a cross-country road trip. Their relationship, peppered across social media, gave the world a stark and heart-wrenching view of what was happening behind the scenes. Gabby’s disappearance, followed by the discovery of her body, ignited massive public scrutiny and online sleuthing.
So, when Netflix announced it was developing a true crime documentary about Gabby’s case, titled American Murder: Gabby Petito, the internet was all ears. True crime fans tuned in for the premiere on Monday, only to find a surprising disclosure in the opening credits: Petito’s text messages and journal entries would be “brought to life” in what Netflix claims is “her own voice, using voice recreation technology”.
A Techy Twist… or Twisted Tech?
Let’s talk about that twist. Viewers soon discovered that the series literally put Gabby’s words in Gabby’s mouth, using AI-generated audio. But is this a touching homage, or a gruesome gimmick?
- One viewer on X (formerly Twitter) called the move a “deeply unsettling use of AI.”
- Another posted: “That is absolutely NOT okay. She’s a murder victim. You are violating her again.”
These remarks sum up the general outcry across social platforms. The main concern? That Gabby’s voice was effectively hijacked and resurrected without her explicit consent. Her parents, however, appear to have greenlit the decision, offering Gabby’s journals and personal writings to help shape the script.
“We had so much material from her parents that we were able to get. At the end of the day, we wanted to tell the story as much through Gabby as possible. It’s her story.”
Their stance is clear: the production team sees voice recreation as a means to bring Gabby’s point of view into sharper focus. But a lot of viewers remain uneasy, saying they’d prefer archived recordings if authenticity was the goal.
A Tradition of Controversy
Netflix is no stranger to boundary-pushing in its true crime productions. Last year, eagle-eyed viewers noticed American Manhunt: The Jennifer Pan Story featured images that appeared generative or manipulated by AI. In many respects, this is just another addition to a growing trend of digital trickery in modern storytelling.
Outlets like 404 Media have further reported a surge in YouTube channels pumping out “true crime AI slop” – random, possibly fabricated stories produced by generative AI and voice cloning. As these tools become more widespread, it’s increasingly hard to tell fact from fiction, let alone evaluate ethical ramifications.
“It can’t predict what her intonation would have been, and it’s just gross to use it,” one Reddit user wrote in response to the Gabby Petito doc. “At the very least I hope they got consent from her family… I just don’t like the precedent it sets for future documentaries either.”
This points to a key dilemma: how do we prevent well-meaning storytellers from overstepping moral boundaries with technology designed to replicate or reconstruct the dead? Are we treading on sensitive territory, or is this all simply the new face of docu-drama?
Where Do We Draw the Line After the Netflix AI Voice Controversy?
For defenders of American Murder: Gabby Petito, it boils down to parental endorsement. If Gabby’s parents gave the green light, then presumably they believed this approach would honour their daughter’s memory. Some argue that hearing Gabby’s voice – albeit synthesised – might create a deeper emotional connection for viewers and reinforce that she was more than just a headline.
“In all of our docs, we try to go for the source and the people closest to either the victims who are not alive or the people themselves who have experienced this,” explains producer Julia Willoughby Nason. “That’s really where we start in terms of sifting through all the data and information that comes with these huge stories.” (Us Weekly)
On the other hand, there’s a fair amount of hand-wringing, and for good reason. The true crime genre is already fraught with ethical pitfalls, often seen as commodifying tragic incidents for mainstream entertainment. Voice cloning a victim’s words can come off as invasive, especially for a crime so recent and so visible on social media.
One user captured the discomfort perfectly:
“I understand they had permission from the parents, but that doesn’t make it feel any better,” they wrote, adding that the AI voice sounded “monotone, lacking in emotion… an insult to her.”
And that’s the heart of it. Even with family approval, it’s a sensitive business reanimating someone’s voice once they can no longer speak for themselves. For many, the question becomes: is it a truly fitting tribute, or simply a sensational feature?
Ethics or Entertainment?
Regardless of where you land on the debate, it’s clear that Netflix’s choice marks a new chapter in how technology can (and likely will) be used to re-create missing or murdered people in media. As AI becomes more powerful and more prevalent, it’s not much of a stretch to imagine docu-series about other public figures employing the same approach.
“This world hates women so much,” another user tweeted, expressing the view that once again, a woman’s agency in her own narrative has been undermined.
Is that a fair assessment, or simply an overstatement borne of the outrage cycle? If Gabby’s story can be told in her own words, with parental involvement, should we not appreciate the effort? Or is the line drawn at artificially resurrecting her voice – a voice that was forcibly taken from her?
So, we’re left to debate a tech-fuelled question: is it a beautiful homage, or a blatant overreach? As we march into an era where AI blurs the lines of reality, Netflix’s approach is just one sign of things to come.
With generative AI firmly planting its flag in creative storytelling, where do you think the moral boundary lies?
News
OpenAI’s New ChatGPT Image Policy: Is AI Moderation Becoming Too Lax?
ChatGPT now generates previously banned images of public figures and symbols. Is this freedom overdue or dangerously permissive?
Published March 30, 2025 by AIinAsia
TL;DR – What You Need to Know in 30 Seconds
- ChatGPT can now generate images of public figures, previously disallowed.
- Requests related to physical and racial traits are now accepted.
- Controversial symbols are permitted in strictly educational contexts.
- OpenAI argues for nuanced moderation rather than blanket censorship.
- Move aligns with industry trends towards relaxed content moderation policies.
Is AI Moderation Becoming Too Lax?
ChatGPT just got a visual upgrade—generating whimsical Studio Ghibli-style images that quickly became an internet sensation. But look beyond these charming animations, and you’ll see something far more controversial: OpenAI has significantly eased its moderation policies, allowing users to generate images previously considered taboo. So, is this a timely move towards creative freedom or a risky step into a moderation minefield?
ChatGPT’s new visual prowess
OpenAI’s latest model, GPT-4o, introduces impressive image-generation capabilities directly inside ChatGPT. With advanced photo editing, sharper text rendering, and improved spatial representation, ChatGPT now rivals specialised image AI tools.
But the buzz isn’t just about cartoonish visuals; it’s about OpenAI’s major shift on sensitive content moderation.
Moving beyond blanket bans
Previously, if you asked ChatGPT to generate an image featuring public figures—say Donald Trump or Elon Musk—it would simply refuse. Similarly, requests for hateful symbols or modifications highlighting racial characteristics (like “make this person’s eyes look more Asian”) were strictly off-limits.
No longer. Joanne Jang, OpenAI’s model behaviour lead, explained the shift clearly:
“We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility—recognising how much we don’t know, and positioning ourselves to adapt as we learn.”
In short, fewer instant rejections, more nuanced responses.
Exactly what’s allowed now?
With this update, ChatGPT can now depict public figures upon request, moving away from selectively policing celebrity imagery. OpenAI will allow individuals to opt-out if they don’t want AI-generated images of themselves—shifting control back to users.
Controversially, ChatGPT also now accepts previously prohibited requests related to sensitive physical traits, like ethnicity or body shape adjustments, sparking fresh debate around ethical AI usage.
Handling the hottest topics
OpenAI is cautiously permitting requests involving controversial symbols—like swastikas—but only in neutral or educational contexts, never endorsing harmful ideologies. GPT-4o also continues to enforce stringent protections, especially around images involving children, setting even tighter standards than its predecessor, DALL-E 3.
Yet, loosening moderation around sensitive imagery has inevitably reignited fierce debates over censorship, freedom of speech, and AI’s ethical responsibilities.
A strategic shift or political move?
OpenAI maintains these changes are non-political, emphasising instead their longstanding commitment to user autonomy. But the timing is provocative, coinciding with increasing regulatory pressure and scrutiny from politicians like Republican Congressman Jim Jordan, who recently challenged tech companies about perceived biases in AI moderation.
This relaxation of restrictions echoes similar moves by other tech giants—Meta and X have also dialled back content moderation after facing similar criticisms. AI image moderation, however, poses unique risks due to its potential for widespread misinformation and cultural distortion, as Google’s recent controversy over historically inaccurate Gemini images has demonstrated.
What’s next for AI moderation?
ChatGPT’s new creative freedom has delighted users, but the wider implications remain uncertain. While memes featuring beloved animation styles flood social media, the same freedom could enable the rapid spread of far more harmful imagery. OpenAI’s balancing act could quickly draw regulatory attention—particularly under the Trump administration’s more critical stance towards tech censorship.
The big question now: Where exactly do we draw the line between creative freedom and responsible moderation?
Let us know your thoughts in the comments below!
News
Tencent Joins China’s AI Race with New T1 Reasoning Model Launch
Tencent launches its powerful new T1 reasoning model amid growing AI competition in China, while startup Manus gains major regulatory and media support.
Published March 27, 2025 by AIinAsia
TL;DR – What You Need to Know in 30 Seconds
- Tencent has launched its upgraded T1 reasoning model
- Competition heats up in China’s AI market
- Beijing spotlights Manus
- Manus partners with Alibaba’s Qwen AI team
The Tencent T1 Reasoning Model Has Launched
Tencent has officially launched the upgraded version of its T1 reasoning model, intensifying competition within China’s already bustling artificial intelligence sector. Announced on Friday (21 March), the T1 reasoning model promises significant enhancements over its preview edition, including faster responses and improved processing of lengthy texts.
In a WeChat announcement, Tencent highlighted T1’s strengths, noting it “keeps the content logic clear and the text neat,” while maintaining an “extremely low hallucination rate,” referring to the AI’s tendency to generate accurate, reliable outputs without inventing false information.
The Turbo S Advantage
The T1 model is built on Tencent’s own Turbo S foundational language technology, introduced last month. According to Tencent, Turbo S notably outpaces competitor DeepSeek’s R1 model when processing queries, a claim backed up by benchmarks Tencent shared in its announcement. These tests showed T1 leading in several key knowledge and reasoning categories.
Tencent’s latest launch comes amid heightened rivalry sparked largely by DeepSeek, a Chinese startup whose powerful yet affordable AI models recently stunned global tech markets. DeepSeek’s success has spurred local companies like Tencent into accelerating their own AI investments.
Beijing Spotlights Rising AI Star Manus
The race isn’t limited to tech giants. Manus, a homegrown AI startup, also received a major boost from Chinese authorities this week. On Thursday, state broadcaster CCTV featured Manus for the first time, comparing its advanced AI agent technology favourably against more traditional chatbot models.
Manus became a sensation globally after unveiling what it claims to be the world’s first truly general-purpose AI agent, capable of independently making decisions and executing tasks with minimal prompting. This autonomy differentiates it sharply from existing chatbots such as ChatGPT and DeepSeek.
Crucially, Manus has now cleared significant regulatory hurdles. Beijing’s municipal authorities confirmed that a China-specific version of Manus’ AI assistant, Monica, is fully registered and compliant with the country’s strict generative AI guidelines, a necessary step before public release.
Further strengthening its domestic foothold, Manus recently announced a strategic partnership with Alibaba’s Qwen AI team, a collaboration likely to accelerate the rollout of Manus’ agent technology across China. Currently, Manus’ agent is accessible only via invite codes, with an eager waiting list already surpassing two million.
The Race Has Only Just Begun
With Tencent’s T1 now officially in play and Manus gaining momentum, China’s AI competition is clearly heating up, promising exciting innovations ahead. As tech giants and ambitious startups alike push boundaries, China’s AI landscape is becoming increasingly dynamic—leaving tech enthusiasts and investors eagerly watching to see who’ll take the lead next.
What do YOU think?
Could China’s AI startups like Manus soon disrupt Silicon Valley’s dominance, or will giants like Tencent keep the competition at bay?
News
Google’s Gemini AI is Coming to Your Chrome Browser — Here’s the Inside Scoop
Google is integrating Gemini AI into Chrome browser through a new experimental feature called Gemini Live in Chrome (GLIC). Here’s everything you need to know.
Published March 25, 2025 by AIinAsia
TL;DR – What You Need to Know in 30 Seconds
- Google is integrating Gemini AI into its Chrome browser via an experimental feature called Gemini Live in Chrome (GLIC).
- GLIC adds a clickable Gemini icon next to Chrome’s window controls, opening a floating AI assistant modal.
- Currently being tested in Chrome Canary, the feature aims to streamline AI interactions without leaving the browser.
Welcoming Google’s Gemini AI to Your Chrome Browser
If there’s one thing tech giants love more than AI right now, it’s finding new ways to shove that AI into everything we use. And Google—never one to be left behind—is apparently stepping up its game by sliding its Gemini AI directly into your beloved Chrome browser. Yep, that’s the buzz on the digital street!
This latest AI adventure popped up thanks to eagle-eyed folks at Windows Latest, who spotted intriguing code snippets hidden in Google’s Chrome Canary version. Canary, if you haven’t played with it before, is Google’s playground version of Chrome. It’s the spot where they test all their wild and wonderful experimental features, and it looks like Gemini’s next up on stage.
Say Hello to GLIC: Gemini Live in Chrome
They’re calling this new integration “GLIC,” which stands for “Gemini Live in Chrome.” (Yes, tech companies never resist a snappy acronym, do they?) According to the early glimpses from Canary, GLIC isn’t quite ready for primetime yet—no shock there—but the outlines are pretty clear.
Once activated, GLIC introduces a nifty Gemini icon neatly tucked up beside your usual minimise, maximise, and close window buttons. Click it, and a floating Gemini assistant modal pops open, ready and waiting for your prompts, questions, or random curiosities.
Prefer a less conspicuous spot? Google’s thought of that too—GLIC can also nestle comfortably in your system tray, offering quick access to Gemini without cluttering your browser interface.
Why Gemini in Chrome Actually Makes Sense
Having Gemini hanging out front and centre in Chrome feels like a smart move—especially when you’re knee-deep in tabs and need quick answers or creative inspiration on the fly. No more toggling between browser tabs or separate apps; your AI assistant is literally at your fingertips.
But let’s keep expectations realistic here—this is still Canary we’re talking about. Features often need plenty of polish and tweaking before making it to the stable Chrome we all rely on. The potential, though? Definitely exciting.
What’s Next?
For now, we’ll keep a close eye on GLIC’s developments. Will Gemini revolutionise how we interact with Chrome, or will it end up another quirky experiment? Either way, Google’s bet on AI is clearly ramping up, and we’re here for it. Don’t forget to sign up to our occasional newsletter to stay informed about this and other happenings around AI in Asia and beyond.
Stay tuned—we’ll share updates as soon as Google lifts the curtains a bit further.