
AI in ASIA · Life

YouTube's Secret AI: Reality-Bending Edits?

YouTube secretly uses AI to enhance creator videos without consent, sparking backlash over digital authenticity and manipulation concerns.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

YouTube secretly uses AI to enhance videos without creator consent, altering appearance and quality

Music YouTubers Rick Beato and Rhett Shull discovered and exposed the unauthorized modifications

The controversy raises serious questions about digital authenticity and platform transparency

YouTube's Secret AI Enhancement Sparks Creator Backlash

When music YouTuber Rick Beato noticed something odd about his appearance in a recent video, he couldn't shake the feeling that his hair looked "strange" and that he seemed to be wearing makeup. His instincts were correct. YouTube has been secretly using AI to enhance videos without creator consent, sparking concerns about authenticity and digital manipulation.

The revelation emerged after multiple creators reported similar anomalies in their content. What began as subtle observations about sharper skin textures and altered facial features has evolved into a broader debate about consent, transparency, and our relationship with digitally mediated reality.

The Discovery That Changed Everything

Rhett Shull, another prominent music YouTuber, became increasingly disturbed by the AI alterations in his videos. His video addressing the issue garnered over half a million views, demonstrating widespread creator concern. Complaints had been surfacing on social media since June 2024, featuring close-ups of distorted body parts and questions about YouTube's intentions.


"The more I looked at it, the more upset I got. If I wanted this terrible over-sharpening I would have done it myself. But the bigger thing is it looks AI-generated. I think that deeply misrepresents me and what I do and my voice on the internet." Rhett Shull, Music YouTuber

After months of speculation, YouTube finally acknowledged the alterations. Rene Ritchie, YouTube's head of editorial and creator liaison, confirmed the company was "running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing."

By The Numbers

  • Over 500,000 views on Rhett Shull's complaint video within days
  • Rick Beato boasts more than 5 million subscribers and nearly 2,000 videos
  • Complaints began emerging on social media platforms in June 2024
  • YouTube Shorts receives billions of views daily across the platform
  • The experiment affects only a "limited number" of videos according to YouTube

The controversy highlights how AI is quietly reshaping daily reality and digital consumption, often without explicit user awareness or consent.

Samuel Woolley, a disinformation expert at the University of Pittsburgh, draws a crucial distinction between voluntary smartphone AI enhancements and YouTube's approach. Woolley suggests YouTube's use of "machine learning" terminology might deliberately downplay AI concerns. While machine learning is technically a subset of AI, the fundamental issue remains: unconsented content alteration.

"You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos." Samuel Woolley, University of Pittsburgh

This practice extends beyond video platforms. Similar concerns have emerged around the environmental cost of AI tools and the broader implications of AI-mediated content consumption without transparency.

Platform       | AI Enhancement            | User Control      | Transparency
YouTube Shorts | Video clarity, denoising  | None              | Disclosed after complaints
Samsung Galaxy | Moon photo enhancement    | Can disable       | Initially hidden
Google Pixel   | Best Take, Magic Eraser   | Optional features | Clearly labelled
Netflix        | AI remastering            | None              | Not disclosed

Reality's Shifting Foundations

The implications extend beyond YouTube. Netflix faced similar criticism for AI-remastered versions of the 1980s sitcoms The Cosby Show and A Different World, where viewers noticed distorted faces and backgrounds in what were marketed as high-definition versions. Professor Jill Walker Rettberg of the University of Bergen offers a compelling analogy for our changing relationship with authentic content.

"Footsteps in the sand are a great analogy. You know someone made those footprints. With an analogue camera, you know something was in front of the camera because the film was exposed to light. But with algorithms and AI, what does this do to our relationship with reality?" Jill Walker Rettberg, University of Bergen

These developments parallel concerns about separating AI hype from reality, where the gap between AI promises and actual implementation becomes increasingly apparent.

Key warning signs of AI enhancement include:

  • Unnaturally smooth skin textures combined with over-sharpened details
  • Slight facial distortions, particularly around ears and hairlines
  • Inconsistent lighting or shadow patterns across frames
  • Enhanced clothing details that appear artificially defined
  • Subtle colour saturation changes that feel unnatural
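None of these signs is conclusive on its own, but over-sharpening is one that can be crudely quantified. The sketch below is illustrative only, not a real detector: a hypothetical `laplacian_variance` helper, in plain NumPy, that scores the high-frequency detail in a grayscale frame. Comparing the score of an original upload against the served version can hint at added sharpening.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian: a crude sharpness score.

    Higher values mean more high-frequency detail; a large jump
    between an original upload and the served video can hint at
    added sharpening.
    """
    g = gray.astype(np.float64)
    # 3x3 Laplacian via shifted-array arithmetic (no SciPy/OpenCV),
    # evaluated on the interior of the image.
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

# Synthetic demo: a smooth gradient vs. the same gradient with
# exaggerated high-frequency "crunch" standing in for over-sharpening.
smooth = np.tile(np.linspace(0, 255, 64), (64, 1))
noise = np.random.default_rng(0).normal(0, 1, smooth.shape)
sharpened = smooth + 8 * noise

print(laplacian_variance(smooth))     # near zero for smooth content
print(laplacian_variance(sharpened))  # much larger
```

In practice a real detector would compare like-for-like frames and account for compression, but the direction of the signal is the same.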

Industry Response and Future Implications

Not all creators share the same concerns. Rick Beato remains largely supportive of YouTube's innovations, though his initial confusion about the AI alterations sparked the broader investigation.

"You know, YouTube is constantly working on new tools and experimenting with stuff. They're a best-in-class company, I've got nothing but good things to say. YouTube changed my life." Rick Beato, Music YouTuber

However, the broader industry is responding to transparency concerns. Google's upcoming Pixel 10 phone will be the first to incorporate industry-standard content credentials, essentially digital watermarks for AI-edited images. This aligns with growing discussions around watermarking as protection against AI manipulation and responsible AI development.
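Content credentials work by binding a cryptographically signed manifest (what was edited, with which tool) to the media bytes. The real C2PA standard uses certificate-based signatures and embedded metadata; the sketch below only illustrates the underlying idea with a symmetric HMAC over a file hash, and every name in it is hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate

def make_manifest(media: bytes, tool: str) -> dict:
    """Attach a signed claim describing how the media was processed."""
    claim = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "edited_with": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Check the signature, then check the media bytes are unmodified."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return hashlib.sha256(media).hexdigest() == manifest["claim"]["sha256"]

video = b"original pixels"
m = make_manifest(video, tool="AI clarity filter v1")
print(verify_manifest(video, m))                # True
print(verify_manifest(b"silently altered", m))  # False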

What exactly is YouTube doing to videos?

YouTube is applying AI-powered enhancements including denoising, unblurring, and clarity improvements to select YouTube Shorts videos. The company describes this as similar to smartphone camera processing, though creators aren't informed when their content is altered.
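"Unblur and improve clarity" in practice usually means some form of sharpening. Classic unsharp masking, shown below, is one illustrative technique (an assumption for explanation, not YouTube's actual pipeline): subtract a blurred copy from the image and amplify the difference, which exaggerates edges and produces the halo artefacts creators described.

```python
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 box blur using edge padding, plain NumPy."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def unsharp_mask(img: np.ndarray, amount: float = 1.5) -> np.ndarray:
    """Sharpen by amplifying the difference from a blurred copy."""
    blurred = box_blur(img)
    return np.clip(img + amount * (img - blurred), 0, 255)

# A soft edge: pixel values step from 50 to 200 across the middle.
frame = np.full((5, 6), 50.0)
frame[:, 3:] = 200.0
out = unsharp_mask(frame)
# Pixels on either side of the edge overshoot (darker darks,
# brighter brights): the "halo" look of over-sharpened video.
```

With `amount=1.5`, the columns flanking the edge clip all the way to 0 and 255 while flat regions are untouched, which is why sharpening is so visible on hairlines and clothing detail.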

Can creators opt out of these AI enhancements?

Currently, YouTube hasn't provided creators with an opt-out option. The company hasn't responded to questions about whether users will gain control over these alterations in future updates to its platform policies.

How can viewers identify AI-enhanced videos?

Look for unnaturally smooth skin combined with over-sharpened details, slight facial distortions, inconsistent lighting patterns, and artificially enhanced clothing textures. Side-by-side comparisons often reveal the most obvious differences in processing.

Is this practice legal without creator consent?

While likely covered under YouTube's terms of service, the practice raises ethical questions about content manipulation without explicit creator knowledge or consent, particularly given the platform's role in creator monetisation.

What other platforms use similar AI enhancements?

Netflix has applied AI remastering to older content, Samsung enhances moon photography, and Google Pixel offers AI photo features. However, most provide some level of user control or transparency about these enhancements.

The AIinASIA View: YouTube's secret AI enhancements represent a troubling trend towards non-consensual content manipulation that undermines creator autonomy and viewer trust. While technical improvements aren't inherently problematic, the lack of transparency and creator control sets a dangerous precedent. We believe platforms must prioritise explicit consent and clear labelling for any AI alterations. The future of digital content depends on maintaining authentic creator-audience relationships, not algorithmic mediation that serves platform interests over creator integrity. YouTube should immediately implement opt-out controls and clear disclosure requirements.

The implications of AI-mediated content extend far beyond individual creator concerns. As these technologies become more sophisticated and widespread, they challenge fundamental assumptions about authenticity in digital media. Whether this represents technological progress or a concerning erosion of trust between creators and audiences remains hotly debated.

What's your stance on platforms using AI to enhance content without explicit creator consent? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the Governance Essentials learning path.


Latest Comments (6)

Krit Tantipong (@krit_99)
1 December 2025

This YouTube AI thing making faces look weird reminds me of some of the challenges we've had with object recognition for logistics in Thailand. When our computer vision models are trained on mostly Western datasets, they sometimes struggle with variations in skin tone or even local packaging designs. It's like the AI tries to "fix" what it doesn't recognize as normal. This subtle image manipulation by YouTube, even for minor things like shirt wrinkles or skin texture, could easily introduce biases if applied more broadly, affecting how products are perceived online, especially if the AI isn't specifically trained on diverse Asian features or product types. It's a reminder that even subtle AI interventions can have bigger implications.

Ploy Siriwan (@ploytech)
25 November 2025

whoa, this youtube AI thing is wild. like, the part about rick beato's hair looking "strange" and then rhett shull seeing that "terrible over-sharpening" that makes things look AI-generated. it's kinda creepy how subtle this is. makes me think about some of the small creators here in thailand and vietnam. they're so focused on growth, usually just trying to get their content out there. if youtube is messing with their videos, even subtly, without telling anyone... that's a big deal for trust, especially when audiences here can be really sensitive to authenticity. it’s not just big YouTubers getting affected! 😬

James Clarke (@jamesclarke)
16 November 2025

we're seeing similar challenges with deepfake detection here in Manchester, especially with the rapid pace of development in generative AI. it really highlights the need for transparency, even with seemingly minor tweaks. the ethical implications are huge for creators and consumers alike.

Hye-jin Choi (@hyejinc)
14 November 2025

The lack of transparency here reminds me of the debate in Korea around AI-driven content manipulation, especially concerning deepfakes and public trust. This isn't just about appearance but the integrity of communication.

Lee Chong Wei (@lcw_tech)
11 November 2025

This is exactly the kind of black box optimization that makes DevSecOps so difficult in production. YouTube just pushes these AI models into their pipelines and who knows what edge cases they're creating. Rick Beato's "strange hair" and Rhett Shull getting upset about over-sharpening sounds like a classic case of unmonitored downstream effects. I bet they're just chasing some arbitrary engagement metric without considering the actual visual integrity. Not to mention the infrastructure cost of running these inference engines on billions of videos. It's a miracle it's not breaking more things.

Lakshmi Reddy (@lakshmi.r)
8 November 2025

This definitely reminds me of the research around "digital colonialism" in AI, where platforms based in the global north make decisions about how content is presented universally, without transparent consultation. Shull's point about eroding trust with his audience because of misrepresentation is very salient for creators whose cultural context might be completely ignored by these default AI enhancements.
