Oh, the internet, what a wild place it's become! If your social media feeds are anything like mine, you've probably noticed a growing number of videos that just feel a bit... off. We're talking about AI-generated videos, and they're popping up everywhere. The tricky bit is, they're getting so good that spotting them can be a real headache.
For now, though, there's one really handy indicator that might just save you from falling for some of these digital deceptions: poor video quality.
The 'Filmed on a Potato' Phenomenon
You know the look, don't you? That grainy, blurry, almost pixelated footage that makes you wonder if it was shot on a really old mobile phone or perhaps, yes, a potato. Well, if you see a video with this kind of shoddy picture quality, it's worth raising an eyebrow. It could very well be an AI creation.
Hany Farid, a computer-science professor at the University of California, Berkeley, and a pioneer in digital forensics, highlighted this point. "It's one of the first things we look at," he told us. Farid even founded a deepfake-detection company, GetReal Security, so he knows a thing or two about this stuff!
Now, let's be clear, this isn't a foolproof method. The best AI tools can actually churn out some incredibly polished clips. And, of course, a low-quality video isn't automatically fake or nefarious. Matthew Stamm, who heads the Multimedia and Information Security Lab at Drexel University, US, explained,
"If you see something that's really low quality, that doesn't mean it's fake. It doesn't mean anything nefarious."
However, the key here is that blurry, pixelated AI videos are often the ones that are designed to trick you. It's a clever way to hide the AI's imperfections.
Hiding in Plain Sight: AI's Subtle Flaws
Even the most advanced AI video generators, like Google's Veo and OpenAI's Sora, still have their quirks. Farid notes that they produce "small inconsistencies," but these aren't always obvious. We're not talking about glaring errors like someone having six fingers anymore. Instead, it's far more subtle. For more on the capabilities of such tools, you can check out our Beginner's Guide to Using Sora AI Video.
Think about things like:
- Uncannily smooth skin textures
- Oddly shifting patterns in hair or clothing
- Background objects that move in impossible or unrealistic ways
These slight glitches are much easier to miss when the video quality is poor. A crisp, high-definition clip would likely reveal these tell-tale AI errors much more readily. So, by deliberately making videos look low-quality, AI creators can effectively mask these imperfections, making their fakes more convincing.
The Short, Blurry, and Compressed Truth
Remember that viral video of bunnies on a trampoline, or the emotional clip of a couple falling in love on the New York subway, or even that surprisingly progressive sermon from an American priest? They all turned out to be AI fakes, and they all had something in common: they looked like they were filmed on a potato. There's also an ongoing discussion in our piece YouTube's Secret AI: Reality-Bending Edits?
Farid points to three key things to look out for: resolution, quality, and length.
- Length: This is usually the easiest giveaway. Most AI videos are surprisingly short, often just six, eight, or ten seconds long. This is because generating AI video is expensive, and the longer the clip, the more likely the AI is to make a mistake. You can stitch multiple AI videos together, but you'll often spot the cuts every few seconds.
- Resolution and Quality: These are related but distinct. Resolution is the sheer number of pixels in an image, while quality is largely a matter of compression, a process that shrinks video files by throwing away detail. Heavy compression often leaves blocky patterns and blurred edges.
In fact, the 'bad guys' often intentionally downgrade their AI-generated content. As Farid puts it, "If I'm trying to fool people, what do I do? I generate my fake video, then I reduce the resolution so you can still see it, but you can't make out all the little details. And then I add compression that further obfuscates any possible artefacts." It's a deliberate tactic to hide the digital fingerprints of AI.
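You can see why this tactic works with a toy sketch. This is not any real tool's pipeline; it just models a tiny grayscale image as a 2D list of pixel values and applies the two steps Farid describes: reducing resolution (block-averaging neighbouring pixels) and lossy compression (here approximated by coarse quantisation, which throws away fine detail).

```python
def downscale(img, factor=2):
    """Average each factor x factor block into one pixel (lower resolution)."""
    h, w = len(img), len(img[0])
    return [
        [
            sum(img[y + dy][x + dx] for dy in range(factor) for dx in range(factor))
            // (factor * factor)
            for x in range(0, w, factor)
        ]
        for y in range(0, h, factor)
    ]

def quantise(img, step=32):
    """Snap pixel values to coarse levels, mimicking heavy compression."""
    return [[(v // step) * step for v in row] for row in img]

# A 4x4 image with a single bright pixel (255) standing in for the kind of
# small artefact an AI generator might leave behind.
original = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]

degraded = quantise(downscale(original))
print(degraded)  # the artefact is smeared into its neighbours and flattened
```

After downscaling, the bright artefact is averaged away into its 2x2 block, and quantisation flattens what remains: exactly the "obfuscation" effect Farid describes, in miniature.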
The Future: A Constant Battle
It's a bit grim, but we need to face it: this advice about blurry videos won't last forever. Tech giants are pouring billions into making AI even more realistic. Matthew Stamm predicts that these visual cues will likely disappear from video within the next couple of years, just as they've mostly vanished from still images (see our piece AI Faces: Flawless, Symmetrical, Unsettling). Soon, we simply won't be able to trust our eyes.
So, what then?
Researchers like Farid and Stamm use advanced techniques to verify content, looking for "statistical traces" that our eyes can't see, like digital fingerprints. Think of it like forensic science for videos. Technology companies are also exploring new standards to embed information into files at the point of creation, either to prove authenticity or to clearly label AI-generated content. For a deeper dive into the challenges of AI verification, the National Institute of Standards and Technology (NIST) provides valuable insights in their publications on AI trustworthiness (https://www.nist.gov/artificial-intelligence).
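To make the "embed information at the point of creation" idea concrete, here is a deliberately simplified sketch. It is not the actual standard any company uses (real schemes rely on public-key certificates and richer manifests); it just signs a hash of a file's bytes when it is created, so any later edit breaks the check. The key name and record format are made up for illustration.

```python
import hashlib
import hmac

CREATOR_KEY = b"secret-key-held-by-the-camera-or-tool"  # hypothetical key

def sign_at_creation(content: bytes) -> dict:
    """Produce a provenance record to ship alongside the file."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(CREATOR_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def verify(content: bytes, record: dict) -> bool:
    """Check the file still matches the record made at creation time."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(CREATOR_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

video = b"original footage bytes"
record = sign_at_creation(video)
print(verify(video, record))              # True: untouched file
print(verify(b"tampered bytes", record))  # False: any edit breaks the chain
```

The point of the sketch is the shift it illustrates: instead of squinting at pixels for artefacts, you ask whether the file still carries a verifiable record from the moment it was made.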
Shifting Our Mindset
Ultimately, the real solution might lie in how we, as users, approach online content. Digital literacy expert Mike Caulfield suggests that constantly chasing new AI 'tells' is a losing battle. Instead, we need to completely rethink our relationship with online videos and images.
Think about it: you wouldn't just believe a piece of text simply because someone wrote it down. You'd consider the source, the context, and whether it's been verified. Videos and images used to be different because they were harder to fake. That's no longer the case.
As Caulfield puts it: "My perspective is that largely video is going to become somewhat like text, long term, where provenance [the origin of the video], not surface features, will be most key, and we might as well prepare for that."
The crucial questions moving forward will be:
- Where did this content come from?
- Who posted it?
- What's the context?
- Has a trustworthy source verified it?
This is a massive challenge, perhaps "the greatest information security challenge of the 21st century," as Stamm puts it. But he remains hopeful, believing that a combination of solutions – education, intelligent policies, and technological advances – will help us navigate this brave new world of digital information. It's an ongoing discussion, similar to the broader conversation in AI's Secret Revolution: Trends You Can't Miss, and we explore related topics in AI Browsers Under Threat as Researchers Expose Deep Flaws.
It's a journey, and we're all learning as we go!