
AI in ASIA

Deepfakes Deconstructed: 6 Verification Methods

Spotting deepfakes is getting tougher. Learn six crucial verification methods to navigate the digital world. Click to uncover the truth!

Anonymous · 5 min read

AI Snapshot

The TL;DR: what matters, fast.

AI-generated videos are increasingly prevalent on social media, making it difficult to distinguish real content from fabricated media.

Sophisticated AI tools can create highly realistic scenes, including those featuring public figures or historical events, with significant implications for news and privacy.

Deepfakes are designed to deceive by imitating real people or events, and identifying visual glitches, inconsistencies, and other subtle errors can help in their detection.

Who should pay attention: Social media users | Journalists | Content creators

What changes next: The sophistication of AI video generation and detection methods will continue to evolve.

AI-generated videos are now a common sight across social media platforms, making it increasingly difficult to discern what's real from what's fabricated. From fantastical animal rescues to seemingly mundane influencer content, these sophisticated fakes are fooling millions. Understanding the tell-tale signs is crucial for navigating our increasingly digital world.

The Rise of Synthetic Media

The proliferation of AI video technology has dramatically changed the media landscape. Tools like OpenAI's Sora 2 and Google's Nano Banana model can produce incredibly realistic scenes, complete with synchronised dialogue and sound effects. This means that virtually any scenario, including those featuring famous personalities or historical events, can now be convincingly faked. The implications are far-reaching, affecting everything from news reporting to personal privacy.

Consider the recent surge in deepfakes; these aren't just harmless fun. While some AI creations, often dubbed "AI slop", are low-effort and purely for entertainment, deepfakes are designed to imitate real people or events with the intent to deceive. This distinction is vital when assessing the content you consume online.

Key Indicators of AI-Generated Video

Spotting an AI video often comes down to scrutinising subtle inconsistencies that human-made footage rarely contains. Here are several indicators to look out for:

Visual Glitches and Errors: AI models can struggle with maintaining perfect continuity. Look for strange anomalies, such as objects momentarily morphing, extra limbs appearing, or elements flickering in and out of existence. Shadows, reflections, and lighting should also be consistent; if something appears out of place or garbled, it's a strong sign of AI manipulation. Even text within a scene can betray an AI origin, often appearing warped or misspelled. A viral TikTok video, for example, showed a polar bear cub briefly gaining an extra paw, a classic AI slip-up.

Low-Resolution or Poor-Quality Footage: In an age where most devices record in high definition, unusually grainy or pixelated video should raise suspicion. If a "mind-blowing" event is captured with footage that looks like it's from an old camcorder, it's often a red flag. AI generators sometimes produce lower-quality outputs, especially when attempting complex scenes, which can result in a distinctly outdated aesthetic. You might also want to consult guides on how to identify AI images, as many of the visual tells cross over.

Uncanny Hyper-Realism: Conversely, some AI videos can look too perfect. Faces might appear poreless, lighting unnaturally flawless, or movements overly smooth and robotic. This "uncanny valley" effect can make subjects look almost animated rather than genuinely human. While visually impressive, this level of perfection often indicates a synthetic origin, as natural human imperfections are absent.

Oddly Slow or Dreamlike Pacing: Many AI-generated videos feature an unnatural, dreamlike quality. Camera movements can be excessively smooth, and a subtle slow-motion effect might pervade the entire clip, giving it an eerie, polished feel that deviates from typical real-world footage. These cinematic touches are often an attempt by AI to enhance realism but can inadvertently make the video appear artificial.

Audio Syncing Issues: Pay close attention to spoken dialogue. If lip movements don't perfectly align with the audio, or if the timing feels slightly off, it's a strong indicator of AI manipulation. Furthermore, AI-generated audio might lack the natural ambient sounds, echoes, or background noise you'd expect in a real environment, making the soundscape feel sterile or incomplete. A lack of sound altogether, or obviously dubbed audio, also points to fakery.

Implausible Scenarios: Finally, trust your instincts. If a video depicts something that seems too incredible, too funny, or too outrageous to be true, it very likely isn't. AI excels at creating scenarios that play on our emotions, whether it's adorable animals performing impossible feats or shocking events that defy logic. Remember the viral videos of babies walking runways? Such scenarios exploit our natural tendency to be amazed, but they often lack basic realism. This phenomenon is why many workers are finding themselves trusting AI less even as they use it more.
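Some of the tells above, such as unusually low resolution or a sterile soundscape, can be expressed as a simple checklist. The sketch below is a minimal, hypothetical illustration of that idea; the function name, thresholds, and metadata fields are assumptions for the example, not part of any real detection tool.

```python
def suspicion_flags(width: int, height: int, fps: float,
                    has_ambient_audio: bool) -> list[str]:
    """Return human-readable red flags based on basic video metadata.

    Hypothetical heuristic thresholds, chosen only to illustrate
    the checklist described in the article.
    """
    flags = []
    # Most modern phones record at 720p or better; unusually low
    # resolution on a "spectacular" clip is a red flag.
    if width * height < 1280 * 720:
        flags.append("low resolution for a modern recording")
    # Very low frame rates give footage an outdated, choppy feel.
    if fps < 24:
        flags.append("unusually low frame rate")
    # AI-generated audio often lacks natural background noise.
    if not has_ambient_audio:
        flags.append("sterile soundscape (no ambient audio)")
    return flags

# A grainy, near-silent clip trips all three checks:
print(suspicion_flags(640, 360, 20.0, False))
```

None of these flags is proof on its own; like the manual checks above, they only indicate that a clip deserves closer scrutiny.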

Detecting AI: Tools and Limitations

While vigilance is key, several tools are emerging to help identify AI-generated content. Cybersecurity firms like CloudSEK offer Deepfake Analyzers that scan video frames for signs of manipulation, often providing a "likely AI" score. Other services, such as WasItAI or AI-or-Not, allow users to upload content for analysis.

However, these tools aren't foolproof. Some sophisticated AI creations can still bypass detectors, as demonstrated when an entirely AI-generated Coca-Cola commercial fooled one such system into thinking it was human-made. Additionally, some AI platforms now include watermarks, such as a "Sora" logo, which can be an obvious giveaway if you're looking for them. As AI capabilities advance, the cat-and-mouse game between creators and detectors will undoubtedly continue. For more insights into the challenges of AI-generated content, the European Parliament's report on the impact of deepfakes on society offers further reading on the broader implications of this technology and efforts to detect it.

What strategies do you use to spot AI videos on your social feeds? Share your tips and experiences in the comments below.
