AI in ASIA

AI Faces: Flawless, Symmetrical, Unsettling

AI faces are alarmingly realistic. Can you tell genuine from fake? Learn how to spot them and stay ahead!

Anonymous · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

AI-generated faces are now so realistic that even 'super recognisers' struggle to differentiate them from genuine photographs.

A new study reveals that the average person often mistakes synthetic faces for real ones, a phenomenon termed 'hyperrealism'.

Just five minutes of targeted training can significantly improve an individual's ability to identify AI-generated fakes by focusing on common rendering flaws and unnatural proportionality.

Who should pay attention: General public | Social media users | Researchers | AI developers

What changes next: Further development of AI detection methods is expected.

Artificial intelligence is becoming increasingly adept at generating hyperrealistic images, particularly faces, bringing with it both fascination and concern. A recent study highlights just how convincing these AI-created visages are, revealing that even individuals with exceptional facial recognition abilities struggle to differentiate them from genuine photographs.

Published in Royal Society Open Science under the title "Super-recognisers are no better than average at detecting AI-generated faces", the research indicates that "super recognisers" – an elite group known for their superior facial processing skills – performed no better than chance when identifying AI-generated faces. Surprisingly, the average person fared even worse, often mistaking synthetic faces for real ones. This phenomenon, where AI-generated content appears more "real" than actual photographs, is known as "hyperrealism". Many are already grappling with the implications of this technology, especially concerning the rise of AI "slop" eroding the social media experience.

The Power of Brief Training

Encouragingly, the study, led by Katie Gray, an associate professor in psychology at the University of Reading, found that a mere five minutes of targeted training significantly improved individuals' ability to spot these fakes. The training focused on common rendering flaws in AI-generated images, such as unusual hairlines, unnatural skin textures, or even the presence of a middle tooth. It also highlighted that AI-produced faces tend to exhibit a degree of proportionality rarely seen in human faces.

Crucially, this short training session boosted the accuracy of both super recognisers and typical participants by similar margins. While super recognisers started with a slight advantage, the comparable improvement suggests they might be using different cues beyond simple rendering errors to detect fakes. This insight could be pivotal in developing more sophisticated detection methods.

Human-in-the-Loop Solutions

Gray believes that understanding the unique skills of super recognisers could pave the way for more effective AI detection strategies in the future. The study's authors propose a "human-in-the-loop" approach, combining AI detection algorithms with the enhanced capabilities of trained super recognisers. This echoes a broader sentiment across the industry, where human oversight is seen as vital for AI development, as highlighted by OpenAI's view that human adoption, not new models, is key to AGI.

The proliferation of these deepfake faces is primarily due to advanced AI algorithms called generative adversarial networks (GANs). These systems pit two neural networks against each other: a generator creates images based on real-world data, while a discriminator evaluates their authenticity. Through iterative refinement, the generator becomes remarkably skilled at producing images that fool the discriminator – and, consequently, humans. For those interested in how such images are made, exploring tools like the new ChatGPT Images can offer some insight.
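The adversarial loop described above can be sketched in miniature. This is a hypothetical toy, not the study's method or a real face generator: the "generator" here is a one-parameter affine map trying to mimic samples from a 1-D Gaussian, and the "discriminator" is a logistic classifier. Real GAN face generators use deep convolutional networks, but the training dynamic – two models improving against each other – is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (illustrative only):
#   generator     g(z) = a*z + b      maps uniform noise to "fake" samples
#   discriminator d(x) = sigmoid(w*x + c)  scores how "real" a sample looks
a, b = 1.0, 0.0    # generator parameters
w, c = 0.1, 0.0    # discriminator parameters
lr = 0.01
real_mean, real_std = 4.0, 1.25   # the "real data" distribution

for step in range(5000):
    z = rng.uniform(-1.0, 1.0, size=64)
    real = rng.normal(real_mean, real_std, size=64)
    fake = a * z + b

    # Discriminator step: gradient ascent on log d(real) + log(1 - d(fake)),
    # i.e. learn to call real samples real and fake samples fake.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1.0 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1.0 - d_real) - np.mean(d_fake))

    # Generator step: gradient ascent on log d(fake),
    # i.e. nudge fake samples toward whatever fools the discriminator.
    d_fake = sigmoid(w * fake + c)
    signal = (1.0 - d_fake) * w
    a += lr * np.mean(signal * z)
    b += lr * np.mean(signal)

print(f"generator mean drifted from 0.0 towards {b:.2f} "
      f"(real data mean: {real_mean})")
```

Because the generator is only ever rewarded for fooling the discriminator, its output drifts toward the real distribution without ever seeing it directly – the same pressure that makes GAN faces hard to tell from photographs.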

The Nuances of Detection

In the initial experiment, participants were given 10 seconds to decide if a displayed face was real or AI-generated. Super recognisers correctly identified only 41% of AI faces, performing no better than random guessing. Typical individuals managed an even lower 30%. Both groups also frequently misidentified real faces as fake, with rates of 39% and 46% respectively.

After the five-minute training, which included examples of AI errors and real-time feedback, detection accuracy improved considerably. Super recognisers correctly identified 64% of fakes, while typical participants reached 51%. Interestingly, the rate of falsely identifying real faces as fake remained largely consistent, suggesting the training primarily enhanced the detection of AI-specific anomalies.
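The authors' point – that training improved detection of AI-specific anomalies rather than simply making people cry "fake" more often – can be illustrated with a standard signal-detection sensitivity index (d'). The article does not report d'; this is a back-of-the-envelope reconstruction from the rounded percentages above for super recognisers, using only the Python standard library.

```python
from statistics import NormalDist

# z-transform of a proportion via the standard normal inverse CDF
ppf = NormalDist().inv_cdf

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate).

    Higher d' means genuine discrimination between AI and real faces,
    independent of how often a participant says "fake" overall.
    """
    return ppf(hit_rate) - ppf(false_alarm_rate)

# Super recognisers (figures as reported, rounded): 41% of AI faces
# flagged before training, 64% after; real faces wrongly flagged as
# fake at roughly 39% in both phases.
before = d_prime(0.41, 0.39)
after = d_prime(0.64, 0.39)
print(f"d' before training: {before:.2f}, after: {after:.2f}")
```

With an unchanged false-alarm rate, the jump in hit rate translates into a genuine sensitivity gain (from near zero, i.e. chance, to clearly above it), rather than a shift in response bias.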

One important takeaway from the study is the advice to slow down and carefully scrutinise facial features when attempting to discern authenticity. Trained participants took slightly longer to examine images, suggesting that deliberate observation is key. However, it's worth noting that the study tested participants immediately after training, so the long-term effectiveness of such brief interventions remains to be fully explored, as pointed out by Meike Ramon, a professor of applied data science.

As AI continues to create increasingly convincing digital content, the ability to discern real from fake becomes ever more critical. This research underscores the ongoing challenge but also offers a pragmatic path forward: targeted education and the strategic integration of human expertise in the battle against visual deception. For more on the broader societal implications, consider the discussion around the danger of anthropomorphising AI.

What are your thoughts on fighting AI-generated misinformation, particularly with deepfakes? Share your strategies and concerns in the comments below.


Latest Comments (4)

Yuki Tanaka (@yukit) · 24 January 2026

the finding that "super recognizers" struggled and then improved with brief training is interesting. it suggests that even in tasks relying on perceptual expertise, specific feature-based training can be highly effective. i wonder if the improvements were sustained over time, or if further research has explored the generalization of this learned discrimination to novel AI face datasets beyond the study's specific generation models.

Zhang Yue (@zhangy) · 18 January 2026

This finding about super-recognisers is congruent with what we see in some adversarial attacks on facial recognition models, where even very small perturbations can fool state-of-the-art networks. The "hyperrealism" term is apt here, similar to what DeepSeek-V2 might strive for in image generation.

Maggie Chan (@maggiec) · 14 January 2026

Honestly, this "brief training" finding is probably the most useful insight for my team right now. We're building compliance tools for financial orgs, and deepfakes are a constant red flag for identity verification. Five minutes of training to improve detection sounds great in theory, but putting that into a scalable, automated system that needs to verify thousands of identities daily? That's the real challenge. It's not about an individual's "super recogniser" skills, it's about robust, always-on AI detection. The human-in-the-loop part is good, but our clients need near real-time, high-volume solutions. It still feels like we're patching holes rather than building a solid dam.

Maggie Chan (@maggiec) · 13 January 2026

the "brief training" part is interesting. we’re looking at something similar for our compliance AI, teaching it to spot subtle anomalies in documents. if a five-minute human training can improve detection so much for faces, imagine what targeted data and reinforcement learning could do for our models identifying fraud.
