The digital landscape is increasingly populated by AI-generated content, often blurring the lines between what's authentic and what's artificial. This phenomenon, dubbed "AI slop", presents a growing challenge as generative models become more sophisticated. It's becoming harder to trust what we see online, particularly with images.
The Subtle Tells of AI Imagery
While AI image generators are constantly improving, they still leave behind subtle clues that savvy observers can spot. These aren't always immediately obvious, but a closer look often reveals the artificial nature of an image. Understanding these tells is crucial in an age where AI-generated imagery can redefine creative work and even influence news cycles.
Distorted Text and Unnatural Anatomy
One of the oldest giveaways in AI-generated images is their struggle with text. Early models were notoriously bad, rendering jumbled letters or outright gibberish. While improvements have been made, distorted or nonsensical text on signs, book covers, or clothing remains a strong indicator of AI authorship. Always zoom in and inspect text for warped characters or illogical arrangements.
Another common fault lies in anatomical accuracy, particularly with human figures.
"For whatever reason, AI models have long struggled to generate people with 10 fingers or fingers that don't melt into each other. Or maybe they're missing knuckles or even nails."
Beyond hands, look for extra limbs, impossibly long or short torsos, or facial irregularities like misaligned eyes or a single nostril. These "AI accidents" are becoming less frequent but are still a tell-tale sign. The imperfections in human rendering highlight the ongoing challenges in AI's pursuit of photorealism.
The Uncanny Valley and Hyper-Stylisation
AI-generated faces often fall into the "uncanny valley," appearing almost real but with an unsettling, synthetic quality. This can manifest as overly smooth, plasticky skin, vacant or glassy eyes, or hair that looks too perfect or too wispy. When an image feels too flawless, or gives you a vague sense that something isn't quite right, it's worth considering AI as the source.

We're also seeing a rise in hyper-stylised imagery, particularly in advertising and local business promotions. Restaurants, for instance, might use AI to generate pictures of food that are unnaturally perfect, glistening, and free from any real-world imperfections. This visual idealisation often betrays its artificial origin, as it lacks the authentic nuances of real photography. It's a prime example of how AI slop drowns out authenticity in aesthetics, just as it is drowning science in poor data.
When Chaos and Simplicity Collide
AI images can sometimes err on two extremes: overwhelming complexity or stark oversimplification.
Visual Overload
Some AI-generated images are characterised by an excessive amount of visual information. This can include:
- Strange, repeating textures
- Overly intense or hyper-detailed backgrounds
- Shadows cast at impossible angles
- Reflections and glowing lights that defy physics
- A general sense of visual noise and chaos
These images often aim for dramatic effect but end up looking overtly computer-generated, like a "fever dream" rendered by a game engine. If an image feels like it's assaulting your senses with too much going on, it's a strong contender for AI authorship.
The Loss of Detail
Conversely, AI can also strip away crucial detail, leading to an overly smooth or simplified appearance. This is often seen in "restored" or "colourised" historical photos where AI processes merge textures and flatten surfaces. A brick wall might lose its individual brick definition, appearing as a solid, smooth red surface. Leaves on trees can blur into indistinct masses, and people might look more like painted figures than actual individuals. This loss of authentic texture and granular detail is a clear sign an image has been processed or generated by AI, despite its apparent clarity.
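This "flattened texture" effect can even be quantified with a simple sharpness heuristic. The sketch below is purely illustrative (it is not how any of the detection tools mentioned in this article work): it computes the variance of a 4-neighbour Laplacian, a common measure of fine detail, which drops sharply when a surface like brickwork has been smoothed into a flat colour.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian over a greyscale image.

    Higher values mean more fine-grained texture (edges, grain);
    values near zero suggest a flat, over-smoothed surface.
    """
    g = gray.astype(float)
    # 4-neighbour Laplacian on the interior pixels: centre * -4 plus
    # its up/down/left/right neighbours.
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

# Synthetic stand-ins: noisy "brick texture" vs. an AI-flattened surface.
rng = np.random.default_rng(0)
textured = rng.integers(0, 256, (64, 64))   # lots of pixel-level detail
smoothed = np.full((64, 64), 128)           # one flat tone, no detail

print(laplacian_variance(textured) > laplacian_variance(smoothed))  # → True
```

In practice you would compare the score against a reference photo of a similar scene rather than an absolute threshold, since lighting and subject matter also affect sharpness.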
Tools to Aid Detection
Beyond visual inspection, several tools can help identify AI-generated images. Google, for example, has been integrating detection capabilities into its ecosystem. Android users can employ Circle to Search to query if an image is AI-generated. Similarly, Google Lens's "About this image" feature can offer context, including potential AI origins, especially if an image carries Google's proprietary SynthID watermark. The Google Gemini app also allows users to upload photos and directly ask if they were created with Google AI, leveraging the SynthID or Gemini's reasoning capabilities.
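A lower-tech check you can script yourself is inspecting an image's embedded metadata. Some generators (Stable Diffusion web interfaces, for example) are known to write the generation prompt into a PNG `tEXt` chunk, commonly keyed `parameters`. The sketch below parses PNG chunks using only the standard library; the key names are common conventions rather than any standard, and this metadata is trivially stripped by re-saving, so a negative result proves nothing.

```python
import struct
import zlib

# Keys that some generators are known to use; a heuristic, not a standard.
SUSPICIOUS_KEYS = {b"parameters", b"prompt", b"workflow"}

def png_text_chunks(data: bytes):
    """Yield (keyword, text) pairs from tEXt chunks of a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            yield key, text
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)

def looks_generated(data: bytes) -> bool:
    """True if any tEXt chunk uses a key associated with AI generators."""
    return any(key in SUSPICIOUS_KEYS for key, _ in png_text_chunks(data))

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Build a PNG chunk: length, type, data, CRC over type + data."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Demo: a minimal synthetic PNG carrying a Stable-Diffusion-style prompt.
demo_png = (b"\x89PNG\r\n\x1a\n"
            + make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
            + make_chunk(b"tEXt", b"parameters\x00a cat, 50 steps")
            + make_chunk(b"IEND", b""))

print(looks_generated(demo_png))  # → True
```

For real-world use, `open(path, "rb").read()` gives you the bytes to pass in; provenance standards like C2PA embed far richer (and signed) metadata, but require dedicated tooling to verify.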
While these tools are handy, they aren't foolproof. Sophisticated fakes can still slip through. Research by The New York Times, for example, highlighted that even leading AI detection tools can misidentify AI-generated images as real, illustrating the ongoing arms race between AI generation and detection technologies. Think you have a keen eye for AI images? Online quizzes that pit real photos against generated ones are a good way to test yourself. For those interested in deeper dives into AI's impact, articles like Does Business AI Really Give Back Our Time or How to Actually Think With AI (Not Just Ask It Questions) provide further context on the broader implications of AI in our daily lives.
What strategies do you use to spot AI-generated images online? Share your tips in the comments below.