
AI in ASIA

AI 'Slop' Eroding Social Media Experience

Is AI "slop" ruining your social media feed? Discover how low-quality content and deepfakes are eroding our digital experience.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Social media platforms are increasingly deluged with low-quality, AI-generated content and deepfakes, making it difficult for users to distinguish reality.

Generative AI tools, despite their technological advancements, contribute to a flood of artificial content that isolates users rather than connecting them.

Experts suggest social media companies prioritise showcasing AI capabilities over user experience, pushing platforms further from their original purpose of fostering genuine connection.

Who should pay attention: Social media users | Social media platforms | AI developers

What changes next: Debate is likely to intensify regarding the regulation of AI-generated content.

The digital landscape, once a hub for connection, is increasingly fractured by low-quality AI-generated content and deceptive deepfakes. Social media, designed to bring us closer, now often leaves users feeling isolated and detached from reality. This shift is accelerating rapidly with the proliferation of advanced generative AI tools.

The Rise of AI-Generated Content

Generative AI platforms, such as OpenAI's Sora and Midjourney, allow users to create sophisticated videos and images from simple text prompts. While these tools represent remarkable technological progress, they also present significant ethical challenges. The sheer volume of AI-generated content, often referred to as "AI slop," floods social media feeds with everything from animals exhibiting uncanny human traits to public figures appearing to say or do things they never did. This onslaught makes it increasingly difficult for individuals to discern fact from fiction online. We've previously discussed how AI slop is choking AI progress in research, and its impact on social media is equally concerning.

Alexios Mantzarlis, director of Cornell Tech's Security, Trust and Safety Initiative, points out that social media's primary aim has shifted. "The cynical answer is that social media is now aimed at keeping you connected to the tool, rather than to each other," he explains. He argues that tech giants are prioritising showcasing their AI capabilities to boost stock prices, often at the expense of user experience. This echoes concerns about AI chatbot giants restricting free access.

Authenticity Versus Artificiality

The initial appeal of platforms like TikTok was their perceived authenticity compared with the polished, often unrealistic content prevalent on Instagram. However, even these newer platforms are now saturated with AI-generated material. This heightened artificiality clashes directly with the genuine human connection many users seek online.

Before generative AI, social media feeds were already evolving; updates from friends and family were often overshadowed by content from high-profile creators. While this can sometimes introduce users to new interests, it also distances them from their personal networks. With AI now in the mix, any semblance of authenticity is further eroded. Users are not only contending with the insecurity sparked by retouched images of real people, but also with entirely AI-generated vacation photos and AI influencers propagating unattainable beauty standards. "Before, we had the problem of unrealistic body expectations," Mantzarlis highlights, "and now we're facing the world of unreal body expectations."

The Regulatory Challenge

Social media companies like Meta and TikTok have publicly committed to labelling AI-generated content and banning harmful posts, such as deepfakes depicting crisis events or the unauthorised use of private individuals' likenesses. TikTok, for example, is trialling a feature that allows users to control the amount of AI-generated content in their feeds.

However, government regulation struggles to keep pace with the rapid advancements in AI technology. Factors such as political gridlock and lobbying efforts from tech firms contribute to this lag. In the absence of robust external regulation, companies are left to enforce their own policies, an uphill battle given the sheer volume of content. These efforts often appear insufficient, leading to a breakdown of trust. An August study by Raptive, a media company, revealed that when people merely suspected content was AI-generated, 48% found it less trustworthy, and 60% felt a lower emotional connection to it.

"There will be lots of junk and bumps in the road and problems, but maybe this can create amazing new forms of information-sharing and entertainment." – Paul Bannister, Chief Strategy Officer at Raptive

Paul Bannister, Chief Strategy Officer at Raptive, offers a more optimistic perspective, suggesting that AI tools could empower more individuals to become creators, broadening the scope of creative expression. He acknowledges the "junk and bumps in the road" but believes in the potential for new forms of information sharing and entertainment. This perspective aligns with how AI skills are being built with new OpenAI courses.

However, Mantzarlis also warns of a darker side: "It's going to be used for further exacerbating tensions, for confirming people's pre-existing biases." Social media already acts as an echo chamber, amplifying harmful viewpoints and misinformation. The widespread ability to easily manufacture and disseminate personal realities through AI could exacerbate societal divisions.

The demand for platforms to offer users greater control over the AI content they encounter is growing. Many would prefer to minimise or even eliminate AI-generated content from their feeds, seeking a return to more genuine online interactions. For further reading on the societal impact of AI, the European Parliament has published a comprehensive analysis of AI and its impact on human behaviour.




Latest Comments (2)

Jasmine Koh (@jasminek) · 13 January 2026

Mantzarlis' point about platforms prioritizing tool connection over human connection resonates with the shift we're seeing. This certainly aligns with observations regarding platform design choices, where engagement metrics often supersede ethical considerations for content authenticity, which could be examined through frameworks like Value Sensitive Design.

Marcus Thompson (@marcust) · 4 January 2026

Totally agree with Mantzarlis about social media shifting to keeping you connected to the tool. We've seen similar patterns internally when testing new dev tools. If the tool gets too complex and starts generating its own "slop" it just pulls focus away from the actual work and people shut it down.
