
    AI 'Slop' Eroding Social Media Experience

    Is AI "slop" ruining your social media feed? Discover how low-quality content and deepfakes are eroding our digital experience.

    Anonymous
    4 min read | 22 December 2025

    AI Snapshot

    The TL;DR: what matters, fast.

    Social media platforms are increasingly deluged with low-quality, AI-generated content and deepfakes, making it difficult for users to distinguish reality.

    Generative AI tools, despite their technological advancements, contribute to a flood of artificial content that isolates users rather than connecting them.

    Experts suggest social media companies prioritise showcasing AI capabilities over user experience, pushing platforms further from their original purpose of fostering genuine connection.

    Who should pay attention: Social media users | Social media platforms | AI developers

    What changes next: Debate is likely to intensify regarding the regulation of AI-generated content.

    The digital landscape, once a hub for connection, is increasingly fractured by low-quality AI-generated content and deceptive deepfakes. Social media, designed to bring us closer, now often leaves users feeling isolated and detached from reality. This shift is accelerating rapidly with the proliferation of advanced generative AI tools.

    The Rise of AI-Generated Content

    Generative AI platforms, such as OpenAI's Sora and Midjourney, allow users to create sophisticated videos and images from simple text prompts. While these tools represent remarkable technological progress, they also present significant ethical challenges. The sheer volume of AI-generated content, often referred to as "AI slop," floods social media feeds with everything from animals exhibiting uncanny human traits to public figures appearing to say or do things they never did. This onslaught makes it increasingly difficult for individuals to discern fact from fiction online. We've previously discussed how AI slop is choking AI progress in research, and its impact on social media is equally concerning.

    Alexios Mantzarlis, director of Cornell Tech's Security, Trust and Safety Initiative, points out that social media's primary aim has shifted. "The cynical answer is that social media is now aimed at keeping you connected to the tool, rather than to each other," he explains. He argues that tech giants are prioritising showcasing their AI capabilities to boost stock prices, often at the expense of user experience. This echoes concerns about AI chatbot giants restricting free access.

    Authenticity Versus Artificiality

    The initial appeal of a platform like TikTok, for instance, lay in its perceived authenticity compared with the polished, often unrealistic content prevalent on Instagram. However, even these newer platforms are now saturated with AI-generated material. This heightened artificiality clashes directly with the genuine human connection many users seek online.

    Before generative AI, social media feeds were already evolving; updates from friends and family were often overshadowed by content from high-profile creators. While this can sometimes introduce users to new interests, it also distances them from their personal networks. With AI now in the mix, any semblance of authenticity is further eroded. Users are not only contending with the insecurity sparked by retouched images of real people, but also with entirely AI-generated vacation photos or AI influencers propagating unattainable beauty standards. "Before, we had the problem of unrealistic body expectations," Mantzarlis highlights, "And now we're facing the world of unreal body expectations."

    The Regulatory Challenge

    Social media companies like Meta and TikTok have publicly committed to labelling AI-generated content and banning harmful posts, such as deepfakes depicting crisis events or the unauthorised use of private individuals' likenesses. TikTok, for example, is trialling a feature that allows users to control the amount of AI-generated content in their feeds.

    However, government regulation struggles to keep pace with the rapid advancements in AI technology. Factors such as political gridlock and lobbying efforts from tech firms contribute to this lag. In the absence of robust external regulation, companies are left to enforce their own policies, an uphill battle given the sheer volume of content. These efforts often appear insufficient, leading to a breakdown of trust. An August study by Raptive, a media company, revealed that when people merely suspected content was AI-generated, 48% found it less trustworthy, and 60% felt a lower emotional connection to it.

    "There will be lots of junk and bumps in the road and problems, but maybe this can create amazing new forms of information-sharing and entertainment," – Paul Bannister, Chief Strategy Officer at Raptive.

    Paul Bannister, Chief Strategy Officer at Raptive, offers a more optimistic perspective, suggesting that AI tools could empower more individuals to become creators and broaden the scope of creative expression. He acknowledges the "junk and bumps in the road" but believes in the potential for new forms of information-sharing and entertainment. This optimism echoes the broader push to build AI skills through new OpenAI courses.

    However, Mantzarlis also warns of a darker side: "It's going to be used for further exacerbating tensions, for confirming people's pre-existing biases." Social media already acts as an echo chamber, amplifying harmful viewpoints and misinformation. The widespread ability to easily manufacture and disseminate personal realities through AI could exacerbate societal divisions.

    The demand for platforms to offer users greater control over the AI content they encounter is growing. Many would prefer to minimise or even eliminate AI-generated content from their feeds, seeking a return to more genuine online interactions. For further reading on the societal impact of AI, the European Parliament has published a comprehensive analysis of AI and its impact on human behaviour.

    Latest Comments (4)

    Nicholas Chong (@nickchong_dev) | 10 January 2026

    This is a real worry, lah. With so much AI-generated rubbish flooding our feeds, how can we even trust what we see anymore? It’s making it tougher to discern genuine content from all this artificial guff.

    Luis Torres (@luis_t_ph) | 31 December 2025

    This article really hits home; it's so exasperating sometimes, isn't it? I totally get the sentiment about AI 'slop' clogging up our feeds. I've noticed it myself, especially with those oddly generated product reviews or repetitive 'inspirational' posts. But honestly, while deepfakes are a proper concern, I wonder if the bigger issue isn't just the AI itself, but how platforms prioritise *any* content that generates engagement, AI or not. It feels like the algorithms are the real culprits creating this digital rubbish heap; AI just makes more of it, quicker. It's a proper headache trying to sift through it all.

    Elena Navarro (@elena_n_ai) | 26 December 2025

    So true! The deepfakes are really disturbing, messing with what's real. How do we even combat that deluge of poorly made stuff?

    Zachary Chia (@zachchia) | 25 December 2025

    Spot on. This whole "content farm" mentality, just churnin' out rubbish, it's making the whole digital space a bit of a mess, innit?
