
AI in ASIA

Spotify cuts 75 million tracks as AI music flood forces streaming rethink

Spotify has cut 75 million tracks as part of a new AI music policy aimed at tackling fraud and spam. This piece explores how AI is reshaping streaming economics, the role of distributors, and the implications for artists and listeners across Asia and beyond.

Anonymous · 5 min read

The streaming giant is tightening its AI rules, removing millions of “spammy” tracks, and signalling a shift in how digital platforms manage music discovery.

- Spotify has removed 75 million tracks as part of a new crackdown on spam, fraud, and low-value AI-generated music.
- The company’s fresh policies target impersonation, fraudulent submissions, and royalty gaming schemes, aligning it with Deezer’s tougher AI stance.
- This recalibration highlights the growing clash between human artistry and AI-enabled content flooding, with implications for artists, distributors, and the wider industry.

A catalogue that grew too quickly

Spotify’s announcement marks one of the largest catalogue trims in the history of streaming. By ejecting 75 million “spammy” tracks, the company has cut deeply into a catalogue that had ballooned to hundreds of millions of tracks worldwide. Deezer estimated earlier this year that roughly 150,000 new tracks are uploaded every day, and that nearly a third of these are fully AI-generated.

The raw scale is staggering. The CD era at its peak saw tens of thousands of new releases annually. Now, the firehose of music pouring into streaming platforms is several orders of magnitude larger, with Luminate tracking data on over 200 million tracks. What was once sold as democratic access to distribution – anyone with a laptop could be the next Taylor Swift – has now reached a breaking point.

Labels, platforms and shifting incentives

Major labels were the first to complain. Universal Music Group’s chief executive Sir Lucian Grainge warned in 2023 about a flood of “functional, lower-quality content” clogging streaming services. The offending material ranged from 30-second ambient loops designed to trigger royalty payments to endless streams of algorithmically generated background noise.

Spotify responded by raising the threshold: tracks under two minutes no longer qualify for royalties if categorised as functional recordings. Now, platforms themselves are becoming the gatekeepers. Spotify’s new policies mirror Deezer’s moves to exclude AI slop from discovery algorithms and to deny royalty payments for tracks identified as machine-made.

The economic logic is simple. Fraudulent or spam-heavy tracks dilute the royalty pool for legitimate artists, distort recommendation engines, and create storage and computational overhead. Worse, they may expose platforms to legal scrutiny as law enforcement begins to treat streaming fraud as a financial crime.

AI as both enabler and threat

AI is not just producing torrents of near-identical tracks. It is also creating convincing fake metadata: bogus artists, false ownership claims, and fabricated distributor information, all of which enables fraud at industrial scale. For distributors and platforms, this poses risks of liability and operational cost.

At the same time, AI is far from outlawed. Spotify has not banned AI-generated music, instead working with the industry standards body DDEX to create new metadata fields showing which elements of a track were AI-generated. The company plans to display this information to listeners, though it has not yet said whether it will influence royalties. Deezer, in contrast, has gone further, stripping AI tracks from recommendations and refusing to pay out on them altogether.
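DDEX has not yet published final field names for this disclosure standard, so the sketch below is purely illustrative: it models the kind of per-track AI-contribution record described above, using hypothetical field names (`vocals`, `instrumentation`, `lyrics`) and values invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-track AI-disclosure metadata, in the spirit of
# the DDEX fields Spotify describes. All field names are invented.
@dataclass
class AIContribution:
    # Each element of the track is marked "human", "ai", or "mixed".
    vocals: str = "human"
    instrumentation: str = "human"
    lyrics: str = "human"

    def is_fully_human(self) -> bool:
        # True only when no element involved AI generation.
        return all(v == "human" for v in (self.vocals, self.instrumentation, self.lyrics))

@dataclass
class TrackMetadata:
    title: str
    artist: str
    ai: AIContribution = field(default_factory=AIContribution)

# A track with AI vocals and partly AI-written lyrics:
track = TrackMetadata("Example Song", "Example Artist",
                      AIContribution(vocals="ai", lyrics="mixed"))
# A listener-facing badge or a discovery filter could key off this flag.
fully_human = track.ai.is_fully_human()
```

Whether such a flag would ever feed into royalty calculations is exactly the open question the article raises; here it only drives a display decision.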

The contrast underscores an unresolved debate: should AI music be identified, deprioritised, demonetised, or fully embraced? For a deeper dive into the economics of music streaming and the impact of AI, see reports from organisations such as the IFPI (International Federation of the Phonographic Industry).

The quiet role of distributors

Streaming services are only one piece of the chain. Digital distributors – from TuneCore and DistroKid to Symphonic and CD Baby – are the front door for most independent artists. These companies face their own pressures. The more tracks they upload, the more revenue they make. Yet they must also maintain credibility with artists and platforms while competing in a crowded, price-sensitive market.

The non-profit Music Fights Fraud Alliance, formed in June 2023, unites distributors and platforms such as Amazon Music, Spotify, and YouTube Music in a coordinated push against manipulation. But it remains unclear whether distributors will follow Spotify’s lead with their own AI restrictions. Their stance will be critical in shaping the future flow of music into global catalogues.

The new reality for artists and listeners

For artists, the message is clear: the dream of endless access to platforms is over. Uploading a loop or an AI-driven experiment may no longer guarantee visibility or royalties. Instead, quality, identity, and verification are becoming the new currency of digital distribution.

For listeners, the upside is potentially cleaner discovery feeds and a better chance of finding human-made music. Yet the genie is out of the bottle: AI tools will remain central to music creation, even if their output is filtered at the point of distribution. The true test for Spotify and its peers will be whether they can balance human creativity with machine-driven volume.

Spotify’s bold cull may be only the beginning. As AI floods the pipeline with ever more content, the industry must decide whether to police, partner, or profit. Should streaming platforms treat AI as a co-creator to be integrated, or as a disruptive force to be ring-fenced?

Where do you think the line should be drawn?


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


Latest Comments (4)

Benjamin Ng (@benng) · 27 October 2025

This tracks with what we're seeing in edtech with LLM outputs. The sheer volume of synthetic content, even if initially "low quality," still creates a discovery problem. We're actively building better content filtering for our AI tutors to avoid similar issues with spammy or repetitive learning modules.

Tony Leung (@tonyleung) · 16 October 2025

75 million tracks cut, that's a massive clean-up. Sounds like Spotify finally realized the "anyone can be Taylor Swift" model was unsustainable without proper vetting. Reminds me of how quickly some fintech platforms grew here in HK before the SFC stepped in. Quality control always catches up to quantity.

Rachel Foo (@rachelf) · 13 October 2025

75 million tracks is a crazy number to cut. reminds me a bit of getting our internal data sets cleaned up for our AI projects. everyone thinks "oh, just feed it data," but then you realize half of it is junk, or mislabeled, or just plain old redundant. and then the business side is asking why it's taking so long. "can't you just use it?" they say. like it's magic. feels like Spotify just hit that "garbage in, garbage out" wall on a massive scale, and now they're playing catch-up.

Natalie Okafor (@natalieok) · 3 October 2025

The analogy to "functional, lower-quality content" and its impact on royalty payments is particularly relevant from a healthcare AI angle. We're constantly battling with validating the utility of AI models, ensuring they're not just generating "noise" that clogs up clinical systems or misdirects resources. Just as Spotify is now acting as a gatekeeper against royalty gaming schemes in music, we're seeing increasing regulatory scrutiny to prevent "AI snake oil" in healthcare. The risk of unintended consequences, whether it's algorithmically generated background noise or AI models making erroneous medical suggestions, is a shared challenge. Patient safety, much like artist fair play, hinges on effective gatekeeping and clear standards.
