
Spotify has just carried out one of the most aggressive cleanups in streaming history, scrubbing roughly 75 million AI-driven and spam tracks from its catalog in a single enforcement sweep. The move instantly reshaped what listeners find when they hit play, and it has left a huge swath of small creators wondering whether they are collateral damage in a war on synthetic music. At the same time, the crackdown signals that the era of unregulated AI uploads is over, and that the next phase of streaming will be defined by how platforms police the line between human artistry and machine-made noise.

How Spotify’s AI purge reached 75 million tracks

The scale of Spotify’s intervention is staggering: enforcement tied to its new AI rules has removed more than 75 million “spammy” or synthetic tracks, a figure that rivals the size of entire music services. One widely shared breakdown describes how Spotify “just nuked 75 million fake songs” and frames the sweep as a direct response to AI tools that can churn out thousands of ultra-short clips designed to game recommendation systems and royalty pools. The same wave of enforcement is echoed in posts noting that Spotify “just wiped out 75 m AI tracks in a single sweep,” stressing that these were not harmless curiosities but industrial-scale uploads that flooded playlists with seconds-long loops and mismatched content.
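To make concrete what "gaming the royalty pool" with ultra-short uploads looks like, here is a minimal, purely hypothetical heuristic of the kind a platform might use to flag accounts that bulk-upload very short tracks. The thresholds, function name, and data shape are all illustrative assumptions, not Spotify's actual detection system:

```python
# Hypothetical spam heuristic: flag uploaders whose catalogs are dominated
# by very short tracks uploaded in bulk. All thresholds are illustrative
# assumptions, not Spotify's published rules.
from collections import defaultdict

MIN_DURATION_SEC = 60    # tracks shorter than this resemble royalty-farming loops
MAX_SHORT_SHARE = 0.8    # share of short tracks that triggers review
MIN_BATCH_SIZE = 50      # ignore small catalogs to avoid punishing hobbyists

def flag_spam_uploaders(uploads):
    """uploads: list of (uploader_id, duration_sec) tuples.
    Returns uploader ids whose large catalogs are mostly ultra-short tracks."""
    catalog = defaultdict(list)
    for uploader, duration in uploads:
        catalog[uploader].append(duration)

    flagged = []
    for uploader, durations in catalog.items():
        if len(durations) < MIN_BATCH_SIZE:
            continue  # small catalogs are out of scope for this heuristic
        short = sum(1 for d in durations if d < MIN_DURATION_SEC)
        if short / len(durations) >= MAX_SHORT_SHARE:
            flagged.append(uploader)
    return flagged
```

A real system would combine many more signals (audio fingerprints, upload velocity, play-farming patterns), but even this sketch shows why legitimate micro-songs can get caught in the same net as spam loops: duration alone cannot tell them apart.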

Reporting on the company’s internal numbers reinforces that this is not a rounding error but a structural reset of the catalog. One detailed account notes that Spotify removed more than 75 million spam tracks in roughly a year, a volume that rivals its legitimate catalog of about 100 million songs and that had been diluting payments to real artists. Another analysis of Spotify’s policy rollout explains that the company’s AI governance push is explicitly tied to this purge, with more than 75 million spammy tracks already taken down as part of a broader effort to clean up its royalty system and protect rights holders from automated abuse.

The three pillars of Spotify’s new AI governance

Behind the dramatic deletions sits a more methodical framework that Spotify now presents as its AI rulebook. The company has articulated three core pillars for how it will govern artificial intelligence on the platform: impersonation control, spam prevention, and transparency around AI use. In practice, that means Spotify is no longer treating AI uploads as a novelty but as a regulated class of content, with clear expectations for how creators label synthetic material and strict boundaries on what is allowed when it comes to copying voices or flooding the system with machine-generated noise.

One detailed breakdown of the policy shift explains that Spotify’s new framework rests on those three pillars and ties them directly to the removal of more than 75 million spammy tracks. Another report describes how Spotify rolled out new filters and disclosure rules for AI content, explicitly framed around trust, transparency, and tighter impersonation controls, with the company promising that its systems can now detect and remove problematic AI tracks more quickly. Together, these moves show a platform trying to codify AI not as a free-for-all, but as a space governed by specific obligations and enforcement tools.

Inside the new impersonation and disclosure rules

The most emotionally charged part of Spotify’s AI reset is its stance on impersonation, which cuts to the heart of what artists fear about synthetic audio. Spotify has introduced a new impersonation policy that spells out how it will handle claims about AI voice cloning and deepfake tracks, making it clear that creators cannot use generative tools to mimic artists, public figures, or other recognizable voices without permission. The company is also leaning on disclosure, requiring that AI-generated content be clearly labeled so that listeners, rights holders, and advertisers know when a track is synthetic and when it is human-performed.

In its own policy language, Spotify announces a package of measures that includes a dedicated impersonation rule, new reporting tools for artists, and a strengthened music spam filter, all presented as a way of strengthening AI protections for artists and songwriters. A separate explainer on the policy shift spells out what is changing in more detail, noting that Spotify now bans training models on copyrighted music without consent, requires that AI training data be ethically sourced and disclosed, and sets expectations for how synthetic vocals and compositions must be labeled. Together, these rules are designed to give human artists a clearer path to challenge deepfakes and to ensure that AI tools are used with consent rather than as a shortcut around it.

Why Spotify says the purge protects “real” artists

Spotify is not shy about the stakes it sees in this crackdown, framing AI abuse as one of the most urgent threats facing the music business. Company leaders argue that a flood of synthetic tracks does not just clutter search results, it actively siphons money away from working musicians by diverting plays and royalties to automated uploads that cost almost nothing to produce. In that view, the 75 million track purge is less a symbolic gesture and more a necessary reset of the economic baseline on which streaming rests.

One detailed report on the policy rollout notes that problems surrounding AI have become one of the industry’s most pressing issues, and that Spotify has acknowledged the need to protect artists from having their work scraped for training or their voices cloned without consent. Another analysis points out that the more than 75 million AI-generated spam tracks removed had been diluting payments to legitimate artists and, in some cases, were designed to impersonate them without permission. A separate breakdown of the enforcement push describes how Spotify removed 75 million spammy songs and cracked down on AI use by “bad actors,” explicitly linking the purge to a goal of stopping royalty-diversion schemes that exploit the platform’s payout formulas.

Why smaller creators are panicking

For independent artists and bedroom producers, the same enforcement wave that targets industrial-scale spam can feel like a blunt instrument. Many of the tools that bad actors use to flood Spotify with synthetic clips are the same ones that legitimate creators rely on to experiment with AI-assisted production, from vocal synthesis to generative backing tracks. When the platform suddenly deletes 75 million tracks and tightens its filters, it is not always obvious which side of the line a given upload will fall on, and that uncertainty is driving a wave of anxiety among smaller creators who fear losing their catalogs overnight.

That tension is visible in creator-focused discussions where users share clips titled along the lines of “Spotify just deleted 75M fake songs” and debate whether the company is “saving music” or overreaching. One widely circulated video warns that the same detection systems that catch ultra-short spam loops could also misclassify experimental or heavily AI-assisted tracks from independent artists. Another post that went viral on music TikTok and Instagram emphasizes that many of the deleted uploads were ultra-short clips and mismatched content, but leaves creators to wonder how the platform will distinguish between malicious spam and legitimate micro-songs or sound art. That ambiguity is why so many smaller artists describe the new AI regime as both necessary and terrifying.

How the new rules reshape Spotify’s algorithm and discovery

Beyond the raw numbers, the AI purge is already changing how Spotify’s recommendation engine behaves. With tens of millions of low-quality or synthetic tracks removed, the algorithm has a cleaner pool of music to draw from, which should, in theory, make it easier for genuine artists to surface in personalized playlists. At the same time, the platform has been quietly adjusting how its systems weigh listening behavior, giving more weight to sustained engagement and less to quick skips or background noise, a shift that dovetails with the crackdown on ultra-short AI spam.

One in-depth editorial on the platform’s recommendation system explains that Spotify significantly changed its approach to recommendations in 2025, moving away from pure play counts and toward metrics that capture how deeply listeners engage with tracks and artists. Where the algorithm once rewarded volume, it now favors consistency and listener retention, which makes it harder for 30-second AI loops to dominate discovery. Another analysis of year-end listening patterns points out that rather than simply showcasing the most-streamed artists and songs, Spotify Wrapped 2025 highlights a decline in overall streaming volume and a disconnect between what the algorithm pushes and what listeners feel attached to, suggesting that the AI cleanup is part of a broader recalibration of how discovery should work.
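The shift from rewarding raw play counts to rewarding retention can be illustrated with a small, speculative scoring sketch. Spotify's real ranking formula is not public; the function below, its weights, and its skip threshold are assumptions made up for illustration only:

```python
# Hypothetical engagement-weighted score: sustained listens count for more
# than raw play volume, and quick skips actively hurt. The 0.1 skip
# threshold and 0.5 skip penalty are illustrative assumptions, not
# Spotify's published formula.

def engagement_score(completion_rates, skip_threshold=0.1):
    """completion_rates: fraction of each stream's duration the listener
    actually heard (0.0 to 1.0), one entry per play."""
    if not completion_rates:
        return 0.0
    retained = sum(r for r in completion_rates if r > skip_threshold)
    skips = sum(1 for r in completion_rates if r <= skip_threshold)
    # Average retention, with a penalty for skip-heavy play farming.
    return (retained - 0.5 * skips) / len(completion_rates)
```

Under a metric like this, a spam loop that racks up thousands of instantly skipped plays scores worse than an album track with a tenth of the streams but high completion, which is exactly the behavior the editorial describes.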

What artists are saying about Spotify’s AI safeguards

Among professional musicians, the reaction to Spotify’s AI crackdown is complicated but leans toward cautious support. Many artists have spent the past year watching their names and voices appear in AI-generated tracks they never authorized, and they see the new impersonation rules as a long-overdue line in the sand. At the same time, there is frustration that the platform is moving faster to police AI uploads than to fix long-standing issues around low per-stream payouts and opaque playlisting, which leaves some creators feeling that the company is protecting its brand more than their livelihoods.

One social clip that circulated widely among producers opens with the line “Spotify just updated their AI policies. What do you all think?” and then reads out a passage from Spotify’s own announcement about strengthening AI protections, inviting feedback from the community. Another policy-focused explainer underscores that the changes to Spotify’s AI policy are meant to “safeguard creativity,” including a ban on training AI models on copyrighted music without consent and a requirement that training data be ethically sourced and disclosed. For many artists, those commitments are a meaningful step, but they are also watching closely to see whether enforcement is consistent and whether appeals processes work when legitimate tracks are mistakenly flagged.

Streaming’s broader pivot to “human first” music

Spotify’s AI sweep is not happening in isolation; it is part of a wider shift across streaming toward what some executives are calling a “human first” approach to music. As generative tools become more powerful, platforms are being forced to decide whether they want to be open repositories for any audio a model can spit out, or curated spaces that prioritize human creativity and consent-based use of AI. The 75 million track purge is a clear signal that Spotify is choosing the latter, even if it still wants to leave room for responsible experimentation with synthetic tools.

Other companies are already building on that stance. One detailed analysis of a rival service notes that Coda Music’s AI blockade is explicitly framed as a “human first revolution,” and describes it as a timely response that builds on Spotify’s AI protections announced in September 2025, including limits on how music can be used in AI training. That kind of cross-platform alignment suggests that the industry is converging on a new norm in which AI is allowed, but only under strict conditions that respect rights holders and keep spam at bay. For creators, the challenge now is to adapt to that environment, finding ways to use AI as a tool without triggering the very filters that were built to stop it.
