Spotify’s 75-Million Purge – what actually happened, who it hits, and how real artists stay clear
Spotify recently disclosed that it removed over 75 million spammy, AI-generated tracks in the past year. The news is circulating through both legitimate outlets and semi-disinformative sources – but most explanations are vague. Many people panic, thinking their own roughly produced tracks are under threat. Fortunately – they are not.
In reality, the purge is aimed at large-scale spam and fraud, not personal experiments or low-quality indie releases. So no – your cousin’s lo-fi EP didn’t get unfairly nuked – this is a bonfire of junk uploads designed to siphon royalties and pollute recommendations. Here’s what actually changed, why it happened, and what it means if you’re a genuine artist.
First things first – what got axed
The bulk of the removed catalogue fits one or more of these patterns:
- Mass-generated songs from text-to-music tools pushed out in the thousands per account.
- Ultra-short snippets engineered to just clear the 30-second royalty threshold.
- Duplicates, SEO-bait titles, retitled clones, and playlist-stuffing noise.
- Impersonations and deepfake vocals – slapping a famous artist’s name and voice on a track you didn’t license.
- Uploads funnelling botted plays to farm pennies at scale.
Translation – obvious grift. The target is the industrial spam layer, not ordinary DIY releases.

The actual policy bundle – three moving parts
- Impersonation enforcement got teeth – vocal deepfakes, fake artist profiles, and hijacked metadata are being penalised harder. Distributors are on the hook, and repeat offenders get shut out.
- A music spam filter – suspicious uploads can be down-ranked or stopped from surfacing in recommendations and search. It looks for mass-upload patterns, duplicates, and other fraud signals – so junk won’t ride the algorithm.
- AI disclosure in credits – Spotify is aligning with an industry standard so creators and distributors can clearly flag AI involvement. The point isn’t to ban AI – it’s to label it so rights and royalties don’t get muddied.
Why do this now? Follow the money!
Streaming pays out once a track has been played long enough to count as a stream. Spam rings exploited that with armies of micro-tracks and bots. Left unchecked, this dilutes the royalty pool and drowns listeners in landfill audio. Spotify’s purge is less about taste policing and more about stopping systematic revenue skimming and restoring a baseline of trust in recommendations.
No – this isn’t a blanket ban on AI
AI itself isn’t the villain. Plenty of legitimate artists use AI as an instrument – arrangement ideas, stems, sound design – and will continue to do so. The line is fraud and deception: voice-cloning a star without consent, fake attributions, botted plays, or churning out 500 near-identical tracks just to clip the meter. That’s what’s being targeted.
Context that matters – the 1,000-stream rule
Since 2024, a track typically needs about 1,000 streams in 12 months to qualify for recorded-royalty payouts. That rule predates this purge and was introduced to stop micro-payouts to noise farms – amounts that were often swallowed by administrative overhead anyway. Together with the spam filter, it raises the floor – fewer junk payouts, more money pooling toward tracks people actually play.
If you’re a normal indie artist releasing real music – this isn’t aimed at you. If anything, it reduces clutter you compete with.
Who actually gets caught – and why
- Accounts dumping hundreds of near-identical cues per week.
- Uploads with mismatched metadata, fake artist names, or keyword-stuffed titles.
- Deepfake vocals of living artists without licensing.
- Content linked to play-inflation schemes – bots, clickfarms, suspicious listener patterns.
How to stay well clear – a practical checklist
- Release at a sane cadence – quality over flood. If you’re shipping 300 tracks a month, expect scrutiny.
- Keep metadata clean and honest – correct artist name, no celebrity bait, no fake features.
- Don’t use unlicensed voices or likenesses – if you’re using a model trained on a specific singer, get explicit rights.
- Be transparent about AI involvement – if tools helped generate stems or vocals, disclose it in credits when your distributor supports it.
- Avoid ultra-short filler – make actual songs, not 31-second widgets. If you do functional audio, follow platform length rules.
- Never, ever buy plays – promotion services that guarantee streams are landmines. One campaign can flag your entire catalogue.
- Use reputable distributors – they’re now gatekeepers with their own fraud checks. Cheap, no-name aggregators often equal risk.
What changes for listeners and legit artists
- Cleaner recommendations – the spam filter should stop clones from cluttering search and radio.
- Fairer pool dynamics – less leakage to junk means slightly better economics for tracks that cross the eligibility threshold.
- Faster takedowns – impersonations and profile hijacks should get resolved with fewer back-and-forth support tickets.
Edge cases worth knowing
- If you collaborate with AI vocal tools, document your rights – model license, training disclosures, and any consent from human vocalists.
- If you release instrumental libraries or ambience, expect stricter categorisation and length rules – follow your distributor’s guidance.
- If an old track gets flagged by mistake – appeal through your distributor with session files, stems, and release history. Paper trails help.
This was a purge of spam, not a crusade against bedroom producers. If you’re writing your own tracks, not gaming the system, and not impersonating anyone – you are not the target. Keep releasing music – and let the garbage trucks do their job.