Spotify, the world’s leading music streaming platform, has unveiled a major new policy framework to aggressively combat the misuse of generative AI technology, a move designed to protect artists and producers from impersonation, spam, and deception. The framework aims to safeguard the integrity of the music ecosystem as AI tools make it easier for bad actors and content farms to “push slop” and potentially divert royalties.
The announcement follows a period of massive investment in anti-spam measures: Spotify has already removed over 75 million spammy tracks in the last 12 months alone, a period that coincides with the rapid explosion of generative AI.
The new policy work concentrates on three core areas:
1. Improved Enforcement of Impersonation Violations
The rise of generative AI has made creating vocal deepfakes of popular artists simpler than ever. To counter this, Spotify is introducing a clarified impersonation policy that gives artists stronger protection and clearer recourse.
- Key Rule: Vocal impersonation is now permitted on Spotify only when the impersonated artist has explicitly authorized the use.
- Protection for Artists: Spotify treats unauthorized AI voice cloning as an exploitation of an artist’s identity and a threat to the fundamental integrity of their work. Artists retain the exclusive right to decide whether to license their voices for AI projects.
- Content Mismatch: Spotify is ramping up investment in tackling fraudulent uploads delivered to the wrong artist’s profile. This includes testing new prevention tactics with leading distributors and putting more resources into the content-mismatch process, allowing artists to report discrepancies even at the pre-release stage to reduce review wait times (a simplified sketch of this kind of mismatch check follows).
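Spotify has not described how its mismatch detection works. As a purely hypothetical illustration, the sketch below compares the artist name in a delivery’s metadata against the name on the target profile and flags weak matches for review before release; every name, field, and threshold here is invented for the example.

```python
# Hypothetical illustration only: Spotify has not published its mismatch logic.
# The idea: flag deliveries whose claimed artist name barely resembles the
# profile they would land on, so a human can review them pre-release.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Upload:
    track_title: str
    claimed_artist: str       # artist name in the delivery metadata
    target_profile_name: str  # name on the profile the track would be attached to


def name_similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity between two artist names, case-insensitive."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def needs_mismatch_review(upload: Upload, threshold: float = 0.85) -> bool:
    """Route the delivery to review if the names do not plausibly match."""
    return name_similarity(upload.claimed_artist, upload.target_profile_name) < threshold


if __name__ == "__main__":
    suspicious = Upload("New Single", "DJ Totally Different", "Indie Artist A")
    print(needs_mismatch_review(suspicious))  # True -> hold for human review
```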
2. Rollout of New Music Spam Filter
With Spotify’s total music payouts growing from $1 billion in 2014 to $10 billion in 2024, the platform has become a greater target for bad actors. Spam tactics like mass uploads, duplicate tracks, SEO hacks, and abuse of artificially short tracks are easier than ever to scale with high-volume AI music generation tools.
- New System: This fall, Spotify will roll out a new music spam filter. The system will identify uploaders and tracks engaging in these spam tactics, tag them, and stop recommending them to listeners (a simplified sketch of this kind of tagging logic follows this list).
- Protecting Royalties: The filter is crucial because unchecked spam can dilute the royalty pool and impact attention and payouts for professional artists who adhere to the rules. The system is designed to be rolled out conservatively to avoid penalizing legitimate uploaders.
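Spotify has not published how the filter works internally. The sketch below only illustrates the announced behavior (identify, tag, stop recommending) using assumed heuristics drawn from the tactics named above; the thresholds and field names are invented for the example.

```python
# Hypothetical sketch only: Spotify has not disclosed its spam-filter internals.
# Tracks matching simple heuristics get tagged; tagged tracks stay available
# but are excluded from the recommendation candidate pool.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Track:
    track_id: str
    uploader_id: str
    title: str
    duration_sec: int
    tags: set = field(default_factory=set)


def tag_spam(tracks: list,
             min_duration_sec: int = 31,           # assumed "artificially short" cutoff
             max_uploads_per_uploader: int = 500,  # assumed mass-upload threshold
             ) -> None:
    """Apply spam tags based on track length, upload volume, and duplicate titles."""
    uploads_per_uploader = Counter(t.uploader_id for t in tracks)
    title_counts = Counter((t.uploader_id, t.title.lower()) for t in tracks)

    for t in tracks:
        if t.duration_sec < min_duration_sec:
            t.tags.add("artificially_short")
        if uploads_per_uploader[t.uploader_id] > max_uploads_per_uploader:
            t.tags.add("mass_upload")
        if title_counts[(t.uploader_id, t.title.lower())] > 1:
            t.tags.add("duplicate")


def recommendation_candidates(tracks: list) -> list:
    """Conservative rule: only untagged tracks remain eligible for recommendation."""
    return [t for t in tracks if not t.tags]
```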
3. AI Disclosures for Music with Industry-Standard Credits
Recognizing that many listeners seek more information about the use of AI in the music they stream, and that artists use AI on a spectrum, Spotify is pushing for industry-wide transparency.
- Industry Standard: Spotify is committed to developing and supporting the new industry standard for AI disclosures in music credits, spearheaded through the DDEX consortium.
- Transparency, Not Punishment: As labels, distributors, and music partners submit this information, Spotify will display it across the app. This allows artists to clearly indicate where and how AI played a role in a track, whether in vocals, instrumentation, or post-production (a hypothetical example of such a credit record follows this list). The disclosure is intended to strengthen trust and is not about down-ranking tracks or punishing responsible AI use.
- Broad Alignment: Spotify is collaborating with a wide range of industry partners, including DistroKid, CD Baby, Believe, EMPIRE, FUGA, and Kontor New Media, to drive the wide adoption of this standard. This ensures listeners receive consistent information across all streaming services.
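The actual metadata format will be defined by the DDEX standard; the structure and field names below are not taken from it. This is only a hypothetical sketch of how per-element AI disclosures (vocals, instrumentation, post-production) might be recorded and summarized for listener-facing credits, with no effect on ranking.

```python
# Hypothetical example: these field names are illustrative, not the DDEX schema.
from dataclasses import dataclass


@dataclass(frozen=True)
class AIDisclosure:
    element: str         # e.g. "vocals", "instrumentation", "post-production"
    ai_involvement: str  # e.g. "none", "assisted", "fully_generated"


@dataclass
class TrackCredits:
    title: str
    main_artist: str
    disclosures: tuple

    def display_label(self) -> str:
        """Summarize disclosures for the credits screen; used for display only."""
        used = [d.element for d in self.disclosures if d.ai_involvement != "none"]
        return "AI used in: " + ", ".join(used) if used else "No AI involvement disclosed"


credits = TrackCredits(
    title="Example Track",
    main_artist="Example Artist",
    disclosures=(
        AIDisclosure("vocals", "none"),
        AIDisclosure("instrumentation", "assisted"),
        AIDisclosure("post-production", "fully_generated"),
    ),
)
print(credits.display_label())  # -> AI used in: instrumentation, post-production
```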
Spotify’s core priorities remain constant: protecting artist identity, enhancing the platform, and providing transparency to listeners. The company reiterates that it is a platform for licensed music where royalties are paid based on listener engagement, and that all music is treated equally regardless of the tools used in its creation. These updates are the latest step in a continuous effort to support a more trustworthy music ecosystem.
Key Highlights:
- Spotify launched a major policy to combat AI misuse, focusing on impersonation, spam, and transparency to protect artists and royalties.
- New rules mandate that vocal impersonation (deepfakes) requires the artist’s authorization, backed by enhanced tools to stop fraudulent profile uploads.
- A new music spam filter will roll out this fall to identify tracks and uploaders using spam tactics, like mass uploads and SEO hacks, and stop recommending them, protecting the $10B royalty pool from dilution.
- Spotify will support a new DDEX industry standard for AI disclosures in music credits, giving listeners and artists transparency on the use of AI in vocals, instrumentation, or post-production.