Publishers vs. Anthropic: Music Giants Move to Block AI ‘Fair Use’ Defense in $3B Suit
The legal definition of "fair use" in the age of generative AI is facing its most significant challenge yet. In a motion filed Monday, March 23, 2026, in federal court in San Jose, a coalition of the world’s largest music publishers argued that Anthropic’s training methods do not meet the legal criteria for transformative use. The publishers—Universal Music Group, Concord, and ABKCO—assert that Anthropic did not merely "learn" from their lyrics; it systematically downloaded unauthorized copies of 20,517 copyrighted works from "shadow libraries" and torrent sites to build its Claude models.
The publishers' case centers on the argument that Claude’s output is not transformative, but substitutive. They provided evidence showing that Claude can be prompted to generate near-verbatim lyrics for iconic songs like Neil Diamond’s "Sweet Caroline" and The Rolling Stones’ "Wild Horses."
The stakes are historically high: the publishers are seeking the statutory maximum of $150,000 per infringed work, which across 20,517 works would put the potential total above $3.07 billion.
Anthropic has previously defended its practices by comparing AI training to a human "reading" a book to learn how to write. The publishers counter that "reading" via mass-torrenting from pirate sites is not a protected activity. If the judge rules against Anthropic’s fair use defense, it could force a fundamental shift in how AI models are built, requiring developers to secure explicit licenses for every piece of data in their training sets. For Anthropic—recently valued at $380 billion—the outcome of this motion could define its financial future and the legal boundaries of the entire generative AI industry.