
AI companies Anthropic and Meta score major courtroom victories in San Francisco as federal judges rule training their models on books falls under “fair use,” leaving authors with limited recourse despite claims of copyright infringement.
Key Takeaways
- Judge William Alsup ruled Anthropic’s AI training using copyrighted books was “quintessentially transformative” and protected under fair use doctrine
- Meta won dismissal of a similar lawsuit when Judge Vince Chhabria ruled authors failed to provide sufficient evidence of market harm
- Anthropic still faces a December trial for downloading millions of pirated books, with potential damages up to $150,000 per work
- These rulings establish precedent-setting boundaries for AI companies’ use of copyrighted materials
- The cases highlight tension between protecting authors’ economic rights and enabling AI technological advancement
Judges Rule in Favor of AI Companies’ Use of Books
In a significant legal development for the artificial intelligence industry, federal judges in San Francisco have ruled in favor of AI companies in two separate copyright cases involving the use of books to train large language models. The rulings grant AI developers like Anthropic and Meta considerable latitude in using copyrighted materials without authors’ permission, potentially reshaping the landscape of intellectual property rights as technology continues to advance. U.S. District Judge William Alsup determined that Anthropic’s use of books to train its AI model Claude was transformative and constituted fair use under copyright law, while Judge Vince Chhabria dismissed claims against Meta on technical grounds.
The ruling in the Anthropic case specifically addressed how AI companies transform existing works into something entirely new. “The purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative,” Alsup wrote. “Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different.”
Anthropic’s Victory Comes With Complications
While Anthropic celebrated the ruling as confirmation that its training methods align with copyright’s fundamental purpose of promoting creativity and progress, the company still faces significant legal challenges. The judge ordered a December trial to assess damages related to Anthropic’s admitted downloading of approximately 7 million pirated books for a “central library.” Although Anthropic claims it ultimately decided not to use these pirated materials for training its LLMs, Judge Alsup was clear about the potential consequences, noting that U.S. copyright law allows for statutory damages of up to $150,000 per work for willful infringement.
A federal judge ruled that an artificial intelligence company did not break the law when it used copyrighted books to train its chatbot without the consent of the authors or publishers — but ordered it must go to trial for allegedly using pirated books. https://t.co/VyJmb8P1Up
— The Washington Post (@washingtonpost) June 25, 2025
“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages,” Judge William Alsup wrote.
Internal communications revealed during the case showed Anthropic employees had expressed concerns about using pirate sites for obtaining training materials, which led to a change in their approach. This acknowledgment of improper sourcing may impact the damages phase of the trial, even as the company maintains that its training methodologies themselves are legally sound. The distinction between owning legitimate copies versus pirated ones became a critical factor in the judge’s partial ruling against the company.
Meta’s Victory and Broader Implications
In a parallel case, Judge Vince Chhabria dismissed authors’ claims against Meta, finding they failed to provide sufficient evidence that Meta’s AI systems diluted the market for their works. However, Chhabria made clear his ruling was narrowly focused on the plaintiffs’ arguments rather than broadly endorsing Meta’s practices. “This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Chhabria wrote. “It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”
“So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way,” Chhabria added.
These rulings come as AI companies increasingly face legal challenges from content creators across multiple industries. Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson initiated the case against Anthropic, while similar lawsuits have targeted OpenAI, Microsoft, and other AI developers. The San Francisco decisions may influence ongoing litigation and future regulatory approaches to AI training. Meanwhile, some media companies have chosen a different path, seeking compensation by licensing their content to AI companies rather than pursuing legal action.
Balance of Innovation and Rights Protection
These rulings highlight the complex balance courts are trying to strike between protecting intellectual property rights and allowing technological innovation to flourish. While the decisions generally favor AI companies, they also establish boundaries that could shape industry practices moving forward. The judge’s nuanced opinions suggest that while AI training on properly acquired works may constitute fair use, unauthorized copying remains problematic. This distinction provides a roadmap for AI companies to follow while developing their training methodologies and data acquisition practices.
“We are pleased that the Court recognized that using works to train LLMs (large language models) was transformative, spectacularly so,” an Anthropic spokesperson said of the ruling.
President Trump’s administration has shown interest in promoting American technological leadership while ensuring proper protections for creators. These cases represent a significant development in establishing the legal framework for how AI companies can utilize existing creative works to build powerful new technologies. As AI becomes increasingly central to our economy, striking the right balance between innovation and rights protection will remain a critical policy challenge for lawmakers and courts across the country.