Meta's AI Copyright Lawsuit Dismissed

A federal judge in San Francisco has ruled in favor of Meta Platforms against a group of authors who alleged the company infringed their copyrights by using their books without permission to train its artificial intelligence (AI) system, Llama. U.S. District Judge Vince Chhabria stated on Wednesday, June 25, 2025, that the authors failed to present sufficient evidence that Meta's AI would dilute the market for their work, which was necessary to prove the company's conduct was illegal under U.S. copyright law.
Despite dismissing the case, Judge Chhabria emphasized that his ruling was narrow and did not broadly validate Meta's use of copyrighted materials for AI training. He clarified that using such materials without permission would be unlawful in “many circumstances.” Chhabria further stated, “This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”
This decision contrasts with another recent ruling from the same court, where U.S. District Judge William Alsup found that AI company Anthropic’s training of its chatbot Claude constituted “fair use” of copyrighted materials. However, Alsup's ruling still requires Anthropic to face trial over its acquisition of the books it used from pirate websites rather than through legitimate channels. Chhabria's ruling in the Meta case is the second in the U.S. to address fair use in the context of generative AI.
The lawsuit against Meta, initiated in 2023, accused the company of misusing pirated versions of books from online “shadow libraries” to train Llama without permission or compensation. The plaintiffs, including prominent writers like Sarah Silverman, Jacqueline Woodson, and Ta-Nehisi Coates, argued that Meta was “liable for massive copyright infringement” and that the company “could and should have paid” to license these literary works. They contended that Meta knew the risks of tapping into pirated databases, a decision that was escalated to CEO Mark Zuckerberg and other executives for approval.
Meta, for its part, asserted that U.S. copyright law permits the unauthorized copying of a work to transform it into something new, and that the AI-generated expression from its chatbots is fundamentally different from the source material. The company’s attorneys argued that there was “no evidence that anyone has ever used Llama as a substitute for reading Plaintiffs’ books, or that they even could.” Meta maintains that Llama does not output the actual copyrighted works, and that the methods of acquiring the training data have “no bearing on the nature and purpose of its use.”
The legal doctrine of fair use, which allows the use of copyrighted works without the owner's permission under certain circumstances, is a crucial defense for tech companies in these burgeoning AI-related lawsuits. AI companies generally argue that their systems make fair use of copyrighted material by studying it to create new, transformative content, and that being forced to pay copyright holders could hinder the industry's growth.
Conversely, copyright owners contend that AI companies unlawfully copy their work to generate competing content that threatens their livelihoods. Judge Chhabria expressed sympathy for this argument during a hearing in May, voicing concern that generative AI could flood the market with content, undermining the market for human-created work and the incentives to produce it. He dismissed arguments that requiring AI companies to adhere to copyright laws would slow innovation, stating, “These products are expected to generate billions, even trillions of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.”
While Meta secured a dismissal in this specific case, it may prove to be a limited victory. Chhabria’s 40-page ruling repeatedly suggested that Meta and other AI companies might be “serial copyright infringers” and seemed to invite other authors to bring similar cases with stronger legal arguments. The ruling's scope is confined to the rights of the 13 named authors, leaving countless other copyright holders unaffected and free to pursue their own claims.