Two US district judges have ruled that training artificial intelligence (AI) models on copyrighted books may amount to fair use under US law, provided the outputs are non-infringing and the use is transformative. However, the judgments diverge on how courts should treat AI companies that source training data from pirated material.

In Kadrey v. Meta Platforms Inc, Judge Vince Chhabria held that Meta's use of copyrighted books to train its LLaMA models was protected under fair use. The ruling came after the 13 authors - including Richard Kadrey and Sarah Silverman - failed to demonstrate any market harm or any specific copying in LLaMA's outputs.

Meta admitted that it had used the Books3 dataset, which contains hundreds of thousands of full-text copyrighted books scraped from pirate libraries. However, Meta argued that the use was highly transformative, as the purpose was to teach the model how to generate language, not to reproduce the books themselves. Meta also contended that its outputs did not regurgitate any copyrighted content.

Judge Chhabria agreed, at least on the record before him. He emphasised that while the use of copyrighted works in AI training could very well be illegal in other circumstances, the plaintiffs in this case had not demonstrated that Meta's use displaced the market for their books or that the model had generated infringing outputs.

"This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one," the Court stated, granting Meta's motion for partial summary judgment.

However, the Court refused to dismiss the authors' separate claim concerning Meta's alleged unlawful distribution of copyrighted books. According to the complaint, Meta obtained the Books3 dataset via torrenting, a method that may involve uploading infringing material to other users. The Court held that this aspect of the case raises unresolved factual issues and must proceed to trial.

In Bartz v. Anthropic, the court divided the company's conduct into three phases: (1) training on copyrighted works, (2) scanning and storing lawfully purchased books, and (3) mass downloading and permanent retention of pirated works. Only the first two uses were deemed lawful. Judge William Alsup held that while training Claude was transformative, building a general-purpose research library from pirated material was not.

"Pirating copies to build a research library without paying for it, and to retain copies should they prove useful for one thing or another, was its own use—and not a transformative one," the court said.

Internal documents showed that Anthropic executives sought to avoid the "legal/practice/business slog" of licensing, opting instead to build a library of over seven million pirated works downloaded from sources such as Books3 and PiLiMi. The court ruled that the company's "store everything forever" approach violated the Copyright Act.

Judge Chhabria in Kadrey rejected industry arguments that adverse rulings would halt development: "If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it." Judge Alsup, by contrast, focused on protecting transformative innovation and did not address the industry's economic concerns.

Both rulings affect only the specific plaintiffs involved.
The Anthropic case proceeds to trial on piracy-related damages, while Meta faces continued litigation over distribution claims.

Read the Meta AI judgment.
Read the Anthropic judgment.