2025 AI Cases

2025 was an important year for legal cases in which authors and publishers tried to defend their rights and ownership of their work. Taken as a whole, the outcome is not great. And these cases don’t even touch what happens before publishing. Every author who uses a cloud system like Google Docs or Microsoft Word runs the very real risk of feeding the AI machine with their effort.

Word Weaver Pro is the most secure authoring tool on the Internet. Nothing ever leaves the browser to go to a third party. However, “AI browsers,” Copilot, Gemini, and browser plug-ins such as Grammarly may use your work for AI training. Be very careful what you install on your computer and in your browser!

As we often say: You deserve more than fine print and hope.

To be clear, I am not anti-AI. It has its uses and is not going anywhere. Using it for research, editing, and refining ideas is a personal choice each writer must make. What I am against is the theft of work that takes so much time, effort, emotion, and – in the case of self-published authors – money.

Project Gutenberg is one example of a huge wealth of great books that can legitimately be used to train an LLM. Stealing work from books not specifically made available to the public is totally unacceptable and represents the worst kind of corporate greed. Unfortunately, these companies can afford a much stronger legal team than the average author.

Presented here is a fact‑based write‑up of prominent cases in which AI companies were alleged to have used books without permission or compensation, grounded strictly in reported court filings, judicial rulings, and settlements. Where outcomes differ, that distinction is made explicit.

Cases

Anthropic – Bartz v. Anthropic PBC

Allegation
Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson alleged that Anthropic copied and stored millions of copyrighted books—many obtained from pirate sites such as LibGen and Pirate Library Mirror—without permission and used them in datasets connected to training its Claude AI models.

Court findings
In June 2025, U.S. District Judge William Alsup ruled that training large language models on copyrighted books can qualify as fair use when the books are lawfully acquired, because the use is “transformative.” However, the court also held that downloading and maintaining pirated copies of books was not fair use and constituted copyright infringement.

Outcome
Anthropic agreed in September 2025 to a $1.5 billion class‑action settlement to resolve claims related to pirated books. Under the settlement, authors are eligible for payments (reported at roughly $3,000 per covered book), and Anthropic agreed to destroy the pirated files.

Microsoft – Authors v. Microsoft (Megatron AI)

Allegation
A group of authors sued Microsoft in New York federal court, alleging the company used nearly 200,000 pirated books to train its Megatron AI model without permission or compensation. The complaint states that the books were used to teach the model to generate text that mimics authors’ expressive styles.

Status
As of the reported filings in mid‑2025, the case was pending, with the authors seeking statutory damages and injunctive relief. I could not find a final ruling on liability.

Meta Platforms – Kadrey et al. v. Meta

Allegation
Authors including Sarah Silverman, Jacqueline Woodson, and Ta‑Nehisi Coates accused Meta of copying and ingesting millions of copyrighted books—allegedly sourced from shadow libraries such as LibGen—to train its LLaMA language models, without permission or payment.

Court ruling
In June 2025, a federal judge ruled against the authors’ claims, holding that the plaintiffs had failed to make the correct legal arguments or evidentiary showing. Importantly, the judge emphasized that the ruling did not declare Meta’s conduct lawful and explicitly noted that feeding copyrighted works into AI models without permission can violate copyright law under different facts.

Status
Meta prevailed in this specific case, but the ruling left open the possibility of future suits by other authors.

OpenAI, Google, Meta, xAI, Anthropic, Perplexity – Multi‑defendant author lawsuits (2024–2025)

Allegation
Multiple authors, led by journalist and author John Carreyrou, filed lawsuits in California federal court alleging that OpenAI, Google, Meta, xAI, Anthropic, and Perplexity trained AI systems on pirated copies of their books obtained from shadow libraries such as LibGen and Z‑Library, without consent or compensation.

Key characteristics

  • Plaintiffs pursued individual claims rather than class actions, arguing that prior settlements undervalued authors’ rights.
  • The suits allege direct copyright infringement through unauthorized copying and storage of books, separate from the question of whether training itself is fair use.

Status
These cases were ongoing in late 2025; I could find no final judgments.

Broader publishing industry context

Court decisions in 2025 established an important distinction:

  • Training AI models on copyrighted books may be fair use if the books are legally obtained and the use is transformative.
  • Acquiring books through piracy or maintaining pirated “central libraries” is not protected and can trigger significant liability.

This distinction underlies many of the ongoing disputes between authors and AI developers and explains why some cases resulted in dismissals while others led to billion‑dollar settlements.

Send us a message