Meta Allegedly Used Books3, a Dataset of 191,000 Pirated Books, to Train LLaMA AI
October 25, 2020
Meta and Bloomberg allegedly used Books3, a dataset containing roughly 191,000 pirated books, to train their respective AI models, LLaMA and BloombergGPT, without authors' consent. Lawsuits from authors such as Sarah Silverman and Michael Chabon claim this constitutes copyright infringement. Books3 includes works from major publishers such as Penguin Random House and HarperCollins. Meta argues its AI outputs are not "substantially similar" to the original books, but legal challenges continue.
- Alleged deployer: Various generative AI developers, Meta, EleutherAI, Bloomberg
- Alleged developer: Various generative AI developers, The Pile, Shawn Presser, Meta, EleutherAI, Bloomberg
- Alleged harmed parties: Zadie Smith, writers, Verso, Stephen King, Sarah Silverman, Richard Kadrey, publishers found in Books3, Penguin Random House, Oxford University Press, over 170,000 authors found in Books3, Michael Pollan, Margaret Atwood, Macmillan, HarperCollins, general public, creative industries, Christopher Golden, authors
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/996
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Weekly snapshots of the AIID are used for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.
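For convenience, the suggested database citation above could be formatted as a BibTeX entry along the following lines. This is a minimal sketch: the entry key and field layout are illustrative, not an official AIID template.

```bibtex
@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}
```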