Generative Models Reportedly Trained on Dataset Containing Private Medical Photos

March 3, 2022

Text-to-image models trained on the LAION-5B dataset, including Stable Diffusion and Imagen, have reportedly generated images closely resembling private medical record photos that were allegedly included in the training data without consent and with no established mechanism for removal. The incident underscores the importance of trustworthy AI practices, particularly with regard to Project Cerebellum's AI governance initiatives.

Incident mapping: The LAION-5B dataset incident is a clear example of the need for the Govern function of HISPI Project Cerebellum's TAIM: ensuring consent, transparency, and accountability in AI training datasets.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Browse all TAIM controls
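As an illustration of how an embedding-similarity mapping like this could be produced, the sketch below scores an incident description against a few control descriptions by cosine similarity of sentence embeddings. The model choice, control IDs, control texts, and threshold are all illustrative assumptions, not the actual TAIM pipeline.

```python
# Minimal sketch of embedding-similarity mapping between an incident
# description and control texts. Control IDs, texts, and the threshold
# below are hypothetical examples, not real TAIM controls.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model

incident = (
    "Text-to-image models trained on LAION-5B reportedly generated images "
    "resembling private medical photos included without consent."
)

controls = {  # hypothetical control descriptions
    "GOVERN-1": "Establish consent and provenance requirements for training data.",
    "MAP-2": "Document dataset composition, sources, and known limitations.",
    "MEASURE-3": "Monitor model outputs for leakage of personal data.",
}

incident_emb = model.encode(incident, convert_to_tensor=True)
for control_id, text in controls.items():
    # Cosine similarity between the incident and each control description
    score = util.cos_sim(incident_emb, model.encode(text, convert_to_tensor=True)).item()
    if score >= 0.3:  # illustrative threshold; a real pipeline would tune this
        print(f"{control_id}: similarity {score:.2f}")
```

Because similarity scores like these are heuristic, matches surfaced this way are suggestions for human review rather than formal assessments.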

Alleged deployer
Stability AI, Google
Alleged developer
Stability AI, LAION, Google
Alleged harmed parties
People with medical photos online

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/465


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.