iGPT, SimCLR Learned Biased Associations from Internet Training Data

June 17, 2020

Researchers uncovered racial, gender, and intersectional biases in unsupervised image models such as iGPT and SimCLR, whose learned associations lead to stereotypical depictions. Incidents like this one underscore the need for trustworthy AI: implementing responsible AI governance, mapping harmful associations in AI incident databases like Project Cerebellum's, and ensuring safe and secure AI practices. For those interested in shaping the future of AI, JOIN US.
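The "biased associations" named in the title are typically quantified with embedding association tests, which adapt the word-embedding association test (WEAT) to image representations. Below is a minimal sketch of such a test, assuming the target and attribute images have already been encoded into vectors; the function names and the random placeholder data are illustrative assumptions, not code from the original study.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # Differential association of embedding w with attribute sets A and B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Standardized difference in mean association between target sets X and Y.
    pooled = [association(w, A, B) for w in list(X) + list(Y)]
    mean_x = np.mean([association(x, A, B) for x in X])
    mean_y = np.mean([association(y, A, B) for y in Y])
    return (mean_x - mean_y) / np.std(pooled, ddof=1)

# Illustrative usage with random vectors standing in for image embeddings.
rng = np.random.default_rng(0)
X = [rng.normal(size=128) for _ in range(8)]  # embeddings of target group 1
Y = [rng.normal(size=128) for _ in range(8)]  # embeddings of target group 2
A = [rng.normal(size=128) for _ in range(8)]  # "pleasant" attribute embeddings
B = [rng.normal(size=128) for _ in range(8)]  # "unpleasant" attribute embeddings
print(weat_effect_size(X, Y, A, B))  # near 0 for random data; large |d| suggests bias
```

A large positive or negative effect size indicates that the model's representations associate one target group more strongly with one attribute set, which is the pattern reported for iGPT and SimCLR.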

Contribute to HISPI Project Cerebellum TAIM by Governing, Mapping, Measuring, or Managing this incident.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls
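A rough sketch of how such an embedding-similarity mapping can be computed, assuming a general-purpose sentence encoder; the model name, the `sentence-transformers` library, and the placeholder control IDs and texts are all assumptions, not the actual TAIM pipeline.

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Assumed general-purpose text encoder; not necessarily what TAIM uses.
model = SentenceTransformer("all-MiniLM-L6-v2")

incident = "iGPT and SimCLR learned biased associations from internet training data."
controls = {  # placeholder control IDs and descriptions
    "MAP-1": "Document known harmful biases present in training data.",
    "MEASURE-2": "Evaluate learned representations for demographic bias.",
    "GOVERN-3": "Establish review processes for model release decisions.",
}

inc_vec = model.encode([incident])
ctl_vecs = model.encode(list(controls.values()))
scores = cosine_similarity(inc_vec, ctl_vecs)[0]

# Rank controls by similarity; a heuristic suggestion, not a formal assessment.
for (ctl_id, _), score in sorted(zip(controls.items(), scores), key=lambda t: -t[1]):
    print(f"{ctl_id}: {score:.3f}")
```

Ranking by cosine similarity is cheap and model-agnostic, which makes it workable as a suggestion mechanism, but it can surface controls that are only superficially similar, hence the "not a formal assessment" caveat.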

Alleged deployer: openai, google
Alleged developer: openai, google
Alleged harmed parties: gender-minority-groups, racial-minority-groups, underrepresented-groups-in-training-data

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/367


When citing the database as a whole, please use:

McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.