DALL-E 2 Reported for Gender and Racially Biased Outputs

April 1, 2022

OpenAI's DALL-E 2, a powerful text-to-image model, has reportedly produced gender- and racially-biased outputs. The report underscores the importance of responsible AI governance and of safe, secure AI practices for all users.

For those interested in shaping harm-prevention measures and mapping incidents to the HISPI Project Cerebellum TAIM (Govern) function, this is an opportunity to get involved. JOIN US.

By contributing to the AI Incident Database, we can collectively promote trustworthy AI and put effective guardrails in place for future AI development.

Matched TAIM controls

Suggested mappings are derived from embedding similarity and are not a formal assessment. Browse all TAIM controls.
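As a rough illustration of how an embedding-similarity mapping like this can work, the sketch below ranks candidate controls by cosine similarity between an incident's embedding vector and each control's embedding vector. The control IDs and vectors are hypothetical placeholders; a real pipeline would obtain the vectors from a text-embedding model rather than hard-coding them.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical, pre-computed embeddings. In practice these would come
# from an embedding model applied to the incident text and to each
# TAIM control's description.
incident_vec = [0.9, 0.1, 0.3]
control_vecs = {
    "govern-bias-review": [0.8, 0.2, 0.4],   # hypothetical control ID
    "map-data-provenance": [0.1, 0.9, 0.2],  # hypothetical control ID
}

# Rank controls by similarity to the incident; the top entries become
# the "suggested mapping" (not a formal assessment).
ranked = sorted(control_vecs,
                key=lambda c: cosine(incident_vec, control_vecs[c]),
                reverse=True)
print(ranked[0])  # the closest-matching control
```

Because cosine similarity compares vector direction rather than magnitude, it gives a useful relevance signal even when the incident text and control descriptions differ greatly in length.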

Alleged deployer: openai
Alleged developer: openai
Alleged harmed parties: underrepresented-groups, minority-groups

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/179

Data source


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.