Examining Google's Image Search Algorithm for Potential Racial Bias - A Case of Harm Prevention in AI

This investigation highlights a concerning incident of racial bias in Google image search results for black teenagers, underlining the importance of trustworthy and safe AI. By reporting and analyzing incidents such as this one, we can identify gaps in AI governance and work toward guardrails that foster responsible AI. Join us in our mission to build a safer future with Project Cerebellum, the AI incident database. This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/53

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.