Examining Bias in AI Systems: The Case of Google

In recent years, concerns have grown about bias in AI systems. A notable example is the racial bias alleged in Google's image recognition software. The incident highlights the importance of responsible AI governance and harm-prevention measures, and it underscores the need for guardrails that keep AI systems safe, secure, and fair. By understanding such incidents and contributing to initiatives like HISPI Project Cerebellum TAIM, we can collectively work toward trustworthy AI models.

Matched TAIM controls

Suggested mapping produced via embedding similarity between the incident text and TAIM control descriptions; this is not a formal assessment. A minimal sketch of one way such a matching could work follows.
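
The page does not document how the mapping is actually computed. The Python sketch below shows one plausible approach: rank control descriptions against an incident summary by cosine similarity of sentence embeddings. The embedding model ("all-MiniLM-L6-v2"), the control IDs, and the control texts are illustrative assumptions, not TAIM's actual pipeline or controls.

# Illustrative sketch only: the embedding model and the control texts
# below are assumptions for demonstration, not the actual TAIM pipeline.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

incident_summary = "Alleged racial bias in Google's image recognition software."

# Hypothetical control descriptions (placeholder IDs, not real TAIM controls).
controls = {
    "TAIM-EX-1": "Evaluate model outputs for disparate performance across demographic groups.",
    "TAIM-EX-2": "Maintain an incident response process for reported AI harms.",
    "TAIM-EX-3": "Document training data provenance and known representation gaps.",
}

# Embed the incident and all control texts; with normalized embeddings,
# the dot product equals cosine similarity.
texts = [incident_summary] + list(controls.values())
embeddings = model.encode(texts, normalize_embeddings=True)
incident_vec, control_vecs = embeddings[0], embeddings[1:]
scores = control_vecs @ incident_vec

# Rank controls by similarity, highest first.
ranked = sorted(zip(controls, scores), key=lambda pair: -pair[1])
for control_id, score in ranked:
    print(f"{control_id}: similarity={score:.3f}")

In a real pipeline, matches below some similarity threshold would typically be dropped, which is one reason output like this is a suggestion rather than a formal assessment.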

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/19

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.