Examining Google's Image Search Algorithm Bias: A Case Study for Responsible AI
This incident involving racial bias in Google image search results highlights the need for trustworthy AI. The case, focusing on black teena...
Explore the challenges posed by machine bias in AI systems, and learn how we can develop safe and secure AI through the HISPI Project Cerebe...
This AI incident highlights the importance of responsible AI governance and safe guardrails. When a child asked a digital assistant for a so...
This AI incident involving Amazon's cell phone case production underscores the importance of trustworthy AI. It maps to the Govern function...
In this article, we delve into the controversy surrounding Robodebt – a system that automated the debt recovery process for the Australian G...
Explore the implications of the Yandex chatbot incident, a case study for trustworthy and safe AI. Learn how such incidents align with the G...
This AI incident highlights the importance of responsible AI and safe and secure AI in language processing. Google Translate, a widely-used...
FaceApp, a popular photo editing app, has faced backlash over its controversial 'racist' filter that lightens users' skin tone. This inciden...
This AI incident demonstrates the importance of trustworthy and safe AI. As AI bots were used to enhance Wikipedia, unforeseen conflicts aro...
Exploring the insights gained from Kaggle's fisheries competition, this article highlights the importance of responsible AI governance in re...
This AI incident offers an insightful look into the current limitations of AI in music composition, particularly in the creation of holiday...
Recent incident involving Google Photos highlights the importance of safe and secure AI governance. The photo-editing AI miscorrected a ski...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see also the database snapshots & citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.