Racially Biased AI Incident in New Zealand: Passport Robot Tells Applicant to Open Their Eyes
This discriminatory AI incident highlights the need for accountability and harm prevention in AI systems. The passport robot in New Zealand...
Understanding the risks of misaligned AI decisions is crucial for fostering trustworthy and safe AI environments. This AI incident maps to t...
Delve into the infamous DAO hack, a significant event that underscores the importance of trustworthy and safe AI. This incident maps to the...
Explore the infamous incident involving Microsoft's AI chatbot, Tay. This case study highlights the importance of responsible AI governance...
An unfortunate incident occurred in Silicon Valley where an AI-powered mall security robot allegedly knocked down and ran over a toddler. Th...
The fatal accident involving Joshua Brown, who tragically lost his life in a self-driving Tesla, highlights the importance of trustworthy an...
Recent findings point to a racial bias in Google's image search results for black teenagers, highlighting the need for safe and secure AI an...
Explore the critical issue of machine bias in AI systems, a common challenge in AI governance. This AI incident maps to the Govern function...
A recent incident involving a digital assistant inappropriately providing pornographic content to a child highlights the need for safe and s...
This AI incident, involving Amazon's AI system in cell phone case production, underscores the need for trustworthy and responsible AI. The m...
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The 'Robodebt' controversy highlights the...
Explore the lessons learned from the Yandex chatbot incident, a relevant example demonstrating the importance of safe and secure AI. This AI...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
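As a purely illustrative sketch of working with a pinned snapshot, the Python example below loads a local export and searches incident titles. The file name and the column names (incident_id, title, date) are assumptions made for this example, not the AIID's published schema.

# Minimal sketch: load a pinned AIID snapshot export and search incident titles.
# Assumptions for illustration only: the snapshot has been exported to a local
# CSV file with "incident_id", "title", and "date" columns (hypothetical names).
import pandas as pd

SNAPSHOT_PATH = "aiid_snapshot.csv"  # hypothetical local export of a weekly snapshot

def load_incidents(path: str) -> pd.DataFrame:
    # Parse the date column so incidents can be filtered by time period.
    return pd.read_csv(path, parse_dates=["date"])

def incidents_mentioning(df: pd.DataFrame, keyword: str) -> pd.DataFrame:
    # Case-insensitive title search; rows with missing titles are ignored.
    return df[df["title"].str.contains(keyword, case=False, na=False)]

incidents = load_incidents(SNAPSHOT_PATH)
print(incidents_mentioning(incidents, "passport"))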