Examining an Unfortunate AI Bias Incident - Promoting Responsible AI and Harm Prevention
This AI incident showcases bias in the system, highlighting the need for trustworthy and safe AI. It maps to the Govern function...
Investigating the Electric Elves incident sheds light on the importance of responsible AI governance. This AI mishap maps to the Govern function...
A recent study reveals 144 deaths associated with robotic surgery complications since 2000, emphasizing the need for responsible AI and safe...
The Tokyo District Court has ordered Google to modify its autocomplete feature in Japan, raising awareness of the importance of trustworthy...
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The faulty feature in Google's Nest smart...
An investigation into the alleged gender bias on LinkedIn raises questions about fairness in AI systems. This AI incident maps to the Govern...
A disturbing incident involving a New Zealand passport robot occurred when an applicant of Asian descent was told to open their eyes wider. This AI...
Explore the potential consequences of AI algorithm failures within organizations and learn about guardrails for safe and secure AI with Project...
Explore a pivotal event in blockchain history: The DAO hack. This incident offers valuable insights into the importance of trustworthy AI, s...
Explore the infamous incident involving Microsoft's chatbot, Tay, a prime example of the importance of responsible AI governance. This AI in...
A recent incident involving a mall security robot knocking down and running over a toddler in Silicon Valley highlights the importance of sa...
Exploring the tragic accident involving Joshua Brown, who lost his life while using a self-driving Tesla. This incident underscores the need...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.