Addressing AI Incident #29: A Case Study in Responsible Harm Prevention
Exploring the lessons learned from incident #29, we delve into the challenges posed by unforeseen consequences in AI systems. This case stud...
Recent events surrounding the deployment of an autonomous driving application, Incident #103, have raised serious concerns about the safety...
Recent findings from Incident #104 reveal a concerning instance of unintended bias within an AI-powered recommendation system, which negativ...
Recent findings from a large-scale study reveal an unintended bias in autocomplete suggestions provided by a popular AI search engine. The b...
Incident #106 highlights an instance where the deployed AI model, intended for image recognition, exhibited a bias in its output. This demon...
Recent analysis of the AI Incident Database unveiled an instance of unintended model bias in a financial predictive analysis system (Incident #1...
In this article, we delve into Incident #108, a case study that underscores the need for robust AI governance. The incident involved a self-...
This article delves into an instance (Incident #109) where a misconfigured autonomous vehicle's navigation system led to an accident, underl...
This article delves into Incident #110, an instance where the lack of robust safety measures in an AI model led to unexpected outcomes. The...
Recently, an incident was reported involving a healthcare AI system demonstrating potential bias towards certain patient demographics. This...
An incident involving a machine learning model developed for predicting loan eligibility demonstrated unintended bias, resulting in unfavora...
This article delves into an incident involving a data breach in an AI system, highlighting the importance of robust security measures and re...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID as stable references. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.