Case Study: Navigating an AI Incident #138 - Responsible and Safe AI in Practice
Exploring the intricacies of a recent AI incident, where an autonomous vehicle misinterpreted a stop sign, demonstrates the need for robust...
A recent incident involving AI-powered smart home devices highlights the importance of responsible data handling. A popular device was found t...
This article delves into a recent incident (#140) involving an autonomous vehicle, highlighting its unintended consequences and underscoring...
Recently, a large customer support AI platform experienced an incident where the system unintentionally responded to customers' negative emo...
In this article, we delve into Incident 142, where an AI system in a customer service application demonstrated unintended behavior. This ins...
Delving into a recent AI incident, we explore the complexities of responsible AI governance. The instance involved unforeseen data bias that...
Recently, an incident involving an autonomous vehicle occurred, shedding light on the need for responsible AI governance. The self-driving c...
Recent events highlighted an incident involving autonomous vehicles, where a self-driving car failed to recognize a pedestrian, leading to a...
An autonomous vehicle, developed by a leading tech company, inadvertently leaked sensitive user data during routine operation. The incident...
Recent events have shed light on an incident involving AI System #147, where unintended bias led to disparities in its recommendations. This...
In this article, we delve into a recent incident involving an AI model that inadvertently infringed upon user privacy. This underscores the...
Recent events have highlighted the necessity of maintaining trustworthy artificial intelligence (AI) systems. Incident #149 underlines the i...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; database snapshots and a citation guide are available from the AIID.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.