Responsible AI in Action: Navigating LA Wildfires with Trustworthy AI Systems - Waze Case Study
Exploring the role of AI governance in crisis response, this article examines how Waze handled the recent California fires. This AI incident...
More incident analyses
Exploring the lessons learned from a self-driving shuttle's first-day accident, this article sheds light on the importance of safe and secur...
A tragic incident occurred at a Volkswagen plant in Germany, where an AI-controlled robot caused the death of a worker. This underscores the...
Exploring a recent AI incident involving Google and Delphi's self-driving cars in California, this article highlights the importance of safe...
In this eye-opening article, we examine a security breach in the much-touted Face ID feature of Apple's iPhone X. The incident highlights po...
Learn about the 1983 incident where Stanislav Petrov averted a potential nuclear disaster. This incident underscores the importance of respo...
Exploring an AI incident involving the Boeing 737 Max 8, this analysis sheds light on the importance of 'Leaking Abstractions', a concept in...
This AI incident, involving a UK speed trader, underscores the critical role of responsible AI governance in financial markets. It provides...
Dive into a popular misconception about neural nets and tanks, demonstrating the importance of responsible AI governance in preventing misun...
This AI incident involving Tesla vehicles demonstrates the need for responsible AI governance. It maps to the Govern function in HISPI Proje...
A driverless Magenta line train in Delhi tragically crashed through a wall, highlighting the importance of safe and secure AI. This in...
An incident occurred in China where a colleague was able to unlock an iPhone X using facial recognition, raising concerns about data privacy...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.