Exploring Limitations in AI: A Case Study on the Latest Turing Test
This article sheds light on a recent Turing Test, revealing important lessons about AI limitations. Understanding these shortcomings is cruc...
Exploring Waze's approach to navigating LA wildfires, this incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model...
This AI incident involving a self-driving shuttle offers valuable insights into the importance of trustworthy AI, especially in operational...
A recent incident involving a robot killing a man at a Volkswagen plant in Germany serves as a grim reminder of the importance of responsibl...
This autonomous vehicle incident in California involving both Google and Delphi self-driving cars raises important questions about the need...
Recent incident reveals potential vulnerabilities in Apple's Face ID system, underscoring the importance of trustworthy AI and robust govern...
In 1983, Soviet officer Stanislav Petrov made a crucial decision that potentially averted a global nuclear disaster. This AI incident highli...
Investigate the tragic Boeing 737 Max 8 incident, a stark reminder of the importance of AI safety and addressing leaking abstractions. This...
Dive into this detailed analysis of an AI incident that impacted the global market in 2010, highlighting the importance of safe and secure A...
Exploring the Neural Net Tank urban legend sheds light on potential pitfalls in AI development. This AI incident maps to the Govern function...
This autonomous driving incident involving a Tesla vehicle highlights the need for trustworthy AI governance. The incident maps to the Gover...
Incident analysis of a driverless train crash on the Magenta line, highlighting the need for robust AI governance and harm prevention mechan...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see the database snapshots and citation guide for details.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.