Lion Air Crash: Analyzing the AI System Failure Through a Responsible Lens (Indonesia)
The Lion Air crash of 2018 highlighted an AI system failure, with black box data revealing pilots' struggle to regain control. This AI incid...
Tags: Evidence-based · Transparent · For governance
This incident sheds light on the complexities of AI governance, highlighting its potential role in public safety scenarios. The report and a...
This study explores adverse events in robotic surgery, shedding light on the need for trustworthy AI. By delving into 14 years of FDA data,...
Explore ten eye-opening instances where AI malfunctioned, demonstrating the importance of trustworthy AI. Learn how you can help establish g...
This article examines ten AI incidents, each highlighting potential risks in unregulated AI development. These examples map to the Govern fu...
Explore these 10 instances where AI systems malfunctioned or caused harm, emphasizing the need for safe and secure AI. This AI incident maps...
In this article, we delve deeper into the released NYC value-added data and discuss its implications for artificial intelligence. This incide...
Experienced an unexpected shift in operating hours? This AI incident highlights the need for robust AI governance, ensuring safe and secure A...
This incident involving a driverless car near-miss with a Google car underscores the importance of responsible AI governance in road safety....
This historic incident underscores the importance of responsible AI and safe and secure systems. The false alarm, triggered by an AI system,...
In the week following the iPhone X release, hackers claimed they managed to bypass Face ID, emphasizing the significance of secure and trust...
On May 6, 2010, a rapid sequence of events triggered what was then the largest intraday point drop in the history of the Dow Jones Industrial Average, the infamous 'Flash Crash'. This AI...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv, and database snapshots and a citation guide are also provided.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
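For use with reference managers, the database-wide citation above can be expressed as a BibTeX entry. The following is a minimal sketch; the entry key and field breakdown are our own rendering of the citation text, not an official format supplied by the AIID:

    @inproceedings{mcgregor2021aiid,
      author    = {McGregor, S.},
      title     = {Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database},
      booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
      year      = {2021},
      note      = {Virtual Conference}
    }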