Values Statement: We believe AI should cause no harm and should enhance the quality of human life. We pursue this by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Exploring AI Limitations: A Challenging Turing Test Reveals Chatbot Shortcomings
LA Wildfires and Responsible AI: Examining Waze's Response - Mapping to Govern Function in Project Cerebellum
First-Day Accident of Self-Driving Shuttle: A Cautionary Tale for Responsible AI
Fatal Incident Involving Industrial Robot at Volkswagen Plant Highlights Need for Responsible AI Governance
California Self-Driving Car Incidents: A Test for Responsible AI
Apple's Face ID Hacked: A Lesson in Safe and Secure AI
Averting Global Nuclear Disaster: The Heroic Action of Stanislav Petrov in 1983 - A Case Study for Safe and Secure AI
Boeing's 737 Max 8 Incident: Understanding the Role of AI in Safety and Leaking Abstractions
Role of AI in 2010 Flash Crash: Understanding the UK Speed Trader Incident
Debunking the Myth: The Neural Net Tank Urban Legend and Its Impact on AI Governance
Examining the Tesla Incident: A Case Study on Safe and Secure AI
Read moreData source
Incident data is drawn from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID as stable references. For the official suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.