Stanislav Petrov's Pivotal Role in Preventing a Nuclear Disaster: A Case Study in AI Governance
Explore the critical incident of 1983, where the vigilance of Stanislav Petrov prevented a potential nuclear disaster. This AI incident maps...
Explore the Boeing 737 Max 8 disaster, an unfortunate example highlighting the dangers of leaky abstractions in AI safety. This inciden...
Exploring the role of an AI trading system in the 2010 'flash crash', highlighting the importance of trustworthy, safe, and secure AI. This...
Dive into the fascinating world of urban legends with our investigation of The Neural Net Tank. This case study highlights potential risks a...
This AI incident involving Tesla's autonomous vehicles highlights the importance of safe and secure AI, underlining the need for robust gove...
Incident report: A driverless Magenta Line train in Delhi crashed through a wall, underscoring the importance of safe and secure AI. This AI...
In an intriguing development, a woman in China claimed her colleague was able to unlock her iPhone X using his face. This incident underscor...
Exploring an alarming incident where a smart speaker was manipulated to falsely implicate a user, raising concerns about the safety and secu...
Explore a fascinating discovery on how Amazon may prevent the Alexa-enabled Echo device from activating during commercials, shedding light o...
This AI incident highlights the need for transparency and accountability in algorithmic decision-making, emphasizing the importance of trust...
This incident highlights the importance of responsible AI governance. A Chinese bus ad using face recognition technology mistakenly approved...
The challenge of eliminating bias from hiring algorithms is under scrutiny by auditors. This AI incident maps to the Govern function in NIST...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable referencing. For the official suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.