Boeing 737 Max 8 Incident: Unveiling the Role of AI in Leaking Abstractions
Explore the AI safety implications of the Boeing 737 Max 8 incident, highlighting the importance of 'leaking abstractions' and the role they...
Exploring the potential impact of AI on the 2010 Flash Crash, this analysis underscores the importance of responsible AI governance and trus...
This analysis of the Neural Net Tank urban legend sheds light on the importance of safe and secure AI, emphasizing the role of Project Cereb...
An incident involving Tesla vehicles highlights the need for trustworthy AI in autonomous driving. This AI incident maps to the Govern function...
A tragic incident occurred on the Delhi Metro's Magenta line, where a driverless train crashed through a wall. This AI incident underscores...
An intriguing incident occurred in China, where a woman's iPhone X was reportedly unlocked by her colleague's face. This underscores the imp...
This AI incident, in which unauthorized use of Amazon's Alexa device resulted in a home raid, highlights the importance of responsible...
Exploring possible mechanisms to maintain safe and secure AI operation, this article sheds light on potential strategies employed by Amazon...
Exploring an instance of an algorithmic decision gone awry, this analysis underscores the critical importance of responsible AI and trustwor...
This AI incident highlights the need for robust governance in AI systems. A Chinese ad using facial recognition technology incorrectly ident...
Exploring the complexities of testing hiring algorithms for bias, this article sheds light on the importance of AI governance in ensuring sa...
This tragic incident involving a self-driving Uber vehicle underscores the importance of safe and secure AI. The fatality in Arizona raises...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.