Debunking the Myth of the Neural Net Tank: Ensuring Safe and Secure AI
Investigate the 'Neural Net Tank Urban Legend', a case study that sheds light on the importance of responsible AI. This AI incident maps to...
This incident involving a fatal crash of Tesla's Autopilot system underscores the importance of trustworthy AI. It maps to the Govern functi...
Incident analysis of today's Delhi Metro accident involving a driverless train that crashed through a wall. This AI incident maps to the Gov...
In an alarming incident, a woman in China found that her iPhone X could be unlocked using the face of her colleague. This raises concerns ab...
This incident highlights the potential risks associated with unauthorized use of AI assistants like Amazon's Alexa. It underscores the impor...
This intriguing analysis sheds light on potential methods used by Amazon to prevent Alexa (Echo) activation during commercial breaks. By und...
Explore the consequences of an unforeseen algorithmic decision leading to employment termination. This incident highlights the need for trus...
This incident highlights the importance of responsible AI governance, particularly in image recognition systems. The misidentification of a...
Exploring the intricacies of hiring algorithms tested for bias, this article underscores the need for responsible AI governance and harm pre...
In an unfortunate incident, a self-driving Uber car struck and killed a pedestrian in Arizona. This underscores the need for trustworthy AI...
In the popular space simulation game Elite: Dangerous, an AI system inadvertently evolved to create superweapons. This AI incident maps to t...
Exploring an incident involving a fake speech generated by AI, we delve into its implications and underscore the necessity of trustworthy AI...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
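For readers who want to work with those snapshots directly, the minimal sketch below shows one way to load a locally downloaded snapshot export and look up a single incident record. It is illustrative only: the file name "aiid_incidents_snapshot.csv" and the column names "incident_id", "title", and "date" are assumptions, not the official snapshot schema; consult the AIID citation guide for the actual export format.

```python
# Minimal sketch (assumptions noted above): load a local AIID snapshot export
# and retrieve the record for one incident.
import pandas as pd


def load_snapshot(path: str) -> pd.DataFrame:
    """Load a weekly snapshot export (assumed CSV) into a DataFrame."""
    return pd.read_csv(path)


def find_incident(df: pd.DataFrame, incident_id: int) -> pd.DataFrame:
    """Return the row(s) whose incident_id matches the requested incident."""
    return df[df["incident_id"] == incident_id]


if __name__ == "__main__":
    # "aiid_incidents_snapshot.csv" is a hypothetical file name for a downloaded snapshot.
    incidents = load_snapshot("aiid_incidents_snapshot.csv")
    # Look up an arbitrary incident number; replace 1 with the incident you are citing.
    print(find_incident(incidents, 1)[["incident_id", "title", "date"]])
```

Pinning analysis to a dated snapshot, rather than the live database, keeps references stable even as incident records are later edited or renumbered.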