Autopilot Safety Debate: Tesla Claims Improved Safety, While Some Warn of Risks
An incident involving Tesla Autopilot raises questions about AI safety in self-driving cars. This AI incident maps to the Govern function in HI...
This AI incident, involving a chatbot in South Korea, highlights the need for trustworthy AI. The mishandling of user data underscores the s...
This AI incident raises concerns about the ethical use of artificial intelligence. The patent, belonging to Huawei, suggests development of...
An unfortunate incident occurred where a facial recognition system incorrectly identified a black teen at a skating rink, leading to her rem...
This facial recognition website, which allows anyone to perform searches based on a person's image, raises concerns about privacy and potent...
Investigate an instance where algorithmic decision-making impacted healthcare services. This AI incident maps to the Govern function in HISP...
This case study examines the use of algorithms in Amazon's Flex program, highlighting their role in decision-making processes with minimal h...
This incident sheds light on the accuracy claims of San Francisco gunshot sensors, revealing them as marketing tactics rather than objective...
A recent incident involving Facebook's AI misclassifying a video of black men as primates underscores the importance of trustworthy and safe...
This AI incident highlights the need for robust, trustworthy AI governance. Amazon's Rekognition system falsely matched 28 members of Congre...
The recent shutdown of the AI-powered 'Genderify' platform serves as a stark reminder of the importance of trustworthy, safe, and secure AI....
An AI incident involving Amazon's automated camera system caused false penalties for drivers. This AI incident maps to the Govern function in...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv. See the database snapshots and citation guide for more information.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
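For readers who pin their analysis to a particular weekly snapshot, the sketch below shows one way to look up an incident record locally. It assumes the snapshot has been exported to a CSV file; the file name and the column names (incident_id, title, date) are illustrative assumptions, not the AIID's actual schema, so adjust them to whatever export you use.

```python
# Minimal sketch (not official AIID tooling): look up one incident in a
# locally downloaded weekly snapshot exported to CSV.
# The file name and column names ("incident_id", "title", "date") are
# assumptions for illustration only.
import csv
from typing import Optional


def find_incident(snapshot_path: str, incident_id: int) -> Optional[dict]:
    """Return the first row whose incident_id matches, or None if absent."""
    with open(snapshot_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if int(row["incident_id"]) == incident_id:
                return row
    return None


if __name__ == "__main__":
    # Hypothetical snapshot file name and incident ID.
    incident = find_incident("aiid_snapshot_2021-07-12.csv", 4)
    if incident is not None:
        print(f'{incident["incident_id"]}: {incident["title"]} ({incident["date"]})')
    else:
        print("Incident not found in this snapshot.")
```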