Responsible AI Incident: Google Self-Driving Car Collision Due to Red Light Jumping
This AI incident, involving a Google self-driving car, underscores the importance of safe and secure AI. The collision occurred when another...
Other incidents catalogued in the database include:
A Facebook incident involving a Palestinian man's wrongful arrest due to a machine translation error underscores the need for trustworthy AI. Th...
Exploring the unfortunate presence of racial bias in the popular augmented reality game Pokémon Go. Understanding and addressing such issue...
A collective action by teachers highlights the importance of trustworthy AI and safe and secure evaluation systems. This incident maps to the...
The town of Teaneck, NJ, has enacted a ban on facial recognition technology usage by its law enforcement agencies. This move aims to uphold...
The recent lawsuit in France over perceived anti-Semitic results on Google Instant highlights the need for trustworthy and responsible AI. T...
An unfortunate incident involving live facial recognition technology tracking minors suspected of criminal activities raises serious concern...
An incident involving an AI-powered police robot has highlighted the need for trustworthy and safe AI. The robot was approached by a woman t...
Explore an AI model used for college admissions, focusing on its impact on students. This AI incident maps to the Govern function in HISPI P...
This AI incident highlights a concerning trend in kidney transplant allocation, where an algorithm may have inadvertently discriminated agai...
This entertaining AI incident serves as an example of the importance of responsible AI governance in real-world applications. It maps to the...
Unveiling racial, gender, and socioeconomic biases in chest X-ray classifiers is crucial for building trustworthy AI. This incident maps to...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; database snapshots and a citation guide are also provided.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.