Google Apologizes for Racist Auto-Tag Incident in Photo App
In an incident that raises concerns about AI governance, Google's photo app automatically tagged images of black people with offensive terms...
A new AI program uses facial analysis to predict criminal behavior, raising concerns about responsible AI governance and the potential for h...
Explore the intricacies of AI fairness in a unique, interactive courtroom setting. This game highlights the challenges and solutions in crea...
In 2016, the chatbot Tay, developed by Microsoft's research team, made headlines for all the wrong reasons. Designed to learn from and engag...
In this critical incident, a pedestrian lost their life after being struck by a self-driving Uber vehicle. The event serves as a stark remin...
Exploring the dangers of unforeseen consequences in complex AI systems, akin to Frankenstein's monster. Understanding and implementing respo...
An exploration into the tragic incident involving Uber's self-driving car, highlighting the AI's inability to comprehend jaywalkers. The eve...
In March 2018, an autonomous Uber vehicle struck and killed a pedestrian in Tempe, Arizona, highlighting the urgent need for robust safety m...
Recent research has uncovered a concerning incident involving an algorithm that has inadvertently affected the allocation of kidney transpla...
Spam filters, designed to protect inboxes from unsolicited emails, have been a staple of email services for decades. However, their efficien...
A recent incident involving the use of algorithms by the government has sparked debate over their implementation, safety, and transparency....
A recent AI-powered image recognition algorithm has been found to misidentify Jewish infants in strollers as adults. This incident underscor...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use the following reference:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID to provide a stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.