Controversial Job Screening Service Pauses Facial Analysis of Applicants Following AI Incident
The controversial job screening service, accused of bias, has halted facial analysis of applicants following an incident that highlights the...
This legal dispute highlights the importance of responsible AI governance, particularly in education. The case concerns an AI system used fo...
This Tesla Autopilot incident demonstrates the importance of safe and secure AI, as the system mistakenly identified red letters on a flag a...
This AI incident involving the NYPD's robot dog underscores the importance of trustworthy, safe, and secure AI governance. The public backla...
Exploring the use of race as a high-impact predictor in AI models at major universities raises concerns about AI fairness and transparency....
This article highlights an alarming trend of 'robo-debt' creation in French welfare services, emphasizing the need for trustworthy and safe...
Learn about a concerning AI incident involving a discriminatory algorithm, which wrongly accused thousands of families of fraud. This AI inc...
Recent research highlights challenges faced by personal voice assistants in recognizing Black voices. This AI incident maps to the Govern fu...
Exploring the issue of algorithmic bias, this article delves into a case study of Twitter's photo crop algorithm that appears to favor white...
This incident highlights the importance of responsible AI governance. The California algorithm designed to distribute vaccines equity-based...
Exploring the debate over Tesla's Autopilot feature, which some argue makes cars safer while others claim it poses a risk. This AI incident...
This AI incident underscores the necessity for trustworthy AI practices, especially in data management. The case demonstrates potential risk...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.