Understanding AI's Impact on Healthcare: An Incident Analysis - Project Cerebellum
Explore the unforeseen consequences of an algorithm in healthcare settings. Learn about responsible AI governance, harm prevention, and how...
Related incident analyses
- Exploring the use of algorithms in Amazon's Flex worker management system, this article sheds light on the importance of human oversight in...
- New courtroom revelations indicate the accuracy of San Francisco gunshot sensors was overstated, highlighting the importance of trustworthy...
- Facebook apologizes after its AI system misclassified video of Black men as 'primates'. This AI incident underscores the importance of trust...
- A recent incident involving Amazon's facial recognition technology falsely matching 28 members of Congress with mugshots demonstrates the ur...
- The shutdown of the AI-powered 'Genderify' platform serves as a stark reminder of the importance of responsible and unbiased AI. This incide...
- This incident underscores the importance of safe and secure AI, highlighting the need for robust governance mechanisms in AI systems. In thi...
- This analysis delves into claims of racial bias in TikTok's algorithm, emphasizing the importance of trustworthy and safe AI. Such incidents...
- This article explores an unfortunate incident highlighting AI's potential for bias against Islam. It maps to the Govern function in HISPI Pr...
- This incident involving the mass termination of 150 employees by Xsolla, driven by big data and AI analysis, highlights the urgent need for...
- An uncontrolled GPT-3 bot interaction on Reddit resulted in unintended consequences, emphasizing the significance of trustworthy AI governan...
- Exploring allegations about the deployment of an autonomous military robot in Libya, this article sheds light on potential breaches of respo...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; database snapshots and a citation guide are also available.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.