Analyzing Twitter's Controversial Photo Crop Algorithm: Bias Towards White Faces and Women
This AI incident sheds light on the need for responsible AI governance in image processing algorithms. The algorithm, favored by Twitter, ha...
This article discusses an AI-driven vaccine allocation algorithm in California, which raises concerns about equitable distribution. The algo...
This AI incident involving Tesla's Autopilot system underscores the need for responsible AI governance. The debate between safety proponents...
This AI incident involving a South Korean chatbot underscores the need for trustworthy, safe, and secure AI practices. It also maps to the G...
This incident highlights the need for responsible AI governance. A surveillance group exposed a patent detailing an AI-powered system design...
An unfortunate incident occurred where an AI system mistakenly identified a teen as a banned troublemaker, leading to her denial of entry at...
This AI-driven facial recognition website demonstrates the power of technology in enhancing public safety but also raises privacy concerns....
Dive into an incident where an algorithm impacted health care, emphasizing the importance of responsible AI governance. This AI incident map...
This AI-related incident highlights the importance of responsible AI governance in employment decisions. Amazon's use of algorithms to manag...
This incident involving misleading YouTube videos targets children, highlighting the importance of safe and secure AI. These videos demonstr...
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The software company, under growing public...
Explore our analysis of the controversial COMPAS recidivism algorithm, demonstrating the importance of trustworthy AI. This incident maps to...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.