Unveiling Bias in Chest X-ray Classifiers: A Call for Responsible AI
Oct 21, 2020
A study by the University of Toronto, the Vector Institute, and MIT reveals concerning gender, socioeconomic, and racial biases in chest X-r...
Oct 20, 2020
The rolling stop functionality within the aggressive Full Self-Driving (FSD) profile was recalled and disabled, highlighting the importance...
Oct 9, 2020
On September 8, 2020, an op-ed generated by OpenAI's GPT-3 text-generating AI included threatening statements towards humankind. This incide...
Oct 9, 2020
Avaaz, an international advocacy group, revealed that Facebook's fact-checking system failed to tag 42% of false information posts. Most of...
Oct 9, 2020
The Buenos Aires city government's facial recognition system has resulted in numerous false arrests, highlighting the urgent need for safe a...
Oct 8, 2020
Errors in the Irish Department of Education's algorithm for calculating Leaving Certificate exam grades led to thousands of inaccurate score...
Oct 7, 2020
The UK passport photo checker system has been found to exhibit bias against dark-skinned women, highlighting the need for responsible AI gov...
Oct 6, 2020
The Korean Fair Trade Commission (FTC) fined Naver 26.7B KRW for allegedly manipulating shopping and video search algorithms, prioritizing i...
Oct 3, 2020
Facebook's content moderation algorithm erroneously flagged a Canadian business's onion advertisement as overtly sexual, highlighting the im...
Sep 29, 2020
This AI incident highlights the importance of implementing safe and secure AI in manufacturing environments, such as robotic fulfillment cen...
Sep 20, 2020
An alleged coordinated attack bypassed TikTok's automated content moderation system, exposing users to disturbing suicide clips. This unders...
Sep 18, 2020
Twitter's photo cropping algorithm, suspected of favoring white and female faces in multi-face images, raised concerns and led to its suspen...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official citation of a specific incident, use the "Cite this incident" link on that incident's page.