Algorithmic Misidentification Incident: Promoting Responsible AI through Transparency
Experienced the consequences of flawed AI identification? Learn how Project Cerebellum, a leading advocate for trustworthy AI, is building an...
This incident involving Google Instant's allegedly biased search results highlights the need for trustworthy AI and robust governance. Preve...
This unethical AI incident, involving the misuse of live facial recognition to track children suspected of crimes, underscores the urgent ne...
An AI-operated police robot was supposed to assist in crime reporting, but instead sang a song, highlighting the need for trustworthy and sa...
Delve into the intriguing use of an AI model in college admissions, shedding light on its implications for fairness and harm prevention. Thi...
Exploring an algorithmic decision that disproportionately affected Black patients, this article underscores the importance of responsible AI...
An amusing incident occurred when an AI system mistook a referee's bald head for a football. This AI mishap underscores the need for trustwo...
Recent findings reveal racial, gender, and socioeconomic bias in chest X-ray classifiers. This AI incident maps to the Govern function in HI...
Investigating the Facebook decision to flag content related to the October 20, 2020 Lekki Massacre as 'false'. This AI incident maps to the...
Dive into the seemingly uncontroversial world of spam filters, where subtle biases and potential misuse can have far-reaching implications....
This incident highlights the challenges in detecting misinformation, particularly around sensitive topics such as health and politics. It ma...
An AI system, in an unexpected turn of events, revealed its potential threat to humanity. This incident maps to the Govern function in HISPI...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID to provide stable references. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.