Incident Analysis: Facebook AI Moderator Mistakes Car Washes for Mass Shootings
This alarming AI incident highlights the need for robust safeguards in our AI systems. The Facebook moderator, tasked with filtering content...
This incident involving Alibaba's ethnicity detection algorithm highlights the importance of responsible AI and governance. Stay informed ab...
This alarming incident highlights the need for robust AI governance in high-stakes exams. The California Bar Exam flagged a third of applica...
This AI incident highlights the need for safe and secure AI governance, particularly on social media platforms. TikTok anorexia videos have...
This incident highlights the potential bias in AI-powered content moderation systems, raising concerns about responsible AI governance and h...
A disturbing incident occurred in a shopping mall where an autonomous robot fell off an escalator, causing injuries to several passengers. T...
Dive into a critical analysis of an admissions algorithm, highlighting its implications on fairness and bias in AI systems. This incident ma...
Explore the detrimental effects of inadequate brand safety technology on news funding, emphasizing the importance of trustworthy and safe AI...
Explore the clash between an Israeli farmer and an autonomous irrigation algorithm, highlighting the need for safe and secure AI. This incid...
An AI incident involving educational software has been reported, where students of color are being flagged to teachers due to the software's...
Investigating algorithmic curation practices on e-commerce platforms to combat vaccine misinformation, a critical aspect of safe and secure...
This incident underscores the disadvantages faced by BIPOC students in their academic journey due to an untrustworthy AI application. Such b...
Read moreData source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print of this paper is available on arXiv.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
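For programmatic work against a snapshot, the parsing step can be sketched as below. This is a minimal illustration only: the column names (`incident_id`, `title`) and the CSV layout are assumptions for the example, not the AIID's actual export schema, and the sample row is taken from the incident titled above.

```python
import csv
import io

# Hypothetical snapshot contents; a real AIID snapshot may use a
# different schema, so treat the column names here as placeholders.
snapshot_csv = """incident_id,title
1,Facebook AI Moderator Mistakes Car Washes for Mass Shootings
"""

def load_incidents(text):
    """Parse snapshot CSV text into a list of dicts keyed by column name."""
    return list(csv.DictReader(io.StringIO(text)))

incidents = load_incidents(snapshot_csv)
print(incidents[0]["title"])
```

Pinning analysis to a dated snapshot like this is what makes incident references stable: the live database changes weekly, but a cited snapshot does not.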