Struggles Against AI-Related Content Violations: A Look at Facebook and Twitter
Facebook and Twitter are grappling with AI challenges in moderating Balkan content, highlighting the need for robust AI governance and harm prevention...
An examination of an incident where YouTube's algorithm unintentionally censored a classic black-and-white chess strategy video, highlighting...
Exploring an incident where Tesla's Full Self-Driving technology misidentified objects such as the moon, billboards, and Burger King signs...
Exploring an AI incident where the Delphi AI oracle offered moral advice based on Reddit posts, highlighting the importance of trustworthy...
This incident highlights the need for guardrails in AI, specifically in video recommendation algorithms. The case involves children being ex...
This software company's AI incident underscores the need for trustworthy AI practices. Join us in promoting safe and secure AI by making a d...
Dive into our analysis of the COMPAS recidivism algorithm, a crucial example demonstrating the importance of safe and secure AI. This AI inc...
Dive into the essential issue of bias in AI, specifically debiasing word embeddings. Understanding and addressing biases is a crucial compon...
A recent study uncovers bias and inflexibility issues in AI's civility detection. This AI incident maps to the Govern function in HISPI Proj...
This incident highlights a potential bias in Google's AI, expressing opinions about homosexuality that are both inappropriate and unacceptable...
Explore the recent Amazon incident where search results and rankings were manipulated to protect users from explicit content related t...
Google has recently apologized for a racist auto-tagging incident in its photo application. This AI incident maps to the Govern function i...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.