Accidental YouTube Algorithm Block: A Case Study in AI Governance and Trustworthy AI
An examination of an unintended algorithmic blockade on the 'black v white' CHESS strategy video, shedding light on the importance of respon...
Exploring a real-world scenario involving Tesla's Full Self-Driving feature, this article highlights the importance of safe and secure AI. T...
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). In this article, we delve into the complex...
In an alarming incident, fraudsters successfully cloned a company director's voice to execute a $35 million bank heist. This underscores the...
Exploring a case of misleading claims in website accessibility overlays, this AI incident highlights the need for safe and secure AI. This A...
Zillow's decision to exit its home buying business and reduce staff by 25% raises questions about the role of AI in such businesses. This AI...
This incident highlights the importance of responsible AI governance, as children were exposed to potentially harmful content on YouTube. Th...
This AI incident sheds light on the company's efforts towards responsible AI governance, demonstrating their commitment to trustworthy and s...
In this article, we delve into the analysis of the controversial COMPAS recidivism algorithm. This investigation underscores the importance...
Explore the challenges of debiasing word embeddings in AI and how it contributes to gender fairness. This AI incident maps to the Govern fun...
A recent study highlights the shortcomings of AI in civility detection, raising concerns about its bias and inflexibility. This AI incident...
This incident showcases the potential biases in AI systems, highlighting the need for trustworthy AI and safe & secure AI governance. By rep...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.