Analyzing the COMPAS Recidivism Algorithm: A Case Study in Safe and Secure AI
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). Gain insights into our analysis of the con...
Explore the importance of debiasing word embeddings to promote fairness and gender equality in AI models. This AI incident maps to the Gover...
A recent study highlights potential biases and inflexibilities in the civility detection capabilities of AI systems. This AI incident maps t...
This AI incident sheds light on the need for safe and secure AI, emphasizing the importance of responsible AI governance. The incident maps...
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). Amazon's decision to censor search results...
A recent incident involving a racist auto-tag in Google's photo app has raised concerns about bias in AI systems. This AI incident maps to t...
An incident involving Google's email-replying AI demonstrates the need for robust AI governance and trustworthy AI. This AI incident maps to...
Explore the hidden gender bias found in Google Image Search, a clear example of the need for responsible AI governance and safe & secure AI...
An OSHA investigation is underway following a bear spray accident at an Amazon warehouse, which left a worker critically injured. This incid...
Explore the implications of an incident involving Google's ad-targeting system, highlighting the need for safe and secure AI. This AI incide...
In this alarming incident, a Tesla vehicle equipped with Autopilot crashed into a police cruiser. The driver admitted to being distracted wh...
The latest Turing Test emphasizes the need for robust, responsible AI governance. This AI incident highlights chatbots' limitations, undersc...
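Each of the incidents listed above is mapped to the Govern function of the HISPI Project Cerebellum Trusted AI Model (TAIM). Purely as an illustration of how such a mapping could be kept as structured data, the minimal sketch below records incident titles against their TAIM function; the field names are our own illustrative assumptions, not an official TAIM or AIID schema.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names are assumptions, not an official
# TAIM or AIID schema. "Govern" is the TAIM function named on this page.
@dataclass(frozen=True)
class IncidentMapping:
    title: str           # incident title as listed on this page
    taim_function: str   # TAIM function the incident is mapped to

MAPPINGS = [
    IncidentMapping(
        title="Analyzing the COMPAS Recidivism Algorithm: A Case Study in Safe and Secure AI",
        taim_function="Govern",
    ),
    IncidentMapping(
        title="Hidden gender bias in Google Image Search",
        taim_function="Govern",
    ),
]

# Group incident titles by TAIM function, e.g. to report coverage per function.
by_function: dict[str, list[str]] = {}
for m in MAPPINGS:
    by_function.setdefault(m.taim_function, []).append(m.title)

print(by_function)
```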
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
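For readers who want to work with the underlying data directly, the sketch below shows one way a downloaded AIID snapshot export could be loaded and queried by incident ID. The file name and column names are illustrative assumptions only; check the snapshot documentation linked above for the actual format.

```python
import pandas as pd

# Minimal sketch, assuming a locally downloaded AIID snapshot exported as CSV.
# The file name and column names below are illustrative assumptions, not the
# official snapshot schema; verify them against the snapshot you download.
SNAPSHOT_PATH = "aiid_snapshot_incidents.csv"  # hypothetical local file

incidents = pd.read_csv(SNAPSHOT_PATH)

def get_incident(incident_id: int) -> pd.DataFrame:
    """Return the row(s) for a single incident from the pinned snapshot."""
    return incidents[incidents["incident_id"] == incident_id]

# Example: look up one incident by its AIID identifier (the value is arbitrary here).
print(get_incident(1).head())
```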