AI Incident 132: Examining a Case of Potential Bias in Predictive Modeling
An AI incident involving potential bias in predictive modeling. This incident maps to the Govern function...
Exploring an instance of AI misbehavior that highlights the need for responsible AI governance. This AI incident maps to the Govern function...
A recent study underscores the challenges faced by personal voice assistants when interacting with Black voices, emphasizing the need for ro...
Exploring the recent incident where a Tesla vehicle failed to recognize red traffic lights due to its Autopilot system, highlighting the nee...
The incident involving the New York Police Department's AI-powered robot dog highlights the need for responsible AI governance and safe and...
This AI incident underscores the importance of trustworthy AI governance. Major universities are utilizing race as a 'high impact predictor'...
This article explores the controversy surrounding the creation of 'robo-debt' in French welfare services, highlighting its impact on respons...
An analysis of a discriminatory algorithm that led to thousands being falsely accused of fraud. This AI incident maps to the Gover...
This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM). Delving into its details, we highlight the...
This AI incident involved an unintended data leak from a self-driving car's system, highlighting the importance of safe and secure AI. This...
Explore the unforeseen outcomes in a recommendation system, highlighting the importance of safe and secure AI, trustworthy AI, and responsib...
A lawsuit has been filed against Houston schools over the use of AI in teacher evaluations, raising concerns about trustworthy and safe AI....
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see the database snapshots and citation guide for details.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.