Exploring Machine Bias: A Crucial Aspect of Trustworthy AI
Dive into the critical issue of machine bias, a common challenge in AI systems. Understanding and addressing this concern is essential for t...
A recent incident involving a child accessing pornographic content through a digital assistant underscores the need for trustworthy and safe...
The recent Amazon A.I. blunder, where it produced NSFW cell phone cases, underscores the importance of trustworthy and safe AI. This AI inci...
This investigation sheds light on the government's efforts to conceal documents pertaining to the Robodebt system, potentially revealing wha...
Explore the recent incident involving the Yandex chatbot, a valuable lesson on safe and secure AI. This AI incident maps to the Govern funct...
In an unfortunate incident demonstrating the need for safe and secure AI governance, Google Translate misgendered female historians and male...
FaceApp has acknowledged the concern surrounding its 'racist' skin-lightening filter, emphasizing the need for safe and secure AI developmen...
An AI experiment to improve Wikipedia was met with unforeseen challenges when users engaged in petty edit wars. This AI incident maps to the...
This analysis of Kaggle's fisheries competition reveals insights into data preprocessing, machine learning models, and potential biases. It...
Explore the difficulties faced by current AI models when creating traditional music, such as Christmas carols. Understanding these challenge...
Incident analysis of a Google Photos mishap involving ski photos, emphasizing the importance of trustworthy AI for image recognition tasks....
An intriguing incident unfolded when a store hired an AI robot to assist customers. Unfortunately, the bot's behavior inadvertently frighten...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the suggested citation of a specific incident, use the "Cite this incident" link on each incident page.