Analyzing the COMPAS Recidivism Algorithm: A Case Study for Safe and Trustworthy AI
Delve into our comprehensive analysis of the controversial COMPAS recidivism algorithm, shedding light on its implications for harm prevention...
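Much of the COMPAS controversy turned on unequal error rates: among defendants who did not reoffend, one group was flagged as high risk far more often than another. As a minimal illustrative sketch (not the analysis itself, and using hypothetical data rather than real COMPAS records), group-wise false positive rates can be compared like this:

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """FPR = FP / (FP + TN): the share of non-reoffenders flagged as high risk."""
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else float("nan")

# Hypothetical labels: 1 = reoffended (y_true) / flagged high risk (y_pred).
y_true = np.array([0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # hypothetical groups

for g in np.unique(group):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```

Equal false positive rates is only one fairness criterion; it can provably conflict with calibration when base rates differ across groups, which is part of what made the COMPAS dispute so contentious.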
Explore how AI models can reinforce gender stereotypes through word embeddings. Learn about techniques to debias these models, ensuring trustworthy...
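One widely cited technique here is "hard debiasing" (Bolukbasi et al., 2016): estimate a gender direction from definitional word pairs, then project that component out of other word vectors. A minimal sketch on toy 4-dimensional vectors, assuming plain numpy arrays rather than real embeddings:

```python
import numpy as np

def debias(vec: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of `vec` along `direction`, then re-normalize."""
    direction = direction / np.linalg.norm(direction)
    projected = vec - np.dot(vec, direction) * direction
    return projected / np.linalg.norm(projected)

# Toy vectors standing in for learned embeddings (hypothetical values).
he  = np.array([0.8, 0.1, 0.3, 0.2])
she = np.array([0.1, 0.8, 0.3, 0.2])
gender_direction = he - she  # bias direction estimated from a definitional pair

programmer = np.array([0.6, 0.2, 0.5, 0.4])  # hypothetical occupation vector
print(debias(programmer, gender_direction))  # gender component projected out
```

In the original method the direction is estimated from many definitional pairs via PCA, and explicitly gendered word pairs are additionally equalized; the projection above is just the core step.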
A recent study highlights a concerning display of bias and inflexibility in an AI system designed for civility detection. This incident underscores...
Dive into this incident involving Google's AI, which demonstrated bias concerning homosexuality. As we strive for trustworthy and unbiased AI,...
This AI incident, which maps to the Govern function in HISPI Project Cerebellum's Trusted AI Model (TAIM), demonstrates Amazon's commitment...
Google has issued an apology following a controversial incident where its photo app inappropriately tagged images of African Americans with...
Investigate the instance where Google's AI responded to emails by expressing affection, revealing potential vulnerabilities in trustworthy AI...
Exploring a significant AI incident, this article highlights the hidden gender bias in Google Image Search. It underscores the importance of...
OSHA is currently investigating a bear spray accident at an Amazon warehouse that left a worker in critical condition. This incident underscores the importance...
This AI incident raises questions about the ad-targeting system within Google, highlighting the need for trustworthy and safe AI practices...
This Tesla accident, where the driver was watching a movie instead of the road, underscores the critical role of responsible AI. This incident...
This AI incident highlights the progress and challenges in AI development, demonstrating the need for robust guardrails for artificial intelligence...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
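As a hedged sketch of that snapshot-based workflow, the snippet below loads a local snapshot and pins an analysis to it; the file path and column names ("incident_id", "date", "title") are hypothetical placeholders, not the actual AIID schema.

```python
import pandas as pd

# Hypothetical local copy of a weekly AIID snapshot (path is a placeholder).
incidents = pd.read_csv("aiid_snapshot/incidents.csv", parse_dates=["date"])

# Working from a fixed snapshot keeps incident counts and IDs reproducible,
# even as the live database continues to grow.
recent = incidents[incidents["date"] >= "2023-01-01"]
print(recent[["incident_id", "title"]].head())
```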