Controversial AI Software Company Implementing Changes Towards Responsible AI Governance
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The company, under scrutiny due to recent...
Delving into the COMPAS recidivism algorithm, we highlight the importance of trustworthy AI and safe and secure AI practices. This AI incide...
Explore our analysis of gender bias in word embeddings, a cornerstone of AI model training. Understand how this incident relates to the 'Gov...
Recent findings highlight the ease with which Google's anti-internet troll AI platform can be deceived, underscoring the importance of respo...
This incident highlights the need for safe and secure AI, as Google's AI was found expressing prejudiced opinions about homosexuality. This...
This Amazon incident highlights the importance of responsible AI governance in content moderation, showcasing how its algorithm adjustments...
Google has apologized for a racist auto-tagging incident in its photo app, emphasizing the need for trustworthy and safe AI practices. Thi...
An unusual incident involving Google's email-replying AI has been reported, where the system persistently expressed affection by saying 'I l...
This AI incident highlights a concern about gender bias in Google Image Search, underscoring the need for trustworthy AI and safe and secure...
This warehouse incident underscores the need for trustworthy and responsible AI in work environments. OSHA is currently investigating the be...
This analysis delves into an incident involving Google's ad-targeting system, highlighting potential issues and their implications for trust...
This unfortunate incident involving a Tesla with Autopilot highlights the need for robust governance in AI systems. The driver's inattention...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see the database snapshots and citation guide for details.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
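For readers who want to reproduce a mapping against a pinned snapshot, the sketch below shows one way an incident record might be looked up in a locally downloaded AIID snapshot. It is illustrative only: the file name, the JSON layout (a flat list of incident objects), and the field names `incident_id`, `title`, and `date` are assumptions rather than the AIID's documented export format, so adjust them to match the snapshot you actually download.

```python
import json
from pathlib import Path

# Hypothetical local copy of a weekly AIID snapshot, exported as a flat
# JSON list of incident records. The real snapshot layout may differ.
SNAPSHOT_PATH = Path("aiid_snapshot.json")


def load_incidents(path: Path) -> list[dict]:
    """Load incident records from a local snapshot export."""
    with path.open(encoding="utf-8") as fh:
        return json.load(fh)


def find_incident(incidents: list[dict], incident_id: int) -> dict | None:
    """Return the record with the given incident_id, or None if absent."""
    return next(
        (record for record in incidents if record.get("incident_id") == incident_id),
        None,
    )


if __name__ == "__main__":
    incidents = load_incidents(SNAPSHOT_PATH)
    record = find_incident(incidents, 1)  # placeholder ID for illustration
    if record is not None:
        print(record.get("title"), record.get("date"))
```

Working from a dated snapshot file rather than the live database keeps references stable even if incident records are later edited upstream.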