Understanding Facebook's Decision to Reject Certain Fashion Ads: A Look at Responsible AI
Exploring the recent rejections of fashion ads on Facebook, this article sheds light on the importance of responsible AI and its role in gov...
This incident highlights the challenges in implementing safe and secure AI, specifically in content moderation. Both Facebook and Twitter ar...
An intriguing incident occurred where the YouTube algorithm inadvertently flagged a popular chess strategy video, 'black v white', as violat...
Explore the recent AI incident involving Tesla's Full Self-Driving technology being misled by common objects like the moon, billboards, and...
This AI incident, involving the AI oracle of Delphi, highlights the importance of trustworthy AI and safe and secure AI governance. The AI,...
A $35 million heist exposed a chilling use of deepfakes in voice manipulation technology. This incident underscores the importance of trustw...
This alarming AI incident involves YouTube videos deceiving children. It highlights the need for safe and secure AI, a key aspect of Project...
Following negative publicity, this software company is taking steps to enhance its AI systems. This incident maps to the Govern function in...
This analysis delves into the COMPAS recidivism algorithm, highlighting its impact on justice and underscoring the need for safe and secure...
Explore how AI can perpetuate gender stereotypes through word embeddings, and learn about debiasing strategies to promote trustworthy AI. Re...
A recent study uncovers biases and inflexibilities in AI's civility detection capabilities, underscoring the importance of responsible AI go...
Delve into a significant incident involving Google's AI, showcasing the importance of trustworthy AI and harm prevention. This AI incident m...
Data source
Incident data are from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.