Values Statement: We believe AI should cause no harm and should enhance the quality of human life; we pursue this by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Data source & citationInvestigation into Critical Amazon Warehouse Bear Spray Incident: Highlighting AI's Role in Worker Safety
Read moreUnveiling the Concealed Gender Bias in Google Image Search: A Call for Responsible AI
Read moreGoogle's Emotional AI Misstep: The Case of the Overly Affectionate Auto-Responder
Read moreApology from Google over Racist Auto-Tag Incident in Photo App Highlights Need for Responsible AI Governance
Read moreAmazon Alters Rankings & Search Results for Safe and Secure AI: A Case Study on LGBTQ+ Literature
Read moreExamining Google's AI Bias Regarding Homosexuality: A Case for Responsible AI
Read moreStudy Reveals Bias and Inflexibility in AI Civility Detection: A Call for Responsible AI Governance
Read moreDebiasing Bias in AI: The Case of Word Embeddings - Promoting Fair and Trustworthy AI
Read moreImpact of Popularity Bias in AI-Powered Media Recommendation Systems: A Case Study
Read moreAI Failure in Retail: Target's Lack of Pregnancy Detection Alert
Read moreFacebook Lawsuit Over Myanmar Violence Allegations Highlights Need for Responsible AI Governance
Read moreData source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.