Values Statement: We believe AI should cause no harm and should enhance the quality of human life, which is why we proactively adopt our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
ProPublica Investigation Uncovers Machine Bias in Criminal Justice AI - A Case for Responsible AI Governance
Debiasing Word Embeddings: Redefining AI Gender Stereotypes - Towards Trustworthy AI
Google's Comment-Ranking System: A Potential Risk for Responsible AI Governance
Bias in Google's Sentiment Analysis API: A Reflection of Human Bias
Censorship of LGBTQ+ Content on Amazon Maps to the Govern Function in HISPI Project Cerebellum Trusted AI Model (TAIM): Preventing Harm and Ensuring Responsible AI
Google Photos AI Misfire: Recognition Flaws in Identifying Gorillas
Smart Reply Feature in Inbox by Gmail: An Analysis of AI Application and Impact on Responsible Email Communication
Examining Google Image Search's Alleged Gender Bias: A Responsible AI Issue
5 Pivotal AI Incidents in 2017 Highlighting the Need for Responsible AI Governance
Employment Decision Made by AI: Case Study on Accountability
Top 5 AI Incidents from 2017 Illustrating the Need for Trustworthy AI and Responsible Governance
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.