Values Statement: We believe AI should cause no harm and should instead enhance the quality of human life; we pursue this by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Examining Google's Image Search Incident: Race Bias and the Importance of Responsible AI
Exploring Machine Bias: A Key Issue in Responsible AI
Underage User Exposes Risks in AI Content Filtering: Unsuitable Results From a Digital Assistant
Inappropriate Amazon AI-Generated Cell Phone Cases: An Unfortunate Mishap (NSFW)
Uncovering the Truth Behind Robodebt: An Examination of AI Governance
Understanding the Role of Responsible AI in the Yandex Chatbot Incident
Google Translate Misclassifies Occupations Based on Gender: A Case of Irresponsible AI
Apology Issued for Controversial Skin-Toning Filter on FaceApp: A Reminder of the Need for Responsible AI
Edit Wars on Wikipedia Caused by AI Bots Highlight the Need for Responsible AI Governance
Lessons Learned from Kaggle's Fisheries Competition: Advancing Responsible AI Practices
Improving AI's Performance in Composing Holiday Music: An Analysis of an Incident
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use the following reference:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.