Values Statement: We believe AI should cause no harm and should enhance the quality of human life; we pursue this by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Facebook's AI Mishap Leads to Wrongful Arrest: An Incident Highlighting the Need for Responsible AI
Exploring Racial Bias in Augmented Reality: Pokémon GO Incident
Teachers Plan Widespread Appeals of Perceived Bias in AI-Based Evaluation Systems - Harm Prevention and Responsible AI Governance
Apple Card: A Case Study on Gender Bias in AI-Powered Fintech — Mapping the Govern Function
Government Grading: Assessing Accountability in Algorithm Usage
Bias in AI: UK Passport Photo Checker Shows Preference Against Dark-Skinned Women - A Case for Responsible AI
AI Incident: Jewish Baby Stroller Misclassification - Importance of Responsible AI
YouTube's Role in Radicalization: The Christchurch Shooter Incident - Highlighting the Importance of Safe and Secure AI
Apology for Exclusion in COVID-19 Vaccine Allocation Plan by Stanford University: A Reminder of AI Governance and Harm Prevention
Court Ruling on 'Discriminatory' Deliveroo Algorithm Highlights Importance of Responsible AI Governance
Lawsuit Over Teacher Evaluation System in Houston Schools Highlights Need for Responsible AI Governance
Read moreData source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID so that references remain stable as the database is updated. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
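To make the snapshot workflow concrete, here is a minimal sketch of looking up an incident in a locally downloaded weekly snapshot before citing it. The directory name, the incidents.csv file, and the column names (incident_id, title, date) are assumptions for illustration only; consult the snapshots & citation guide linked above for the actual snapshot layout.

```python
# Minimal sketch: pinning a weekly AIID snapshot and looking up an incident
# before citing it. File names, paths, and column names below are assumptions
# for illustration; the real snapshot layout may differ.
from pathlib import Path

import pandas as pd

SNAPSHOT_DIR = Path("aiid_snapshot_2024-01-01")   # hypothetical local snapshot directory
INCIDENTS_CSV = SNAPSHOT_DIR / "incidents.csv"    # hypothetical file name


def load_incidents(csv_path: Path) -> pd.DataFrame:
    """Load the incidents table from a locally stored snapshot."""
    df = pd.read_csv(csv_path)
    # Keep only the fields we reference, so downstream citations stay stable.
    return df[["incident_id", "title", "date"]]


if __name__ == "__main__":
    incidents = load_incidents(INCIDENTS_CSV)
    # Look up a single incident by its numeric ID (placeholder value shown here),
    # then cite it using the "Cite this incident" link on its AIID page.
    print(incidents.loc[incidents["incident_id"] == 1])
```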