Values Statement: We believe AI should cause no harm and should enhance the quality of human life, a commitment we put into practice by proactively adopting our AI Governance framework.
AI Incidents
Lessons Learned from the First Day Mishap of a Self-Driving Shuttle: Embracing Responsible AI Governance
Responsible AI in Action: Analyzing Waze's Response to Los Angeles Wildfires
Exposing Chatbot Limitations: A Case Study in AI's Journey Towards Trustworthy Intelligence
Tesla on Autopilot Involved in Accident While Driver Was Watching a Movie: Highlighting the Need for Safe and Secure AI
Exploring Potential Risks in Google's Ad Targeting System: A Cautionary Tale for Responsible AI
Investigation of Critical Amazon Warehouse Worker's Bear Spray Incident Highlights Importance of Safe and Secure AI
Unveiling the Subtle Bias in Google Image Search: A Case Study for Responsible AI
Exploring Google's Emotional AI: The Case of Persistent 'I Love You'
Apology Issued by Google over Racist Auto-Tag Incident in Photo App - Emphasizing the Importance of Responsible AI
Amazon's AI-Powered Content Moderation Aims to Protect Users from Harmful Content - Project Cerebellum
Unauthorized Access: Voice Cloning Leads to $35 Million Bank Heist - Highlighting the Need for Trustworthy AI
Data source
Incident data is drawn from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID so that references remain stable over time. To cite a specific incident, use the official suggested citation provided by the "Cite this incident" link on that incident's page.