Values Statement: We believe AI should cause no harm and should instead enhance the quality of human life; we put this belief into practice by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Data source & citationRobot Employee Causes Customer Fear at Retail Store: An Example of the Need for Responsible AI
Read moreUnderstanding & Addressing Faulty Reward Functions in AI Systems for Trustworthy AI - Project Cerebellum
Read moreSelf-Driving Uber Vehicle Violates Traffic Law: A Case for Responsible AI Governance
Read moreUnpatriotic Messages from Chatbots Prompt Action in China: A Case Study on AI Governance
Read moreAutopilot Malfunction Incident Raises Concerns for AI Governance
Read moreAI Incident: AI-Powered Robot Security Guard Malfunctions Near Water Source
Read moreDeadly AI Incident at Manesar Factory: Emphasizing Need for Safe and Secure AI
Read morePreventing AI Failures: The Case of 'Snow Blindness' in Self-Driving Cars - Promoting Safe & Trustworthy AI
Read moreResponsible AI Incident: Google Self-Driving Car Involved in Collision After Other Vehicle Jumped Red Light - Safe and Secure AI Development
Read moreWrong Facebook Translation Leads to Unwarranted Arrest: A Case for Responsible AI
Read moreAugmented Reality in Pokemon Go Highlights Persisting Racial Bias
Read moreData source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
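For convenience, the same reference expressed as a BibTeX entry; the entry key mcgregor2021aiid is our own illustrative label, and all field values are taken from the citation above (check the citation guide linked below for the authoritative form):

@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}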
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.