Values Statement: We believe AI should cause no harm and should instead enhance the quality of human life, a commitment we put into practice by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Unraveling the Electric Elves Mishap: Exploring Responsible AI Governance
Complications in Robotic Surgery: Study Reveals 144 Deaths since 2000 - Highlighting the Need for Safe and Secure AI
Google Prompted to Adjust Autocomplete Function in Line with Responsible AI Guidelines in Japan
Faulty Feature Causes Temporary Halt in Sales of Google's Nest Smart Smoke Alarm - Impacting Trustworthy AI and Safe AI Governance
Addressing Alleged Bias in LinkedIn's Algorithm: A Responsible AI Challenge
AI Bias Incident: New Zealand Passport Robot Discriminates Against Applicant of Asian Descent
The Impact of AI Malfunctions on Businesses: A Cautionary Tale
The DAO Incident: A Case Study on the Impact of Unchecked AI in Blockchain Governance
Exploring the Incident with Tay (Bot): A Case Study in Responsible AI
Silicon Valley: A Case Study on the Importance of Safe and Secure AI - Mall Security Robot Incident
Tesla Autopilot Incident: Highlighting the Importance of Responsible AI
Data source
Incident data is sourced from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.