Values Statement: We believe AI should do no harm and should enhance the quality of human life; we pursue this by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Experimental Analysis of Tesla Autopilot Security by Tencent Keen Security Lab - Mapping to Govern Function in HISPI Project Cerebellum Trusted AI Model
Exploring Amaya's Flashlight Incident: A Case Study in AI Governance
Unveiling the Roots of Racism in AI Systems - Harm Prevention through Project Cerebellum
Elite: Dangerous AI Superweapon Incident Demonstrates Need for Responsible AI Governance
Analyzing a Fake Speech Generated by AI: An Important Lesson in Trustworthy AI
The Impact of AI on Justice Systems: Keeping an Eye on Potential Bias
Exploring the Ethical Implications of Feeding Violent Content to an AI - A Case Study on MIT's AI Experiment
National Residency Matching Program: A Crucial Component in the Govern Function of Trustworthy AI
Revising Google's Autocomplete Function: A Step Towards Responsible AI Governance
Examining LinkedIn's Alleged Gender Bias: A Case Study in AI Governance
Recall of Nest Protect Smoke CO Alarms Highlights Importance of Safe and Secure AI in Home Devices
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.