Misconfigured Reward Systems in AI: A Case Study for Safe and Secure AI
Exploring a real-world incident involving faulty reward functions, we discuss the importance of responsible AI governance. This AI incident...
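As a loose illustration of what a misconfigured reward can look like in practice, the toy sketch below shows an agent optimizing a proxy reward that pays only for forward progress, while the intended objective, which penalizes a hazard, never enters the optimization. Everything in the sketch (the corridor environment, names, and values) is hypothetical and is not drawn from the incident discussed in the article.

```python
# Toy illustration of a misconfigured reward: the proxy reward below rewards
# forward progress only, so an agent optimizing it ignores the hazard that the
# intended objective was supposed to penalize. The environment and all names
# are hypothetical; this is not code from the incident described above.

HAZARD, GOAL = 3, 5  # positions in a 1-D corridor starting at 0

def intended_reward(pos: int) -> float:
    """What the designers actually want: reach the goal, never enter the hazard."""
    if pos == HAZARD:
        return -100.0
    return 10.0 if pos == GOAL else 0.0

def proxy_reward(prev: int, pos: int) -> float:
    """Misconfigured stand-in: pays for forward progress, silent about the hazard."""
    return float(pos - prev)

def greedy_rollout():
    pos, total_proxy, total_intended = 0, 0.0, 0.0
    trajectory = [pos]
    while pos < GOAL:
        nxt = pos + 1                      # moving right always maximizes the proxy
        total_proxy += proxy_reward(pos, nxt)
        total_intended += intended_reward(nxt)
        pos = nxt
        trajectory.append(pos)
    return trajectory, total_proxy, total_intended

if __name__ == "__main__":
    path, proxy, intended = greedy_rollout()
    print("trajectory:", path)             # passes straight through the hazard at 3
    print("proxy reward:", proxy)          # looks healthy to the optimizer (+5)
    print("intended reward:", intended)    # disastrous under the real objective (-90)
```

Running the sketch yields a trajectory that walks straight through the hazard while the proxy score looks fine, which is the basic pattern behind faulty reward functions.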
A recent incident involving Uber's self-driving car running a red light underscores the importance of responsible AI governance. This AI inc...
This AI incident involving chatbot misuse in China highlights the need for safe and secure AI. The chatbots' unpatriotic messages underscore...
A recent incident involving a Tesla driver under the influence, who claimed Autopilot was in charge, underscores the need for trustworthy and s...
A security robot, designed to ensure safety, found itself in an unexpected predicament: immersed in a water fountain. This incident undersc...
A tragic incident occurred at the Manesar factory in Haryana, involving a lethal response by a robot. This underscores the importance of acc...
Self-driving cars encountered 'snow blindness', failing to recognize driving lanes covered in snow, illustrating the importance of robust AI...
Learn about an incident involving Google's self-driving car, demonstrating the importance of safe and secure AI governance. This incident ma...
An incident involving a Palestinian man's arrest due to a Facebook translation error underscores the critical role of trustworthy AI and safe and...
This incident highlights the persistence of racial bias in modern technology. Despite the fun and innovation brought by augmented reality ga...
This incident highlights the need for trustworthy AI in education. Teachers are planning widespread appeals against AI-based evaluation syst...
The town of Teaneck, NJ has prohibited the use of facial recognition technology by law enforcement agencies, highlighting concerns about b...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.