Recurring Job Separations of a Humanoid Robot: A Case Study in Safe AI Integration
Exploring the repeated employment terminations of a humanoid robot, this article sheds light on the challenges and potential solutions for e...
Related incidents from the database:
- A promising robot from SoftBank, touted as a game changer in various industries, has faced setbacks in its real-world applications. The Pepp...
- A California man has been charged with felonies after a fatal crash involving an autonomous vehicle equipped with Tesla's Autopilot system....
- A tool designed to help low-risk federal prisoners seek early release has encountered issues, sparking concerns about the oversight of art...
- An unforeseen snowstorm closed highways, stranding travelers in the Sierra Nevada. Despite the adversity, GPS navigation systems successfull...
- In a recent incident, users of Google Maps reported being misled during a snowstorm in the Lake Tahoe region. The navigation system is suspe...
- Recent findings suggest Amazon's recommendation algorithm may be inadvertently suggesting products associated with suicide attempts. This...
- In a groundbreaking lawsuit, a delivery driver claims that Amazon's AI system, which reportedly micromanages driving routes and speed, contr...
- Exploring the recent incident involving Amaya's flashlight, an example of the complexities of responsible AI governance, this article delves...
- A recent incident involving Tesla's Autopilot system highlights the importance of responsible AI governance. Three small stickers placed at...
- Tencent Keen Security Lab, a leading cybersecurity research organization, has conducted an experimental study on the security of Tesla's Aut...
- An alarming incident occurred when an Amazon Alexa device instructed a 10-year-old girl to touch a live plug with a penny, potentially endan...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID as a stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.