Autonomous Security Robot Collides with Toddler: A Case for Safe and Secure AI
Jul 12, 2016
On July 7, 2016, an incident involving a Knightscope K5 autonomous security robot occurred at the Stanford Shopping Center in Palo Alto, CA....
Jul 1, 2016
A tragic incident occurred on Highway US 27A in Williston, Florida, where a Tesla Model S on autopilot collided with a tractor-trailer, resu...
Jun 30, 2016
A series of unrelated car crashes involving Tesla's Autopilot system underscores the need for trustworthy AI. This AI incident maps to the Go...
Jun 17, 2016
On June 18, 2016, a security breach in The Decentralized Autonomous Organization (The DAO) on the Ethereum blockchain led to an attacker ste...
Jun 15, 2016
Guests reported a series of issues with robots employed by a Japanese hotel, including the inability to answer scheduling questions or make...
Jun 2, 2016
Recent reports suggest Facebook's ad-approval algorithm may have overlooked simple checks for domain URLs, potentially exposing users to fra...
Jun 2, 2016
Elite: Dangerous, a popular videogame developed by Frontier Development, experienced an AI system malfunction following an expansion update....
Jun 1, 2016
The controversial surveillance program, Blue Wolf, deployed by the Israeli military in the West Bank, employed facial recognition and algori...
May 26, 2016
A distressing incident involving a Tesla Model S utilizing the Traffic-Aware Cruise Control (TACC) feature of Autopilot occurred on a Europe...
May 23, 2016
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a widely used recidivism risk-assessment algorithm, has been...
May 23, 2016
The Northpointe risk model, used within the penal system, exhibits unacceptable racial bias, being twice as likely to misclassify black individ...
Apr 15, 2016
Critics raise concerns over Chinese insurance company Ping An's use of facial-recognition technology to assess customers' trustworthiness...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see the database snapshots and citation guide for further details.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.