Unsafe Reward Functions in AI Systems: An Overview
Explore a recent incident involving faulty reward functions in an AI system, shedding light on the importance of safe and secure AI governan...
A self-driving Uber car ran a red light, highlighting the need for effective governance of autonomous vehicles and demonstrating how this AI...
An incident involving Chinese chatbots broadcasting unpatriotic messages highlights the importance of trustworthy AI governance. This AI inc...
A Tesla driver's unfortunate encounter with law enforcement serves as a stark reminder of the need for safe and secure AI. The driver claime...
In a striking instance demonstrating the complexities of AI governance, a security robot inexplicably 'drowned itself' in a water fountain....
A recent incident in Haryana's Manesar factory highlights the potential dangers of improper AI governance. A robot, presumably operating und...
A recent incident involving self-driving cars struggling on snowy roads underscores the importance of safe and secure AI. This AI incident map...
An incident involving a Google self-driving car resulted in a collision. The accident occurred when another vehicle jumped a red light, high...
Facebook's mistranslation resulted in a Palestinian man's arrest, underscoring the crucial role of trustworthy AI and robust governa...
The popular augmented reality game, Pokemon Go, highlights the persistent issue of bias in artificial intelligence. By shedding light on thi...
This incident highlights the potential harm that biased or unfair AI evaluations can cause in educational settings, emphasizing the need for...
Teaneck, New Jersey has prohibited the use of facial recognition technology by its law enforcement, highlighting concerns about potential bi...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; database snapshots and a citation guide are also provided.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.