Incident #107: Unintended Data Leakage in Autonomous Vehicle System
This AI incident involved an unintended data leak from a self-driving car's system, highlighting the importance of safe and secure AI.
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID as stable references. For the official suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.