AI Incident 115: Understanding the Implications and Preventing Harm
Delve into AI Incident #115, its impact on trustworthy AI practices, and the role of Project Cerebellum's Govern function in maintaining saf...
Delve into the resolution of Incident #109, a critical moment in our journey towards safe and secure AI. This AI incident maps to the Govern...
Dive into the analysis of AI Incident #104, shedding light on its implications for trustworthy and safe AI systems. This incident maps to th...
Exploring the unforeseen outcomes in a recent autonomous vehicle incident, highlighting the need for trustworthy AI and robust governance. T...
This AI incident provides an insightful examination of a trustworthiness lapse, emphasizing the importance of robust safeguards and governan...
Unravel the consequences of an unintended bias incident in an autonomous vehicle application, highlighting the need for trustworthy AI and r...
This AI incident highlights an unintended bias in a recommendation system, showcasing the importance of responsible AI and safe & secure AI...
This AI incident involved an autonomous vehicle exhibiting unexpected behavior, demonstrating the need for safe and secure AI. The incident...
Incident #113 highlights an instance where an autonomous vehicle failed to detect a pedestrian, resulting in an accident. This underscores t...
Delve into the details of Incident #112, a valuable learning opportunity that maps to the Govern function in HISPI Project Cerebellum's Trus...
Delve into the analysis of Incident #175, shedding light on its implications for AI governance. This AI incident maps to the Govern function...
Delve into the analysis of Incident #110, a valuable learning opportunity for safe and secure AI practices. This AI incident maps to the Gov...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.