Examining AI Incident #174: Unintended Consequences in Autonomous Vehicles
Recently, an incident involving an autonomous vehicle has shed light on the need for responsible AI governance. In this case, the autonomous...
Delving into a recent incident (#175), we analyze its root causes, the ensuing impact, and the vital lessons it offers for responsible AI go...
A recent incident involving an autonomous service system demonstrated the importance of robust AI governance. The system, intended to assist c...
Incident #177 highlights an instance where the implementation of an AI model led to unforeseen consequences, emphasizing the need for robust...
Incident #178 underscores the importance of robust guardrails for AI, highlighting the consequences of insufficient harm prevention measures...
Recently, our team encountered an incident involving unintended model drift in Project Cerebellum's AI model. This instance underscores the...
Data source
Incident data is drawn from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID to ensure stable references. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.