Autopilot Safety Debate: Tesla vs. Crash Victims - The Need for Responsible AI Governance
This AI incident involving Tesla's Autopilot system raises questions about safety and the need for trustworthy AI governance. While Tesla ma...
This AI incident involving a South Korean chatbot highlights the importance of responsible AI governance, emphasizing trustworthy data pract...

This AI incident highlights the importance of robust governance and trustworthy AI. The patent, if implemented, could pose significant harm...

An unfortunate incident at a roller rink highlights the need for trustworthy AI. This AI mistake, which incorrectly identified a teenager as...

This facial recognition website, capable of transforming anyone into a virtual law enforcement officer or potential stalker, underscores the...

Exploring an incident involving AI in healthcare, we discuss its implications and emphasize the importance of responsible AI governance for...

This incident highlights an AI system employed by Amazon to manage its Flex workers with minimal human intervention. It underscores the impo...

This incident highlights the potential risks of unregulated AI, particularly in content recommendation systems. The case demonstrates why it...

Exploring recent adjustments towards trustworthy AI by a software company under scrutiny, particularly focusing on the implementation of saf...

Delving into the COMPAS recidivism algorithm, we highlight the importance of responsible AI governance and harm prevention. This AI incident...

Explore the intricacies of debiasing word embeddings, a crucial aspect of responsible AI development. This study addresses gender bias and a...

A recent study has uncovered bias and inflexibility in the way AI systems detect civility. This AI incident maps to the Govern function in H...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.