Lessons from Kaggle's Fisheries Competition: AI Harm Prevention and Governance
Explore the insights gained from the Kaggle fisheries competition, which demonstrate the importance of responsible AI governance for safe and secure AI.
Explore the challenges faced by AI in creating Christmas carols, a task demonstrating the need for safe and secure AI. This AI incident maps...
This AI incident involving Google Photos demonstrates the importance of trustworthy AI. The AI misinterpreted a ski photo, highlighting the...
This incident involving a robot employed to assist customers illustrates the importance of safe and secure AI operation. The robot's actions...
Incident analysis reveals the impact of faulty reward functions in AI systems, underlining the importance of trustworthy AI governance. This...
Incident: Uber's self-driving car running a red light underscores the critical importance of trustworthy AI and effective AI governance. Thi...
This incident involving unpatriotic messages sent by AI chatbots in China highlights the need for responsible and trustworthy AI. It maps to...
A recent incident involving a Tesla driver under the influence raises concerns about the reliability and safety of Autopilot. This AI incide...
An incident involving a security robot occurred when the autonomous machine malfunctioned near a water fountain, causing it to enter the wat...
Explore this incident involving an AI system malfunction that led to a tragic accident in a Haryana factory. This AI incident maps to the Go...
This incident involving self-driving cars demonstrates the importance of robust and adaptable AI systems. The issue, known as 'snow blindnes...
A self-driving car developed by Google was involved in a collision after another vehicle jumped a red light. This AI incident underscores th...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
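For readers who want to reproduce an analysis against a pinned snapshot, the sketch below shows one way to load a locally downloaded export and record which snapshot was used. It is illustrative only, not the AIID's official tooling: the file name and the field names (incident_id, title) are assumptions, so adjust them to match the actual export you download.

import json
from pathlib import Path

# Hypothetical local export of the incidents collection from a weekly AIID snapshot.
SNAPSHOT_PATH = Path("aiid_incidents_snapshot.json")


def load_incidents(path: Path) -> list[dict]:
    """Load incident records from a local snapshot export (assumed JSON array)."""
    with path.open(encoding="utf-8") as f:
        return json.load(f)


def snapshot_note(path: Path, n_incidents: int) -> str:
    """Build a short provenance note recording which snapshot was analyzed."""
    return f"AI Incident Database snapshot {path.name} ({n_incidents} incidents)."


if __name__ == "__main__":
    incidents = load_incidents(SNAPSHOT_PATH)
    print(snapshot_note(SNAPSHOT_PATH, len(incidents)))
    # Quick sanity check: list the first few incidents by the assumed fields.
    for record in incidents[:5]:
        print(record.get("incident_id"), record.get("title"))

Keeping the snapshot file name in the provenance note makes it easy to cite the exact data vintage alongside the database citation above.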