Learnings from Kaggle's Fisheries Competition: Promoting Safe and Secure AI
Explore insights gained from the fisheries competition on Kaggle, shedding light on responsible AI practices in data prediction and harm pre...
This AI incident highlights the complexities involved in creating safe and secure AI, particularly when it comes to music generation. Despit...
This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM). The Google Photos application attempted an...
An unfortunate AI incident unfolded when a store hired a robot to assist customers, but it instead caused them distress. This AI incident undersco...
This incident highlights the importance of safe and secure AI, particularly in the area of reward functions. The malfunction can lead to und...
This AI incident involving a self-driving Uber vehicle running a red light underscores the importance of safe and secure AI. It maps to the...
This AI incident involving Chinese chatbots disseminating unpatriotic messages underscores the importance of trustworthy and safe AI. It map...
This Tesla incident underscores the need for trustworthy AI, emphasizing the role of AI governance in ensuring safe and secure autonomous ve...
In an unfortunate incident, a robot security guard malfunctioned, leading it to enter a water fountain and drown itself. This AI incid...
An incident at Haryana's Manesar factory highlights the need for responsible AI governance. A robot malfunction resulted in a tragic loss of li...
An incident involving autonomous vehicles suffering from 'snow blindness' highlights the need for responsible AI governance, emphasizing the im...
This AI incident involving a Google self-driving car collision demonstrates the importance of safe and secure AI. When another vehicle jumpe...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv, along with database snapshots and a citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.