Robodebt: Examining the Impact of Automation on Human Services and Responsible AI Governance
This AI incident highlights the potential consequences of over-reliance on automated systems in human services, such as the controversial Robodebt scheme in Australia.
More AI incidents
Explore the Yandex chatbot incident, a valuable lesson in trustworthy AI. This AI incident maps to the Govern function in HISPI Project Cere...
This AI incident, occurring on Google Translate, showcases the potential harm that can result from biased AI models. Female historians and m...
FaceApp has issued an apology for a controversial filter that lightened users' skin tones, shedding light on the importance of safe and secu...
In an instance demonstrating the importance of trustworthy AI, AI bots were employed to streamline Wikipedia edits. However, they became emb...
Explore insights gained from the Kaggle fisheries competition, demonstrating the importance of responsible and trustworthy AI for sustainabl...
A recent incident showcases the limitations of current AI capabilities when it comes to creating Christmas carols. This AI incident maps to...
This incident involving Google Photos correcting a ski photo highlights the need for safe and secure AI. It maps to the Govern function in H...
An AI-powered customer service bot was deployed in a retail store to assist customers; however, the bot's behaviors were perceived as intimi...
Exploring a real-world example of a misconfigured reward function leading to unintended outcomes, this article underscores the importance of...
An Uber self-driving car was reported to have run a red light, raising concerns about AI governance and the need for trustworthy AI. This AI...
This incident highlights the need for responsible AI governance, especially in sensitive areas like national security. The chatbots' unpatri...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print of the paper is available on arXiv, along with database snapshots and a citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.