South Korean Chatbot Incident Highlights Importance of Responsible AI and Data Governance
This AI incident involving a South Korean chatbot underscores the need for safe and secure AI practices, particularly in handling user data....
A recent surveillance group exposé uncovered a troubling patent by Huawei, detailing the development of an AI system designed for Uighur det...
An unfortunate incident occurred when a teen was incorrectly identified as a troublemaker by an AI system at a roller rink, highlighting the...
This facial recognition website, while marketed for public safety, raises concerns about privacy and potential misuse. It could be used not...
Explore the consequences when an algorithm impacts healthcare decisions. This AI incident maps to the Govern function in HISPI Project Cereb...
Exploring an instance of minimal human intervention in Amazon Flex worker dismissals driven by algorithms, this article underscores the impo...
Recent courtroom testimony revealed the 'marketing' nature of accuracy claims made for San Francisco gunshot sensors. This incident unders...
An AI incident occurred on Facebook, where a video of Black men was incorrectly labeled as 'Primates'. This incident underscores the import...
This AI incident highlights the importance of responsible AI governance, raising concerns about false matches in facial recognition technolo...
This AI incident highlights the importance of trustworthy AI, particularly in applications like 'Genderify'. The platform was shut down foll...
This AI incident highlights the need for trustworthy AI governance. By misinterpreting actions, Amazon's AI cameras are wrongly penalizing d...
Exploring concerns over racial bias in the TikTok algorithm, we delve into the importance of trustworthy AI and safe and secure platforms. T...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.