Unraveling Electric Elves: An Examination of a Responsible AI Incident and Its Lessons
Explore the intricacies of an incident involving 'Electric Elves', delving into the aspects of trustworthy AI, safe and secure AI practices,...
This AI incident underscores the importance of safe and secure AI in healthcare, particularly in robotic surgery. According to a study, ther...
This incident marks Google's response to a court order in Japan, emphasizing the need for responsible autocomplete function design. It maps...
Google's Nest has temporarily halted sales of its smart smoke alarm due to a faulty feature, highlighting the critical role of trustworthy A...
This incident raises questions about AI fairness and bias within LinkedIn's systems. It maps to the Govern function in HISPI Project Cerebel...
An incident was reported involving a racially biased AI in the New Zealand passport application system, which asked an applicant of Asian descent...
Explore real-world examples of AI incidents, their impacts, and lessons learned. This article maps to the 'Govern' function in HISPI Project...
Dive into the aftermath of The DAO hack, its impact on blockchain governance, and the subsequent soft and hard forks. This AI incident maps...
Exploring the Taylor Swift-inspired Microsoft chatbot failure and its implications for trustworthy AI, this incident maps to the Govern func...
A toddler was knocked down and run over by a mall security robot in Silicon Valley, highlighting the need for safe and secure AI. This AI in...
Explore the tragic incident involving Joshua Brown, a self-driving car advocate who lost his life in an accident while testing a Tesla. This...
A recent incident involving racial bias in image search results for black teenagers on Google raises concerns about responsible AI governanc...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
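To illustrate the snapshot-pinning workflow described above, here is a minimal Python sketch of looking up a single incident in a pinned weekly snapshot. The file name, CSV export format, and column names (incident_id, title, date) are assumptions made for this example, not the official AIID schema.

# Minimal sketch of working from a pinned weekly AIID snapshot.
# Assumptions: the snapshot has already been downloaded and exported to CSV,
# and columns "incident_id", "title", and "date" exist; these names are
# illustrative only, not the official AIID schema.
from pathlib import Path

import pandas as pd

SNAPSHOT = Path("aiid_snapshot_2024-01-01.csv")  # hypothetical pinned snapshot file


def load_incident(snapshot_path: Path, incident_id: int) -> pd.DataFrame:
    """Return the row(s) describing a single incident in the pinned snapshot."""
    df = pd.read_csv(snapshot_path)
    return df[df["incident_id"] == incident_id]


if __name__ == "__main__":
    # Example: look up one incident by its AIID number in the pinned snapshot.
    print(load_incident(SNAPSHOT, incident_id=6)[["incident_id", "title", "date"]])

Pinning a single dated file this way keeps references stable even as the live database continues to add and revise incidents.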