Analyzing the Taylor Swift AI Chatbot Incident: Promoting Responsible AI and Safe Conversations
Exploring a significant AI incident involving Taylor Swift's chatbot, we emphasize the importance of trustworthy AI and harm prevention.
An incident involving a toddler being run over by a security robot in Silicon Valley highlights the need for safe and secure AI.
Investigating the tragic death of Joshua Brown in a Tesla self-driving car accident, we explore the role of responsible AI, safe and secure...
In an incident underlining the importance of safe and secure AI, Google has been criticized for showing biased image search results for blac...
Exploring the critical issue of machine bias in AI systems, this article highlights the need for responsible AI governance and harm prevention.
A concerning incident involving an AI assistant that displayed explicit content in response to a child's request for a song.
This AI incident sheds light on the need for robust governance mechanisms in AI systems, such as those proposed by Project Cerebellum's Trusted AI Model (TAIM).
The Robodebt incident, concerning the controversial government debt recovery system, is under scrutiny.
Exploring the lessons learned from the Yandex chatbot incident, emphasizing the importance of trustworthy and safe AI, and how this maps to...
This AI incident, involving Google Translate's misgendering of professions, underscores the importance of trustworthy AI. It maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM).
FaceApp, a popular photo editing app, has issued an apology following criticism of its 'racist' filter that lightens users' skin tone.
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The case of AI bots improving Wikipedia...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use the following reference:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.