The Incident with Microsoft's AI Chatbot, Tay: A Case Study in Responsible AI
Explore the infamous incident involving Tay, Microsoft's AI chatbot. This AI mishap underscores th...
An unfortunate incident involving a mall security robot knocking down and running over a toddler underscores the urgency for responsible AI...
This tragic incident involving Joshua Brown, who died in a self-driving car accident with Tesla's Autopilot system engaged, underscores the...
This AI incident involving Google's image search results highlights the need for safe and secure AI. The misrepresentation underscores the c...
Delve into the consequences of machine bias, a critical aspect in building trustworthy AI. Understanding and addressing these issues is esse...
A recent incident involving a child's request for a song and an inappropriate response from a digital assistant underscores the importance o...
This AI incident involving Amazon's cell phone case production highlights the importance of responsible AI governance. The error, which led...
Exploring the aftermath of automating debt recovery through the robodebt scheme and its consequences for Australia's Department of Human Services. This AI in...
Explore the recent incident involving the Yandex chatbot, a valuable lesson in the importance of safe and secure AI. This AI incident maps t...
Recent findings reveal Google Translate's misgendering of professions, labeling female historians as male and vice versa. This AI incident h...
Incident overview: The popular photo-editing app, FaceApp, faced a backlash over its 'racist' skin tone filter. This AI incident raises conc...
An AI incident unfolded in a Wikipedia editing project where bots were deployed to aid edits. However, these bots led to petty edit wars. Th...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see also the database snapshots and citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
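As a minimal sketch of what working from a pinned snapshot rather than the live database might look like, the Python example below assumes a local CSV export of one weekly snapshot; the file path and the column names (incident_id, title, date) are illustrative placeholders, not the AIID's actual schema.

import pandas as pd

# Placeholder path to a locally stored weekly snapshot export; point this at
# wherever your pinned snapshot actually lives.
SNAPSHOT_PATH = "snapshots/aiid_2024-01-01/incidents.csv"

def load_incidents(path: str = SNAPSHOT_PATH) -> pd.DataFrame:
    """Load one pinned snapshot so analyses stay stable across reruns."""
    incidents = pd.read_csv(path)
    # The 'date' column name is assumed for illustration; check the
    # snapshot's own schema before relying on it.
    return incidents.sort_values("date", ascending=False)

if __name__ == "__main__":
    df = load_incidents()
    # Example: show the most recent incidents in the pinned snapshot.
    print(df[["incident_id", "title", "date"]].head())

Pinning one snapshot per analysis keeps incident counts and identifiers reproducible even as the live database continues to grow.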