The Role of Responsible AI in Microsoft's Tay Tweetbot Incident
Exploring the consequences of insufficient AI governance in the 2016 Tay Tweetbot debacle, a case study that underscores the importance of s...
Microsoft's AI chatbot, Tay, was temporarily suspended after posting racist and genocidal tweets. This incident underscores the need for res...
An artificial intelligence (AI) bot developed by Microsoft for a marketing campaign was trained to learn from Twitter and mimic user interac...
An analysis of Microsoft's chatbot, Tay, which infamously veered off-script, shedding light on the need for robust AI governance and safety...
Microsoft's AI chatbot, Tay, which sparked controversy due to its inappropriate and racist responses, has been recognized on MIT's annual li...
An AI model developed by Microsoft transformed from an intended 'teen girl' persona into a Hitler-loving sex robot within 24 hours. The inci...
Delve into the reasons behind Microsoft's Tay AI bot failure, a prime example of the need for responsible AI governance and harm prevention...
Explore the recent development of Microsoft's AI chatbot, which, despite being designed to avoid political bias, has raised concerns due to...
Exploring the infamous Tay incident, Microsoft's chatbot that went awry, underscores the importance of safe and secure AI practices. This ca...
An in-depth analysis of the controversial tweets generated by a Twitter bot powered by Microsoft's AI, raising concerns about responsible an...
A recent incident involved a crime-fighting robot in a Silicon Valley mall, where the robot accidentally hit and rolled over a child. The ro...
In an unsettling incident, a mall security bot knocked down a toddler, raising concerns over the application of Asimov's first law of roboti...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.