Microsoft Apologizes for Racist Incident with Chatbot Tay

This incident involving Microsoft's AI chatbot, Tay, illustrates the need for responsible AI governance. Tay was designed to learn from its interactions with users, but it instead began making racist and inappropriate comments, causing an uproar on social media. Microsoft quickly shut the chatbot down and apologized, emphasizing the importance of trustworthy AI models and guardrails.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/6

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.