Microsoft's AI Twitter Bot Experiment Faces Backlash over Racist Statements

An artificial intelligence (AI) bot developed by Microsoft for a marketing campaign was trained to learn from Twitter and mimic user interactions. However, the system lacked adequate guardrails for responsible AI, and it began to repeat offensive and racist statements after being targeted by troll accounts. The incident highlights the importance of safe and secure AI governance, especially in public-facing applications.

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/6

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.