AI Misstep: YouTube's Hate Speech Detector Mistakes Chess Content for Harmful Language

June 28, 2020

YouTube's AI-driven hate speech detection made a mistake, flagging chess content as harmful because it misinterpreted strategy terms such as 'black,' 'white,' and 'attack.' This incident underscores the importance of responsible AI governance and highlights the need for trustworthy AI guardrails. Join us in helping shape a safe and secure future for AI with Project Cerebellum, where this AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).
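
To make the failure mode concrete, the sketch below is a purely hypothetical illustration (not YouTube's actual system): a context-insensitive, keyword-weighted scorer of the kind that could misflag ordinary chess commentary. The term list, weights, and threshold are invented for this example.

# Hypothetical illustration only: a naive keyword-based "toxicity" scorer
# that ignores domain context. Weights and threshold are invented.
SUSPECT_TERMS = {"black": 0.4, "white": 0.4, "attack": 0.3, "threat": 0.3, "kill": 0.5}
THRESHOLD = 0.8

def naive_toxicity_score(text: str) -> float:
    """Sum the weights of suspect keywords, with no awareness of context."""
    words = text.lower().split()
    return sum(SUSPECT_TERMS.get(w.strip(".,!?"), 0.0) for w in words)

comment = "White attacks the black king, and black has no defense against the threat."
score = naive_toxicity_score(comment)
print(f"score={score:.2f} flagged={score >= THRESHOLD}")
# Flags an innocuous chess comment: a context-aware model would treat 'black',
# 'white', and 'threat' as game terms rather than hate speech.
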
Alleged deployer: youtube
Alleged developer: youtube
Alleged harmed parties: antonio-radic, youtube-chess-content-creators, youtube-users

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/144

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.