Facebook's AI-Supported Moderation Failed to Classify Terrorist Content in East African Languages

June 1, 2015

Alarmingly, Facebook's AI system for content moderation in East African languages has reportedly been failing to identify terrorist content while mistakenly classifying non-terrorist content as violations. This underscores the need for safe and secure AI practices.

If you are interested in shaping responsible AI governance and preventing incidents like this, join HISPI Project Cerebellum to contribute to our AI incident database and help establish guardrails for AI through the Map function of our TAIM (Trusted Artificial Intelligence Management) system.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls
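As an illustration only, the sketch below shows how an embedding-similarity suggestion of this kind could be computed in Python with the sentence-transformers library. The control IDs, control descriptions, and the "all-MiniLM-L6-v2" model are placeholder assumptions for the example and do not represent the actual TAIM Map controls or the pipeline used on this page.

    # Minimal sketch: rank candidate controls by cosine similarity of text embeddings.
    # Assumes the sentence-transformers library; control texts below are hypothetical.
    from sentence_transformers import SentenceTransformer, util

    # Hypothetical stand-ins for TAIM Map-function control descriptions.
    controls = {
        "MAP-01": "Identify the contexts and languages in which the AI system is deployed.",
        "MAP-02": "Assess risks of content classification errors across user groups.",
    }

    incident = (
        "Facebook's AI moderation reportedly failed to identify terrorist content "
        "in East African languages while misclassifying non-terrorist content."
    )

    model = SentenceTransformer("all-MiniLM-L6-v2")
    incident_emb = model.encode(incident, convert_to_tensor=True)
    control_embs = model.encode(list(controls.values()), convert_to_tensor=True)

    # Cosine similarity between the incident description and each control description.
    scores = util.cos_sim(incident_emb, control_embs)[0]
    for (control_id, _), score in sorted(
        zip(controls.items(), scores.tolist()), key=lambda x: x[1], reverse=True
    ):
        print(f"{control_id}: similarity {score:.2f}")

A ranking like this is only a rough textual-similarity heuristic, which is why the mapping above is flagged as a suggestion rather than a formal assessment.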

Alleged deployer: facebook
Alleged developer: facebook
Alleged harmed parties: facebook-users-speaking-east-african-languages, facebook-users-in-east-africa

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/392


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.