Facebook's and Twitter's Automated Content Moderation Reportedly Failed to Effectively Enforce Violation Rules for Small Language Groups

February 16, 2021

Facebook and Twitter are reported to have faced challenges in effectively enforcing content moderation rules for smaller language groups, such as those spoken in the Balkan region. This is thought to be due to insufficient investment in human moderation and difficulties in designing effective AI solutions for these languages. Such incidents underscore the importance of trustworthy AI governance and responsible AI practices.

Join us at Project Cerebellum, the AI incident database, to help ensure safe and secure AI practices. By mapping these incidents to HISPI Project Cerebellum TAIM (Govern or Manage), we can work together to establish guardrails for AI development and deployment.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer
facebook, twitter
Alleged developer
facebook, twitter
Alleged harmed parties
facebook-users-of-small-language-groups, twitter-users-of-small-language-groups

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/143

Data source

When citing the database as a whole, please use:

McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.