Facebook Content Moderators Demand Better Working Conditions Due to Allegedly Inadequate AI Content Moderation

April 1, 2020

Content moderators at Facebook are calling for better working conditions, citing the alleged inadequacy of the company's automated content moderation system. The system reportedly failed to filter enough harmful material on its own, leaving human reviewers exposed to distressing content such as graphic violence and child abuse. The incident underscores the need for trustworthy AI governance and guardrails for safe and secure AI practices.

Join us at Project Cerebellum and help shape the future of AI incident management through our HISPI TAIM functions: Govern, Map, Measure, or Manage. Your contribution can help prevent harm and ensure a safer digital environment.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment); a rough sketch of how such a mapping can be computed is shown below.
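As an illustration only, the following sketch shows how an incident description could be ranked against a set of control descriptions using a sentence-embedding model and cosine similarity. The control identifiers, control texts, and model name are placeholders chosen for the example, not the actual TAIM catalog or the pipeline used on this page.

```python
# Minimal sketch of an embedding-similarity mapping between an incident
# description and candidate controls. All control IDs and texts below are
# illustrative placeholders, not real TAIM controls.
from sentence_transformers import SentenceTransformer  # assumed available
import numpy as np

incident_text = (
    "Automated content moderation reportedly failed to filter graphic "
    "violence and child abuse, exposing human reviewers to harmful material."
)

# Hypothetical control descriptions keyed by an illustrative identifier.
controls = {
    "GOVERN-1": "Assign accountability for AI risk to defined roles.",
    "MAP-2": "Document contexts where the AI system may cause harm.",
    "MEASURE-3": "Track system performance against safety thresholds.",
    "MANAGE-4": "Plan responses to identified AI incidents and harms.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
incident_vec = model.encode([incident_text])          # shape (1, d)
control_vecs = model.encode(list(controls.values()))  # shape (n, d)

# Cosine similarity between the incident and each control description.
sims = (incident_vec @ control_vecs.T) / (
    np.linalg.norm(incident_vec, axis=1, keepdims=True)
    * np.linalg.norm(control_vecs, axis=1)
)

# Print controls ranked by similarity, highest first.
for control_id, score in sorted(
    zip(controls, sims[0]), key=lambda pair: pair[1], reverse=True
):
    print(f"{control_id}: {score:.3f}")
```

In practice, such similarity scores would only serve as a starting point; any surfaced mapping should be reviewed against the full control catalog before being treated as an assessment.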

Alleged deployer: Facebook
Alleged developer: Facebook
Alleged harmed parties: Facebook content moderators

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/215

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

A pre-print of this paper is available on arXiv; see the database snapshots and citation guide for further details.

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.