Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar

August 15, 2018

Facebook is under scrutiny for its alleged failure to remove violent and dehumanizing anti-Rohingya hate speech from its platform, raising concerns about the adequacy of AI-assisted content moderation in preventing real-world harm. The incident underscores the need for robust governance mechanisms to ensure AI systems are deployed safely and responsibly.

For those interested in shaping trustworthy AI policy, join us at HISPI Project Cerebellum TAIM (Govern function), where we map, measure, and manage AI risks and work together toward responsible AI governance.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
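As a rough illustration of what a mapping "from embedding similarity" can look like, the sketch below embeds an incident description and a few control descriptions and ranks the controls by cosine similarity. This is not the TAIM pipeline: the embed() function is a toy stand-in for a real sentence-embedding model, and the control IDs and texts are hypothetical placeholders.

```python
# Minimal sketch, assuming a real system would swap embed() for a proper
# sentence-embedding model and use the actual TAIM control catalog.
import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashed bag-of-words embedding (placeholder, not a real model)."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


# Hypothetical control descriptions for illustration only.
controls = {
    "GOVERN-1": "Policies for content moderation and harm prevention are defined.",
    "MAP-2": "Context-specific risks to affected communities are identified.",
    "MEASURE-3": "Effectiveness of automated hate-speech detection is evaluated.",
}

incident = ("Platform allegedly failed to remove dehumanizing hate speech, "
            "contributing to real-world violence against a minority group.")

# Vectors are unit-normalized, so the dot product equals cosine similarity.
incident_vec = embed(incident)
scores = {name: float(incident_vec @ embed(desc)) for name, desc in controls.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

The top-scoring controls would then be surfaced as suggested matches, which is why the mapping is a starting point for review rather than a formal assessment.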

Alleged deployer
Facebook, Meta
Alleged developer
Facebook, Meta
Alleged harmed parties
Rohingya people, Rohingya Facebook users, the Myanmar public, Facebook users in Myanmar, Burmese-speaking Facebook users

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/169

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.