Lekki Massacre Incident: Examining Facebook's Labeling of Content as 'False' - A Case for Safe and Secure AI

This incident serves as a case study within the Govern function of Project Cerebellum's Trusted AI Model (TAIM), illustrating the role of AI governance in content moderation. Facebook's decision to label content from the Lekki Massacre of October 20, 2020 as 'false' raises important questions about trustworthy AI and its impact on harm prevention.

Source

Incident data is from the AI Incident Database (AIID). To cite this incident: https://incidentdatabase.ai/cite/82


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
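The sketch below illustrates how a per-incident citation URL can be assembled programmatically. It is a minimal example only: the URL pattern is inferred from this incident's "Cite this incident" link (https://incidentdatabase.ai/cite/82), and the helper name cite_url is hypothetical rather than part of any official AIID tooling.

```python
# Minimal sketch: build an AIID "Cite this incident" URL from an incident ID.
# Assumption: the pattern https://incidentdatabase.ai/cite/<id> generalizes,
# based on the link shown for incident 82 above.

AIID_CITE_BASE = "https://incidentdatabase.ai/cite"

def cite_url(incident_id: int) -> str:
    """Return the citation URL for a given AIID incident ID."""
    return f"{AIID_CITE_BASE}/{incident_id}"

if __name__ == "__main__":
    # Incident 82: Facebook's labeling of Lekki Massacre content as 'false'.
    print(cite_url(82))  # -> https://incidentdatabase.ai/cite/82
```

For the authoritative citation text of any incident, defer to the "Cite this incident" link on the incident page itself, as noted above.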