Permanent Removal of Social Media Content via Automated Tools Allegedly Prevented Investigative Efforts

March 16, 2020

The automated, permanent removal of social media content that violates platform policies against terrorism, violent extremism, and hate speech may have inadvertently hampered investigations by preventing access to potential evidence. This incident raises concerns about trustworthy AI governance and the importance of guardrails for AI systems. For those interested in shaping safe and secure AI practices, learn more about how this incident maps to HISPI Project Cerebellum TAIM (Govern).

Matched TAIM controls

Suggested mapping derived from embedding similarity (not a formal assessment); a sketch of the idea follows below. Browse all TAIM controls
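
The similarity-based mapping can be illustrated with a small script. The sketch below is a hypothetical example, not the actual TAIM mapping pipeline: the control identifiers and descriptions are placeholders, and the choice of a sentence-transformers model is an assumption. It embeds the incident description and each control description, then ranks controls by cosine similarity.

```python
# Hypothetical sketch of embedding-similarity mapping between an incident
# description and candidate control descriptions. Control IDs/texts and the
# embedding model are illustrative assumptions, not the real TAIM catalog.
import numpy as np
from sentence_transformers import SentenceTransformer

incident_text = (
    "Automated, permanent removal of social media content violating policies "
    "against terrorism, violent extremism, and hate speech may have hampered "
    "investigations by preventing access to potential evidence."
)

# Placeholder control descriptions (real TAIM control text differs).
controls = {
    "GOV-1": "Establish governance and oversight for automated content decisions.",
    "GOV-2": "Retain records and audit trails for automated removal actions.",
    "GOV-3": "Provide human review and appeal paths for automated enforcement.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
incident_vec = model.encode([incident_text])[0]
control_vecs = model.encode(list(controls.values()))

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {cid: cosine(incident_vec, vec) for cid, vec in zip(controls, control_vecs)}
for cid, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cid}: {score:.3f}")
```

Ranking by raw cosine similarity is why such a mapping is only a suggestion: high lexical or semantic overlap does not establish that a control formally applies to the incident.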

Alleged deployer
YouTube, Twitter, Facebook
Alleged developer
YouTube, Twitter, Facebook
Alleged harmed parties
Victims of crimes documented on social media, investigative journalists, International Criminal Court investigators, International Court of Justice investigators, criminal investigators

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/268

Data source

Incident data is from the AI Incident Database (AIID).

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.