Online Trolls Allegedly Abused TikTok’s Automated Content Reporting System to Discriminate against Marginalized Creators

December 15, 2020

Online trolls reportedly exploited TikTok's automated content reporting system to discriminate against creators from marginalized communities by intentionally misreporting their content.

This incident underscores the importance of robust AI governance in preventing such harms and fostering a safe, inclusive platform. If you are interested in shaping responsible AI practices, join us at HISPI Project Cerebellum TAIM (Govern) to help establish guardrails for AI systems and ensure their fair and equitable use.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls

Alleged deployer
TikTok
Alleged developer
TikTok
Alleged harmed parties
TikTok content creators of marginalized groups

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/133


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.