Alleged Intentional Pushing of Dangerous Content by TikTok's Recommendation Algorithm: A Case Study on the Blackout Challenge
February 26, 2021
This AI incident highlights potential risks associated with AI governance in social media platforms such as TikTok. The lawsuit claims TikTok's recommendation algorithm pushed videos of the 'blackout challenge' to children's feeds, encouraging participation that proved fatal for two young girls. Preventing harm from AI incidents is crucial, and you can help shape responsible AI by joining us. This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).
- Alleged deployer
- TikTok
- Alleged developer
- TikTok
- Alleged harmed parties
- Lalani Erika Renee Walton, Arriani Jaileen Arroyo, Lalani Erika Renee Walton's family, Arriani Jaileen Arroyo's family, TikTok young users, TikTok users
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/286
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.