Deepfake Harassment: Noelle Martin's Battle Against AI-Generated Pornography - A Case for Responsible AI Governance

February 6, 2020

In 2017, Noelle Martin discovered explicit deepfake videos online in which her face had been superimposed onto pornographic scenes using AI technology. This incident, part of a pattern of abuse dating back to at least 2012, underscores the need for trustworthy and safe AI. Her advocacy led to image-based abuse becoming a criminal offense in Australia. This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).
Alleged deployer: unknown-deepfake-creators
Alleged developer: stanford-university, max-planck-institute, university-of-erlangen-nuremberg, face2face, faceapp, zao
Alleged harmed parties: noelle-martin

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/771

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.