AI 'Nudify' Apps Used as Tools for Blackmail and Extortion

September 9, 2024

Deepfake "nudify" apps built on generative AI are being misused to create hyperrealistic nude images without consent. Victims are then blackmailed or harassed with these convincing fakes, which are often shared on Telegram. The incident underscores the need for safe and secure AI practices and governance. Those interested in shaping harm-prevention strategies and contributing to the HISPI Project Cerebellum TAIM (Govern) can explore this incident's mapping, measurement, and management via JOIN US.

Incidents like this highlight the importance of trustworthy AI, AI governance, and guardrails for safe and secure AI practices. Join us in preventing such misuse and promoting responsible AI.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer: unknown-deepfake-creators, extortionists
Alleged developer: unknown-deepfake-creators
Alleged harmed parties: women-in-india, women, general-public

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/782

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.