Global Cybercrime Network Storm-2139 Allegedly Exploits AI to Generate Deepfake Content
December 19, 2024
A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and developed custom tools to bypass AI safety guardrails. The group reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities; its tooling is said to have disabled content moderation and hijacked legitimate AI service access, which the network then resold as illicit services. Microsoft disrupted the operation and filed a lawsuit in December 2024, later identifying key members of the network in February 2025.
- Alleged deployer
  - unidentified-storm-2139-actor-from-illinois, unidentified-storm-2139-actor-from-florida, storm-2139, ricky-yuen-(cg-dot), phat-phung-tan-(asakuri), arian-yadegarnia-(fiz), alan-krysiak-(drago)
- Alleged developer
  - unidentified-storm-2139-actor-from-illinois, unidentified-storm-2139-actor-from-florida, storm-2139, ricky-yuen-(cg-dot), phat-phung-tan-(asakuri), arian-yadegarnia-(fiz), alan-krysiak-(drago)
- Alleged harmed parties
  - victims-of-deepfake-abuse, openai, microsoft, celebrities, azure-openai-customers, ai-service-providers
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/955
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
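For convenience, the whole-database reference above can be expressed as a BibTeX entry roughly as follows; the entry key and field layout are illustrative, not an official AIID-supplied entry.

```bibtex
@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}
```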