GenNomis AI Database Reportedly Exposes Nearly 100,000 Deepfake and Nudify Images in Public Breach

March 31, 2025

In March 2025, cybersecurity researcher Jeremiah Fowler uncovered an unprotected database belonging to GenNomis by AI-NOMIS, a South Korean firm offering face-swapping and "nudify" AI services. The exposed 47.8 GB dataset contained nearly 100,000 files, primarily explicit deepfake images, some depicting minors or celebrities. Although no personally identifiable user data was found, the breach underscores the urgent need for responsible AI governance and data-security safeguards in AI image-generation platforms.

Join us at Project Cerebellum to help establish guardrails for AI and to contribute to our AI incident database, supporting harm-prevention efforts. To learn how this incident maps to the HISPI Project Cerebellum TAIM (Govern) function, visit JOIN US.

Matched TAIM controls

The mapping below is suggested by embedding similarity; it is not a formal assessment.

Alleged deployer
GenNomis by AI-NOMIS, GenNomis
Alleged developer
GenNomis by AI-NOMIS, AI-NOMIS
Alleged harmed parties
Individuals whose likenesses were used without consent; public figures and celebrities depicted in explicit AI images; minors; the general public

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1010


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.