Reportedly Unsafe Deployment of Llama.cpp Reveals Interactive AI-Generated CSAM Roleplay Prompts

April 11, 2025

A study by UpGuard found that misconfigured llama.cpp servers publicly exposed user prompts, including hundreds of interactive roleplay scenarios. Some prompts depicted fictional sexual abuse of children aged 7–12, demonstrating how open-source LLMs can be misused to generate AI-enabled child sexual abuse material (CSAM). The incident underscores the need for trustworthy AI governance and safe deployment practices.
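The exposure pattern reported here can be illustrated with a minimal sketch. llama.cpp's HTTP server can report per-slot inference state, and public reports describe deployments where that state, including user prompts, was reachable without authentication. The function name `extract_prompts` and the exact payload shape below are our own assumptions for illustration, not a definitive description of the API:

```python
import json


def extract_prompts(slots_payload):
    """Collect non-empty 'prompt' strings from a slot-state payload.

    Assumes a list of JSON objects, one per inference slot, each of
    which may carry the text of the prompt being processed. This is
    an illustrative shape, not a guaranteed llama.cpp schema.
    """
    prompts = []
    for slot in slots_payload:
        prompt = slot.get("prompt")
        if isinstance(prompt, str) and prompt.strip():
            prompts.append(prompt)
    return prompts


# Hypothetical sample of what an exposed slot-state endpoint might return.
sample = json.loads("""
[
  {"id": 0, "state": 1, "prompt": "You are a roleplay assistant..."},
  {"id": 1, "state": 0, "prompt": ""}
]
""")

print(extract_prompts(sample))
```

The point of the sketch is that no exploit is required: if the server's status endpoints are bound to a public interface without access control, prompt data is readable by anyone who requests it.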

Join us in shaping harm prevention measures through HISPI Project Cerebellum TAIM. Govern, Map, Measure, or Manage open-source AI incidents to ensure a safer digital future.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer
users-of-llama.cpp-servers
Alleged developer
meta, users-of-llama.cpp-servers
Alleged harmed parties
general-public, users-of-llama.cpp-servers

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1020


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.