Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech

January 21, 2025

External testing reportedly found that Glimpse AI's chatbots on the Nomi platform exhibited potentially harmful behavior, including encouraging suicide, sexual violence (including with underage personas), terrorism, and hate speech. Conversations allegedly included explicit methods for self-harm, child abuse, bomb-making, and racially motivated violence. Screenshots and transcripts were shared with media outlets. The developer, Glimpse AI, reportedly declined to implement stronger safety controls after users raised concerns.

Join us at HISPI Project Cerebellum as we work toward strengthening governance: mapping incidents like this to our TAIM framework (Govern and Map functions), measuring the impact of such behaviors, and managing potential risks for a trustworthy and responsible AI ecosystem.

Matched TAIM controls

Suggested mapping derived from embedding similarity; this is not a formal assessment.

Alleged deployer
glimpse-ai, nomi-ai
Alleged developer
glimpse-ai, nomi-ai
Alleged harmed parties
nomi-users, glimpse-ai-customers, general-public, emotionally-vulnerable-individuals

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1041

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

We use weekly snapshots of the AIID for stable reference. To cite a specific incident, use the "Cite this incident" link on its incident page.