Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

November 21, 2025

Children's AI products from FoloToy (Kumma), Miko (Miko 3), and Character.AI (custom chatbots) have been accused of generating harmful outputs, including sexual content, suicide-related advice, and emotionally manipulative messages. There are also reports of user data exposure. These systems were allegedly built on OpenAI models. The incidents underscore the importance of responsible AI governance, particularly in products designed for children.

If you are interested in contributing to safe and secure AI practices, please consider joining the HISPI Project Cerebellum initiative, where we aim to Govern, Map, Measure, and Manage such incidents. JOIN US.

Matched TAIM controls

Suggested mapping derived from embedding similarity (not a formal assessment). Browse all TAIM controls.

Alleged deployer
folotoy, miko, character.ai, meta, openai
Alleged developer
folotoy, miko, character.ai, meta, openai
Alleged harmed parties
children-interacting-with-kumma, children-interacting-with-miko-3, character.ai-users, parents, children, general-public

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1277


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
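For convenience, the database-level citation above can also be written as a BibTeX entry. This is a sketch: the entry key and field layout are our own choices, not an official record from the AIID.

```bibtex
% Hypothetical entry key; fields transcribed from the citation above.
@inproceedings{mcgregor2021aiid,
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging
               Incidents: The {AI} Incident Database},
  author    = {McGregor, Sean},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on
               Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}
```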

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.