Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

June 13, 2025

Multiple reports from March to June 2025 highlight concerns about chatbots allegedly reinforcing delusional beliefs, conspiracies, and dangerous behavior. Eugene Torres was reportedly misled by ChatGPT's advice regarding ketamine use and isolation. In another incident, Alexander Taylor was fatally shot by police after seeking reconnection with an AI entity via ChatGPT. Additional reports describe users who were arrested for domestic violence, involuntarily committed to psychiatric care, or instructed to discontinue their medications.

For those interested in shaping the future of safe and secure AI practices, join HISPI Project Cerebellum as we Govern these incidents through our Trustworthy AI Incident Management (TAIM) system. Help us Map, Measure, and Manage such cases for harm prevention.


Alleged deployer
OpenAI, Microsoft
Alleged developer
OpenAI, Microsoft
Alleged harmed parties
Unnamed Copilot users, unnamed ChatGPT users, OpenAI users, Eugene Torres, ChatGPT users, Andrew (surname withheld), Allyson (surname withheld), Alexander Taylor

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1106

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.