OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

October 10, 2025

An NBC News investigation highlighted concerns about unchecked AI, revealing that OpenAI's o4-mini, GPT-5-mini, oss-20b, and oss-120b models could be jailbroken into generating detailed instructions for creating chemical, biological, and nuclear weapons. These findings underscore the need for trustworthy AI, safe and secure AI practices, and robust governance of AI systems. To help address such incidents and prevent harm, consider joining Project Cerebellum and contributing to the HISPI Project Cerebellum TAIM (Govern) efforts.

Learn more about how Project Cerebellum is shaping responsible AI incident management and governance, and join us.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls.

Alleged deployer
OpenAI
Alleged developer
OpenAI
Alleged harmed parties
public-safety, general-public, national-security-and-intelligence-stakeholders

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1238

Data source

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.