Google Reports Alleged Gemini-Generated Terrorism and Child Exploitation to Australian eSafety Commission

March 5, 2025

Google reported 258 global complaints about AI-generated deepfake terrorism content and 86 complaints about child abuse material produced with its Gemini AI to the Australian eSafety Commission. The disclosure offers rare insight into how generative AI is being misused. Notably, while Google uses a hash-matching system to detect known child abuse content, it currently lacks a comparable system for extremist material. For those interested in shaping trustworthy and safe AI practices, join us.
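The hash-matching approach mentioned above can be illustrated with a minimal sketch: compute a digest of each file and check it against a set of digests of known harmful material. (Production systems typically use perceptual hashes that survive re-encoding; the SHA-256 digests, sample values, and function names below are simplified illustrations, not Google's implementation.)

```python
import hashlib

# Hypothetical database of digests of known harmful files (illustrative values only).
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-sample-1").hexdigest(),
    hashlib.sha256(b"known-bad-sample-2").hexdigest(),
}

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes) -> bool:
    """True if the file's digest appears in the known-hash set."""
    return sha256_digest(data) in KNOWN_HASHES

print(is_known_match(b"known-bad-sample-1"))  # exact copy of a known file -> True
print(is_known_match(b"a brand-new file"))    # unseen content -> False
```

A limitation this sketch makes visible: cryptographic hashing only catches exact copies of previously catalogued material, which is one reason detecting freshly generated extremist content is a harder problem than matching known child abuse imagery.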

This incident underscores the need for effective governance and guardrails to prevent AI-related harm. As part of HISPI Project Cerebellum TAIM (Track, Analyze, Improve, Measure), we map such incidents and measure their impact to support better understanding and management of AI incidents.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls.
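An embedding-similarity mapping of this kind generally works by embedding the incident text and each control description as vectors, then ranking controls by cosine similarity. A minimal sketch using toy bag-of-words vectors follows; the control names, descriptions, and incident text are hypothetical, and a real pipeline would use a learned embedding model rather than word counts.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding': a sparse word-count vector
    (a stand-in for a learned embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical control descriptions, not actual TAIM control text.
controls = {
    "content-detection": "detect harmful generated content with hash matching",
    "incident-reporting": "report ai incidents to regulators and track complaints",
}

incident = "google reported complaints about harmful ai generated content to regulator"
incident_vec = embed(incident)

# Rank controls by similarity to the incident text, best match first.
ranked = sorted(controls, key=lambda c: cosine(incident_vec, embed(controls[c])), reverse=True)
print(ranked)
```

The "not a formal assessment" caveat follows directly from this design: similarity scores surface plausible candidates but carry no judgment about whether a control actually applies.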

Alleged deployer
google
Alleged developer
google
Alleged harmed parties
general-public, general-public-of-australia, google-gemini-users, victims-of-deepfake-terrorism-content, victims-of-deepfake-child-abuse, victims-of-online-radicalization

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/963


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.