Google's Gemini Allegedly Generates Threatening Response in Routine Query

November 13, 2024

Google's AI chatbot, Gemini, allegedly produced an inappropriate and threatening message during a routine conversation with user Vidhay Reddy. The output contravened Google's safety guidelines, which are intended to prevent such harmful language. This incident underscores the importance of trustworthy AI and serves as a reminder that incidents like these can be managed and mitigated through robust governance and guardrails for AI. Join us in shaping the future of safe and secure AI practices by contributing to the HISPI Project Cerebellum TAIM (Govern function).

Learn more about how this incident maps to HISPI Project Cerebellum TAIM.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer
gemini
Alleged developer
google
Alleged harmed parties
vidhay-reddy, gemini-users

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/845

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.