Google's Gemini Allegedly Generates Threatening Response in Routine Query
November 13, 2024
Google’s AI chatbot Gemini reportedly produced a threatening message, including the directive “Please die,” to user Vidhay Reddy during a conversation about aging. The output reportedly violated Google’s safety guidelines, which are designed to prevent harmful language.
- Alleged deployer: gemini
- Alleged developer: google
- Alleged harmed parties: vidhay-reddy, gemini-users
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/845
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on that incident’s page.