Whisper Speech-to-Text AI Reportedly Found to Create Violent Hallucinations
February 12, 2024
Researchers at Cornell University reportedly found that OpenAI's Whisper speech-to-text system can hallucinate violent language and fabricated details, particularly during long pauses in speech, such as those produced by speakers with speech impairments. Analyzing 13,000 audio clips, the researchers determined that about 1% contained harmful hallucinations. Such errors pose risks in hiring, legal proceedings, and medical documentation. The study suggests improving model training to reduce hallucinations across diverse speaking patterns.
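To illustrate the failure mode described above, here is a minimal sketch (not from the study) using the open-source `openai-whisper` Python package. It transcribes a clip and flags segments where the model's own no-speech probability is high yet text was still produced, a simple heuristic for words hallucinated over silence or long pauses. The file path and the 0.6 threshold are illustrative assumptions.

```python
# Minimal sketch (not from the study): flag Whisper segments that may be
# hallucinated over silence or long pauses. The audio path and the 0.6
# threshold are illustrative assumptions, not values from the research.
import whisper

model = whisper.load_model("base")  # small model, sufficient for demonstration

# transcribe() returns per-segment metadata, including `no_speech_prob`:
# the model's own estimate that a segment contains no speech at all.
result = model.transcribe("clip_with_long_pauses.wav")

for seg in result["segments"]:
    text = seg["text"].strip()
    # A segment the model judges to be probably silent, yet still transcribed,
    # is a candidate hallucination worth flagging for manual review.
    if text and seg["no_speech_prob"] > 0.6:
        print(f"[{seg['start']:.1f}s-{seg['end']:.1f}s] "
              f"possible hallucination (no_speech_prob={seg['no_speech_prob']:.2f}): {text}")
```

This is only a screening heuristic; the underlying issue the study points to is a training-level one, so flagged segments would still need human review rather than automatic deletion.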
- Alleged deployer: OpenAI, Whisper, companies using Whisper, organizations integrating Whisper into customer service systems
- Alleged developer: OpenAI
- Alleged harmed parties: individuals with speech impairments, users whose speech is misinterpreted by Whisper, professionals relying on accurate transcriptions, general public
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/732
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.