Mother in Arizona Received Fake Ransom Call Featuring AI Voice of Her Daughter

January 20, 2023

An Arizona mother received a ransom call in which AI voice synthesis was used to impersonate her daughter in distress. Only after she reached her daughter and confirmed she was safe was the voice determined to be fake. The incident underscores the importance of trustworthy and safe AI practices.

Join us at HISPI Project Cerebellum to help establish guardrails against such incidents and promote harm prevention in AI governance. Learn more about our efforts to measure AI safety and manage AI incidents with the HISPI Project Cerebellum TAIM.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
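A mapping like this is typically produced by embedding the incident text and each control description, then ranking controls by cosine similarity. The sketch below illustrates that idea only; the vectors, control names, and function names are placeholders, not the actual TAIM pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def suggest_controls(incident_vec, control_vecs, top_k=3):
    """Rank candidate controls by similarity to the incident embedding.

    control_vecs maps a control name to its (pre-computed) embedding.
    Returns the top_k (name, score) pairs, highest similarity first.
    """
    scored = [(name, cosine_similarity(incident_vec, vec))
              for name, vec in control_vecs.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Toy example with 2-dimensional "embeddings" (illustrative only):
incident = [1.0, 0.0]
controls = {"voice-authentication": [0.9, 0.1], "data-retention": [0.0, 1.0]}
print(suggest_controls(incident, controls, top_k=1))
```

Because the ranking is purely a similarity heuristic over text embeddings, the results are suggestions to review, not a formal control assessment.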

Alleged deployer: unknown-scammers
Alleged developer: unknown
Alleged harmed parties: jennifer-destefano, destefanos-family

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/537

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.