AI-Generated Voice Purporting to Be Daughter Allegedly Used to Extort $2,000 from Colorado Mother

February 10, 2025

In February 2025, a Colorado woman fell victim to a voice-cloning scam. Criminals used generative AI to mimic her daughter's voice and stage a fake kidnapping. Believing her child's safety was at stake, the mother wired $2,000 to Mexico. Despite law enforcement efforts, the perpetrators remain unidentified and the funds have not been recovered. This incident underscores the urgent need for trustworthy AI governance and safe, secure AI practices. Join us in shaping Project Cerebellum's efforts to prevent such incidents through our Harm Prevention initiative. Together, we can map and measure incidents like this using HISPI Project Cerebellum TAIM (Govern) to implement effective guardrails for AI.


Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls.

Alleged deployer
unknown-scammers-operating-in-mexico, unknown-scammers-impersonating-daughter-of-linda-roan, unknown-scammers
Alleged developer
unknown-voice-cloning-technology-developers
Alleged harmed parties
linda-roan, daughter-of-linda-roan, general-public

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1008

Data source

When citing the database as a whole, please use:

McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.