Scammers Allegedly Use AI-Generated Avatars to Impersonate Friends in Houston, Texas, and Solicit Money

September 23, 2024

Two Houston women were targeted in an alleged scam in which deepfake videos that appeared to depict trusted friends circulated over social media and messaging apps. The AI-generated avatars solicited account access codes and promoted fraudulent transactions, causing the victims to lose control of their accounts; well-meaning friends then sent money, believing the videos were genuine. The incident underscores the importance of trustworthy AI governance, especially around deepfakes and financial fraud. To help prevent similar incidents and foster safe, secure AI practices, consider joining Project Cerebellum by contributing to our AI incident database.

Learn more about how you can Govern, Map, Measure, or Manage AI incidents with HISPI Project Cerebellum TAIM by visiting the JOIN US page.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
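As a rough illustration of how such a suggested mapping might be produced, here is a minimal sketch that ranks controls by cosine similarity between text embeddings. The embedding model, control IDs, and control descriptions below are hypothetical placeholders, not HISPI's actual TAIM pipeline.

```python
# Illustrative sketch only: the model choice and the TAIM control texts
# below are assumptions, not the actual Project Cerebellum mapping pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Assumed general-purpose sentence-embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")

incident_text = (
    "Deepfake videos impersonating trusted friends solicited "
    "account access codes and money over messaging apps."
)

# Hypothetical control IDs and descriptions for illustration.
controls = {
    "TAIM-GOV-01": "Establish governance policies for synthetic media and identity verification.",
    "TAIM-MEAS-03": "Measure exposure to AI-enabled social-engineering and fraud risks.",
}

incident_vec = model.encode([incident_text])
control_vecs = model.encode(list(controls.values()))
scores = cosine_similarity(incident_vec, control_vecs)[0]

# Rank controls by similarity score: a suggested mapping, not a formal assessment.
for (control_id, _), score in sorted(zip(controls.items(), scores), key=lambda x: -x[1]):
    print(f"{control_id}: {score:.3f}")
```

Because the ranking comes purely from semantic similarity of descriptions, it surfaces plausible candidate controls but cannot substitute for a human review of whether each control actually applies.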

Alleged deployer: unknown-scammers
Alleged developer: unknown-deepfake-technology-developers, unknown-voice-cloning-technology-developers
Alleged harmed parties: stacey-svegliato, sara-sandlin, general-public, general-public-in-houston

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1065

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.