AI-Enabled Fraud Schemes Reportedly Increasing Consumer Harm and Challenging Detection

June 22, 2024

Scammers are exploiting AI tools, such as language models, voice cloning, and synthetic identity generation, to orchestrate increasingly convincing fraud. These schemes result in substantial financial losses and identity theft. To combat this growing threat, banks are deploying AI-driven verification systems. However, experts warn that AI-enhanced fraud remains a significant problem because it is difficult to detect.

Learn more about the role of responsible AI governance and how Project Cerebellum is mapping such incidents within the HISPI Project Cerebellum TAIM (Govern) framework for harm prevention and safe AI.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls
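The mapping above is produced by comparing text embeddings of the incident description against embeddings of each TAIM control description and ranking controls by similarity. A minimal sketch of that idea, using toy bag-of-words vectors and cosine similarity in place of a learned embedding model (the control names and descriptions below are hypothetical, not the actual TAIM catalog):

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy "embedding": bag-of-words token counts.
    # A real pipeline would use a learned sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical control descriptions, for illustration only.
controls = {
    "GOV-1": "governance policy for fraud detection and identity verification",
    "SEC-2": "secure model deployment and access control",
}

incident = ("scammers use voice cloning and synthetic identities "
            "to defeat bank identity verification")
inc_vec = embed(incident)

# Rank controls by similarity to the incident text.
ranked = sorted(controls,
                key=lambda c: cosine(inc_vec, embed(controls[c])),
                reverse=True)
```

Because the matching is purely similarity-based, the ranked list is a suggestion for human review, not a formal control assessment.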

Alleged deployer
unknown-scammers
Alleged developer
openai, ai-tool-creators, unknown-deepfake-technology-developers, unknown-voice-cloning-technology-developers
Alleged harmed parties
bank-customers

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/735

Data source

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.