Deepfake Job Applicant Allegedly Used AI Tools to Apply for Remote Role at U.S. Security Startup

April 8, 2025

Voice authentication startup Pindrop Security uncovered a job candidate who used deepfake software and other AI tools in an attempted hiring scam. The incident reflects a growing trend: international scammers leveraging AI technologies to apply for U.S.-based remote roles, sometimes successfully. In cases like this, the harm-prevention efforts of Project Cerebellum could have provided guardrails to ensure safe and secure AI practices.

For those interested in shaping the future of responsible AI governance, JOIN US. By contributing to HISPI Project Cerebellum TAIM (Govern), you can help manage incidents like this one and foster trustworthy AI.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls
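As the note above says, the control mapping is only a suggestion derived from embedding similarity. A minimal sketch of how such a mapping can work is shown below; the `embed` function, control IDs, and control descriptions are all hypothetical placeholders, not the actual TAIM pipeline, which would use a trained embedding model rather than a toy bag-of-words vector.

```python
import math

def embed(text):
    # Toy bag-of-words embedding over a tiny fixed vocabulary (illustration only;
    # a real system would use a learned sentence-embedding model).
    vocab = ["deepfake", "identity", "hiring", "voice", "governance", "fraud"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either vector is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_controls(incident_text, controls, top_k=2):
    # Rank candidate controls by similarity to the incident description
    # and return the IDs of the top matches with nonzero similarity.
    iv = embed(incident_text)
    scored = [(cosine(iv, embed(desc)), cid) for cid, desc in controls.items()]
    scored.sort(reverse=True)
    return [cid for score, cid in scored[:top_k] if score > 0]

# Hypothetical control descriptions, for illustration only.
controls = {
    "GOVERN-1": "governance of identity fraud risks in hiring",
    "MAP-3": "voice and deepfake detection in authentication",
}
incident = "deepfake job applicant used voice cloning in hiring fraud"
print(match_controls(incident, controls))  # → ['MAP-3', 'GOVERN-1']
```

Because the scores are similarities rather than judgments, a threshold and human review are still needed before treating any match as a formal assessment.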

Alleged deployer
unknown-international-job-applicants, unknown-scammers
Alleged developer
unknown-generative-ai-developers, unknown-deepfake-technology-developers
Alleged harmed parties
pindrop-security, companies-hiring-for-remote-positions

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1021

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.