Deepfake Video Allegedly Used to Harass Washington State Patrol Trooper

December 30, 2025

A Washington State Patrol trooper has filed a lawsuit alleging that coworkers circulated an AI-generated deepfake video falsely depicting him in an intimate encounter with another trooper. The non-consensual video allegedly contributed to harassment and discrimination based on his sexual orientation, underscoring the need for safe and secure AI practices in the workplace. The creator of the video remains unidentified, and the agency has declined to comment, citing pending litigation. This incident is a stark reminder of the harm that can result from the misuse of AI. By joining Project Cerebellum, you can help establish guardrails for AI and ensure incidents like this are properly addressed. Below, we MAP the current case to the HISPI Project Cerebellum TAIM (Measure function).

JOIN US

Matched TAIM controls

Suggested mapping derived from embedding similarity (not a formal assessment). Browse all TAIM controls.

Alleged deployer
unknown-washington-state-patrol-employee(s)
Alleged developer
unknown-deepfake-technology-developers
Alleged harmed parties
collin-pearson, lgbtq+-law-enforcement-officers, washington-state-patrol-employees, epistemic-integrity

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1345


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.