Bias in AI Deepfake Detection Undermines Election Security in Global South
September 2, 2024
AI deepfake detection tools are reportedly failing voters in the Global South due to biases in their training data. These tools, trained predominantly on English-language content and Western faces, show reduced accuracy when analyzing manipulated media from non-Western regions. This detection gap threatens election integrity and amplifies misinformation, leaving journalists and researchers without adequate resources to combat the problem.
- Alleged deployer
- unknown-deepfake-detection-technology-developers, true-media, reality-defender
- Alleged developer
- unknown-deepfake-detection-technology-developers, true-media, reality-defender
- Alleged harmed parties
- global-south-citizens, political-researchers, global-south-local-fact-checkers, non-native-english-speakers, global-south-journalists, civil-society-organizations-in-developing-countries
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/801
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on each incident page.