HISPI Project Cerebellum
AI Incidents

Facial Recognition System in Buenos Aires Triggers Police Checks Based on False Matches

February 5, 2024

Buenos Aires's facial recognition system mistakenly flagged innocent people as criminals, leading to wrongful stops and detentions. Judicial investigations indicate the technology may have been misused for unauthorized surveillance and data collection. Despite the privacy risks, the system has been widely used without full disclosure of standards or safeguards.
Alleged deployer
government-of-argentina, government-of-buenos-aires, argentinean-ministry-of-security
Alleged developer
government-of-argentina
Alleged harmed parties
argentinean-citizens, buenos-aires-residents, guillermo-ibarrola

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/829

Data source

Incident data is from the AI Incident Database (AIID).

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.