Facial Recognition System in Buenos Aires Triggers Police Checks Based on False Matches

February 5, 2024

The facial recognition system in Buenos Aires has drawn concern after false matches led to wrongful police checks, potentially infringing on privacy rights. Preliminary judicial investigations suggest the technology may also have been used for unauthorized surveillance and data collection.

This incident underscores the importance of responsible AI governance, particularly in the area of harm prevention. By joining Project Cerebellum, you can help establish guardrails for AI, contribute to our AI incident database, and shape the future of AI with HISPI Project Cerebellum TAIM (Govern).

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
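The mapping above is produced automatically from embedding similarity rather than by formal assessment. As a rough illustration of how such a mapping can work, the sketch below scores an incident embedding against control embeddings with cosine similarity and keeps controls above a cutoff. All names, vectors, and the threshold are illustrative assumptions, not the actual HISPI implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_controls(incident_embedding, control_embeddings, threshold=0.5):
    """Return (control_id, score) pairs whose similarity meets the threshold,
    sorted from most to least similar. The threshold is an arbitrary example."""
    scored = [
        (cid, cosine_similarity(incident_embedding, emb))
        for cid, emb in control_embeddings.items()
    ]
    return sorted(
        [(cid, s) for cid, s in scored if s >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )

# Toy example with made-up 3-dimensional embeddings (real systems use
# high-dimensional vectors from a text-embedding model).
incident = [0.9, 0.1, 0.3]
controls = {
    "GOVERN-1": [0.8, 0.2, 0.4],   # points in a similar direction
    "MAP-2": [-0.5, 0.9, 0.1],     # points elsewhere; filtered out
}
print(match_controls(incident, controls))
```

Because such a mapping is purely geometric, a high score only indicates topical closeness, which is why the page labels it a suggestion rather than a formal assessment.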

Alleged deployer
government-of-argentina, government-of-buenos-aires, argentinean-ministry-of-security
Alleged developer
government-of-argentina
Alleged harmed parties
argentinean-citizens, buenos-aires-residents, guillermo-ibarrola

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/829

Data source

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.