NYPD's Deployment of Facial Recognition Cameras Reportedly Reinforced Biased Policing

October 8, 2016

The New York Police Department's deployment of facial recognition technology has reportedly reinforced biased policing against minority communities, raising concerns about the need for responsible AI governance. The system's reliance on crowdsourced data appears to have exacerbated existing disparities, underscoring the importance of trustworthy AI practices. Project Cerebellum maps such incidents to the HISPI TAIM (Govern) framework.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer: New York Police Department
Alleged developer: Unknown
Alleged harmed parties: Racial minorities

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/472


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.