BIAS IN AI SYSTEMS: Racially Biased Passport Machine Mistakes Squinting Eyes for Closed Eyes

An incident involving a passport machine at New York's JFK Airport has raised concerns about potential bias in AI systems. The machine rejected a traveler's photo, stating that "the subject's eyes are closed." The traveler was quick to point out that her eyes were not closed; they merely appeared narrowed because she was squinting at the camera's bright flash. This incident underscores the need for safe and secure AI, particularly in critical areas like border control. The HISPI Project Cerebellum TAIM is a valuable resource for managing such incidents, fostering responsible AI governance and harm prevention.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/48


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.