Racially biased AI incident in the New Zealand passport system highlights the need for Trustworthy AI
An incident involving a racially biased AI system at the New Zealand passport office has come to light. The system, designed to validate applicants' facial images, reportedly asked an applicant of Asian descent to open their eyes wider multiple times. This incident underscores the urgent need for Trustworthy AI and for safe, secure AI systems. As part of our mission at Project Cerebellum, we are building an AI incident database to help prevent such incidents in the future. Stay informed about AI governance and join us in shaping a responsible AI ecosystem: JOIN US. This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/48
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.