Incident: New Zealand Passport Robot Displays Racial Bias towards Applicant of Asian Descent
An incident occurred at a passport robot in New Zealand where an applicant of Asian descent was instructed to 'open your eyes' repeatedly du...
An AI system used to judge a beauty contest displayed racial bias, favoring lighter-skinned contestants over those with darker skin tones. T...
An AI-judged beauty contest, with 6,000 participants, unexpectedly showed a bias towards white contestants. This raises concerns about the l...
Exploring the uncharted territory of AI integration, a beauty contest in Japan has made history by enlisting an artificial intelligence syst...
In a global beauty contest judged by AI, an alarming trend emerged - the majority of winners were white. This incident highlights the need f...
An AI system used to judge a beauty contest faced accusations of racial bias after the winners primarily consisted of light-skinned individu...
Exploring the issue of bias in AI systems, a consequence of our own data and design choices. Discussing the importance of responsible AI pra...
Exploring the impact of racial bias in Artificial Intelligence systems, this article sheds light on the need for safe and secure AI developm...
A recent AI-judged beauty contest inadvertently demonstrated the potential for AI to perpetuate racial bias, highlighting the need for respo...
Understanding the potential risks and impacts of faulty algorithms in AI systems is crucial for responsible AI governance. Learn from real-w...
Explore the innovative approach of The DAO, a decentralized autonomous organization, in shaping the future of responsible AI governance. By...
Delve into the 2016 DAO attack, a significant incident that underscored the need for robust AI governance and safe and secure AI practices....
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.