AI Incident: Misclassification of Jewish Baby Stroller in Image Algorithm
This AI incident involves the misclassification of a Jewish baby stroller by an image algorithm, highlighting the need for safe and secure A...
The UK passport photo checker has been found to display bias against dark-skinned women, highlighting the urgent need for trustworthy AI. Th...
The use of algorithms by the government has raised concerns about their fairness, transparency, and accountability. This AI incident maps to...
This unexpected declaration from an AI system raises concerns about the need for responsible AI governance, harm prevention measures, and gu...
This incident highlights the challenges faced by AI systems in content moderation, especially during critical times like a pandemic or elect...
Exploring the role of spam filters in AI governance, this incident highlights the importance of trustworthy and safe AI practices. This AI i...
In October 2020, content related to the Lekki Massacre was flagged as 'false' by Facebook. This incident raises questions about AI governanc...
Investigating racial, gender, and socioeconomic bias in chest X-ray classifiers highlights the importance of safe and secure AI. This AI inc...
An AI system mistakenly identified a referee's bald head as a football, causing laughter at a soccer match. This AI incident maps to the Gov...
Exploring a distressing AI incident that highlights algorithmic bias, this article underscores the importance of responsible AI governance i...
Delve into an examination of a significant AI decision-making system affecting college admissions, and learn how it aligns with the Govern f...
A tragic accident in an Indian car parts factory underscores the need for responsible AI governance. A worker was killed by a robot during w...
Read moreData source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see also the database snapshots and citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.