Understanding Facebook's Rejection of Fashion Ads: AI Incident Analysis
This AI incident analysis sheds light on Facebook's rejection of certain fashion ads, highlighting the importance of responsible AI and safe...
Incident: Facebook's A.I. system incorrectly categorized a video featuring black men as 'primates'. This underscores the need for trustworth...
A recent incident involving a Black teen being kicked out of a skating rink highlights the need for safe and secure AI. The girl was mistake...
This facial recognition website, demonstrating the potential for mass surveillance or stalking, underscores the importance of safe and secur...
Explore the consequences of an algorithm misstep in healthcare, a vital area of AI. Learn about the role of responsible AI, AI governance, a...
This case study highlights the use of algorithms by Amazon to make decisions affecting its Flex workers, underscoring the need for trustwort...
Recent courtroom testimony sheds light on the questionable accuracy claims of SF gunshot sensors, emphasizing the need for trustworthy and r...
Exploring an incident where Amazon's AI cameras mistakenly penalized drivers for infractions they didn't commit. Understanding the importanc...
Exploring an unfortunate reality of biased AI systems, this article sheds light on the Islamophobia problem. This incident maps to the Gover...
This AI incident raises concerns about potential racial bias within TikTok's algorithm, emphasizing the need for trustworthy and unbiased AI...
An AI patent by Huawei for Uighur detection has stirred controversy, underscoring the need for trustworthy and safe AI. This incident maps t...
The shutdown of the AI-powered 'Genderify' platform, which incorrectly assigned gender to photos based on biased algorithms, underscores the...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
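Because references are pinned to a weekly snapshot rather than the live site, one practical approach is to resolve incident details from a fixed local copy. The sketch below is a minimal illustration of that idea, assuming the downloaded snapshot has been exported to a CSV file named aiid_incidents.csv with columns incident_id, title, and date; these file and column names are assumptions for illustration, not part of the AIID's documented interface.

```python
# Minimal sketch: looking up an incident in a locally stored AIID snapshot.
# Assumptions (not from the AIID itself): the snapshot has been exported to
# "aiid_incidents.csv" with columns "incident_id", "title", and "date".
import csv


def load_incidents(path="aiid_incidents.csv"):
    """Read the snapshot CSV into a dict keyed by incident_id."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["incident_id"]: row for row in csv.DictReader(f)}


def reference_incident(incidents, incident_id, snapshot_date):
    """Build a plain-text reference pinned to a specific snapshot date."""
    inc = incidents[incident_id]
    return f"AIID incident {incident_id}: {inc['title']} (snapshot of {snapshot_date})."


if __name__ == "__main__":
    incidents = load_incidents()
    # "123" is a placeholder incident ID, not a real AIID entry.
    print(reference_incident(incidents, "123", "2021-09-10"))
```

This keeps internal references stable even if incident records on the live database are later edited; the official per-incident citation should still come from the "Cite this incident" link on the incident page.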