Addressing Bias in AI: The Islamophobia Issue
Exploring an unfortunate reality: the biases in AI that can contribute to Islamophobia. This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).
Incident involving job losses at Xsolla, mapped to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). CEO's letter add...
This AI incident serves as a cautionary tale, highlighting the importance of robust governance for safe and secure AI. The unsupervised inte...
This incident raises questions about the role of AI in modern warfare, particularly autonomous weapons. It highlights the need for robust AI...
This AI incident involving Facebook's facial recognition technology underscores the necessity for privacy protection in the development and...
Exploring an instance of misleading AI claims in the medical field, this article underscores the need for trustworthy AI and safe & secure AI...
Uncovering a concerning case of algorithmic bias, this incident illustrates how AI can perpetuate racial disparities in healthcare delivery.
The new California law aims to improve warehouse conditions for workers, emphasizing the need for trustworthy AI in work environments. This...
An unexpected collision between robots ignited a fire at an online-only grocery store in the UK, raising concerns about the need for reliable...
In this analysis, we delve into Microsoft's decision to replace human journalists with robots. This move highlights the need for trustworthy...
Explore the recent moves by tech firms to establish ethical guardrails around AI development, contributing to the growth of trustworthy and...
Incident highlights a lapse in Facebook's AI moderation system, where harmless videos of car washes were incorrectly flagged as violent content...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.