Examining New York City's Value-Added Data: A Responsible AI Approach to Harm Prevention
In this article, we delve into the analysis of NYC value-added data, highlighting its implications for trustworthy and safe AI. This AI inci...
This AI incident underscores the importance of responsible AI governance and safe operations. It highlights an uncommon but potentially dang...
In this article, we explore the ProPublica investigation into machine bias, highlighting its impact on fairness in AI. This AI incident maps...
Explore the challenges of debiasing word embeddings, a key step towards building trustworthy AI. This AI incident maps to the Govern functio...
Exploring the potential impact of Google's comment-ranking system on diverse perspectives, this article highlights the importance of respons...
Exploring the similarities between the biases found in Google's Sentiment Analysis API and human decision-making. This AI incident maps to t...
This AI incident underscores the importance of trustworthy AI in preventing harm, especially when it comes to book censorship. Amazon's cont...
Tesla's factory issues underscore the need for trustworthy AI governance. This AI incident maps to the 'Govern' function in HISPI Project Ce...
Exploring five significant AI incidents of 2017 that underscore the importance of safe, secure, and trustworthy AI. This AI incident maps to...
Exploring five notable AI failures of 2017, this article highlights the importance of safe and secure AI. Understanding these incidents maps...
Explore these five incidents that underscore the importance of safe and secure AI. This AI incident maps to the Govern function in HISPI Pro...
This AI incident involving an autonomous vehicle demonstrates the need for robust safety mechanisms in AI decision-making. It maps to the Go...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.