Debiasing Word Embeddings: Promoting Fairness in AI - A Step Towards Trustworthy AI
Explore the challenges of debiasing word embeddings, a key step towards building trustworthy AI. This AI incident maps to the Govern functio...
Related articles
- In this article, we explore the ProPublica investigation into machine bias, highlighting its impact on fairness in AI. This AI incident maps...
- This AI incident underscores the importance of responsible AI governance and safe operations. It highlights an uncommon but potentially dang...
- In this article, we delve into the analysis of NYC value-added data, highlighting its implications for trustworthy and safe AI. This AI inci...
- Explore ten compelling instances where ungoverned AI has led to unintended consequences, emphasizing the importance of trustworthy and safe...
- Explore these alarming examples of AI misuse and learn why it's crucial to establish trustworthy, safe, and secure AI practices.
- Explore 10 alarming instances where AI operated without proper guardrails, highlighting the importance of safe and secure AI. This AI incide...
- This retrospective analysis of 14 years of FDA data sheds light on adverse events in robotic surgery, underscoring the importance of safe an...
- Delve into the analysis of AI Incident #173, a crucial learning opportunity in our journey towards trustworthy AI. Understanding such incide...
- This AI incident underscores the importance of responsible AI governance and harm prevention. It maps to the Govern function in HISPI Projec...
- This AI incident provides valuable insights into potential risks, illustrating the necessity of robust guardrails for AI in our society. It...
- Explore this instance of an AI malfunction, shedding light on its relevance to the Govern function within Project Cerebellum's Trusted AI Mo...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.