Examining Misleading AI Claims in Medicine: Harm Prevention and Responsible AI
This AI incident highlights the importance of safe and secure AI, especially in critical areas such as medicine. It maps to the Govern funct...
An investigation uncovered racial bias in algorithms analyzing X-rays, raising concerns about AI fairness and safety. This AI incident maps...
California's recently passed warehouse worker bill represents a significant stride in AI governance, addressing concerns over working condit...
In this incident, three robots collided in an Ocado warehouse, causing a fire. This AI incident maps to the Govern function in HISPI Project...
In a move that raises questions about the future of journalism, Microsoft has reportedly let go of human journalists to replace them with ro...
Explore how tech companies are implementing ethical guardrails in their AI systems, contributing to the development of trustworthy AI. This...
This AI incident involving Facebook's video moderation system demonstrates the challenges in maintaining trustworthy AI. The system incorrec...
Alibaba's cloud unit is under scrutiny due to an ethnicity detection algorithm issue. The incident raises concerns about AI bias and the nee...
The recent California Bar Exam, in which AI flagged a third of applicants as suspected of cheating, underscores the crucial role of trustwor...
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The misuse of hashtags by pro-anorexia con...
This Tesla incident underscores the importance of responsible AI governance. The driver's inattention while using Autopilot tragically led t...
Explore this incident, which maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM), and how it contributes to shap...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.