Thousands Accused of Fraud by Discriminatory AI Algorithm: A Case for Responsible AI Governance
Explore the consequences of a biased algorithm, learn how such incidents map to the Govern function in HISPI Project Cerebellum Trusted AI M...
A recent study reveals that personal voice assistants struggle to recognize Black voices. This underscores the importance of resp...
Dive into the details of AI Incident #120, shedding light on its implications for trustworthy AI and safe AI governance. Understanding these...
Delve into AI Incident #115, its impact on trustworthy AI practices, and the role of Project Cerebellum's Govern function in maintaining saf...
Exploring a real-world AI incident involving an autonomous vehicle, this article highlights the importance of safe and secure AI in autonomo...
This AI incident demonstrates the unintended consequences of autonomous vehicles, highlighting the importance of responsible AI and governan...
Dive into Incident #118, a valuable learning opportunity for safe and secure AI development. This incident maps to the Govern function in HI...
Delve into the details of Incident #119, a valuable learning opportunity that sheds light on the importance of trustworthy and safe AI. This...
Incident analysis of #122 highlights a significant event in our AI governance landscape. This AI incident maps to the Govern function in HIS...
Explore the unintended bias incident in a predictive modeling AI system, highlighting its impact on fairness and accountability in AI. This...
This AI incident, Incident #123, demonstrates the importance of robust governance mechanisms for responsible AI development and deployment....
Exploring a recent autonomous vehicle incident that underscores the importance of trustworthy, safe, and secure AI. This AI incident maps to...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.