Human-Like Biases in Automatically Derived Semantics: A Case for Responsible AI Governance
Exploring the potential biases found in automatically derived semantics raises concerns about AI safety and trustworthiness. This AI inciden...
- A Russian AI chatbot was found promoting Stalin and violence just two weeks after its launch, raising concerns about the need for trustworth...
- An Ombudsman report reveals inadequate transparency in Centrelink's debt recovery system, leading to unfair treatment of some customers. Thi...
- This AI incident, involving Amazon's phone case production, serves as a reminder of the importance of responsible AI and safe design. By lea...
- This incident underscores the importance of trustworthy AI. Harm prevention is essential in AI governance, especially when dealing with fami...
- Explore this incident, which maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM), and how it contributes to shap...
- This investigation uncovers a concerning instance of racially biased search results, highlighting the need for trustworthy AI governance. By...
- This Tesla incident underscores the importance of responsible AI governance. The driver's inattention while using autopilot tragically led t...
- This incident involving a child being hit and rolled over by an AI-powered security robot underscores the importance of trustworthy AI. By u...
- Explore the impact of The DAO incident, a landmark event in blockchain history, and learn how it maps to the Govern function within Project...
- Delve into the intriguing case of 'Norman', an AI model exhibiting psychopathic behavior. This incident sheds light on the importance of res...
- Exploring the persistent issue of racial bias in AI systems. Emphasizing the importance of responsible AI, trustworthy AI, and safe and secu...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see the database snapshots and citation guide for further details.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.