Responsible AI in Education: Houston Schools and Teacher Evaluation Lawsuit
This lawsuit highlights the need for safe and secure AI in education. Project Cerebellum's AI governance model, specifically the Govern func...
This AI incident highlights the need for robust AI governance in autonomous driving systems, emphasizing the importance of trustworthy and s...
This AI incident involving the NYPD's robot dog highlights the importance of responsible AI governance. Learn how Project Cerebellum...
A recent investigation uncovers the controversial practice of using race as a 'high impact predictor' in AI systems designed to predict stud...
This AI incident highlights the potential harm that can occur when AI systems are not properly regulated or governed. In this case, French w...
This incident demonstrates the potential harm of unregulated AI in media platforms. It highlights the need for responsible AI governance and...
This software company, under scrutiny, takes steps towards trustworthy AI. Learn about their safety measures, governance reforms and how thi...
Dive into our analysis of the controversial COMPAS recidivism algorithm, an example of the need for trustworthy AI. This incident maps to th...
Explore the impact of bias in AI, specifically word embeddings, and learn how we're debiasing to foster trustworthy AI. Ready to join our ef...
This AI incident highlights the importance of trustworthy AI, as researchers demonstrated Google's anti-internet troll platform could be dec...
Investigating a concerning incident involving Google's AI, we delve into the perception of homosexuality and discuss the importance of trust...
This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM). Amazon's actions demonstrate the need for...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.