Critique of Algorithm Use in Government: Highlighting the Need for Responsible AI Governance
This analysis examines the use of algorithms by government entities, revealing areas where improvements are necessary to ensure trustworthy...
This incident underscores the need for responsible AI governance. The biased passport photo checker system, which disproportionately fails d...
Exploring an AI algorithm that misclassified a Jewish baby stroller. Understanding how incidents like these highlight the need for trustwort...
Exploring how AI, particularly YouTube algorithms, may have contributed to the radicalization process of the Christchurch shooter. This inci...
This incident involving the allocation of early COVID-19 vaccines at Stanford University underscores the need for robust AI governance in pu...
The recent gender bias complaints against Apple Card highlight the urgent need for trustworthy AI in finance. This AI incident maps to the G...
The U.S. Department of Housing and Urban Development (HUD) has charged Facebook for facilitating housing discrimination through its ad platf...
In this case, the court ruled that Deliveroo's AI algorithm exhibited discrimination in delivery assignment. This AI incident maps to the Go...
A job screening service has halted facial analysis of applicants, demonstrating the importance of trustworthy AI. This AI incident maps to t...
This AI incident, concerning a lawsuit over an alleged flawed teacher evaluation system in Houston schools, highlights the need for robust A...
The Tesla Autopilot system misreading red traffic lights highlights the importance of safe and secure AI, a key focus of Project Cerebellum....
Recent disturbing YouTube videos have been observed tricking children, highlighting the need for safe and secure AI. This AI incident maps t...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.