Government's Poor Performance in Algorithmic Accountability: A Call to Action
The use of algorithms by the Government has raised concerns regarding accountability and transparency. This AI incident highlights the need...
Recent findings reveal that the UK passport photo checker system shows bias against dark-skinned women, emphasizing the importance of trustworthy...
Explore the implications of this AI incident, which reveals potential bias in an image recognition algorithm related to baby strollers. This...
Explore how the Christchurch shooter incident highlights the need for safe and secure AI, especially in content moderation. This AI incident...
This allocation incident highlights the need for transparency and fairness in AI governance, particularly in critical healthcare decisions....
The recent gender bias complaints against Apple Card serve as a stark reminder of the need for trustworthy and safe AI in financial technology...
The Department of Housing and Urban Development (HUD) has accused Facebook of violating the Fair Housing Act by allowing housing discrimination...
A recent court decision deemed Deliveroo's algorithm discriminatory, highlighting the importance of trustworthy and fair AI. This AI incident...
A job screening service has suspended facial analysis for applicants, signaling a commitment to trustworthy AI and safe & secure AI practices...
This incident sheds light on the critical role of trustworthy AI in education. The Houston Schools' teacher evaluation lawsuit underscores the...
This AI incident involving Tesla's Autopilot system underscores the importance of safe and secure autonomous driving technology. The system...
This incident highlights the need for responsible AI governance, emphasizing harm prevention through guardrails for AI. The disturbing video...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID as stable references. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.