Government Grading on Algorithms: A Dismal Performance - Highlighting Responsible AI
The Leaving Cert algorithm fiasco underscores the urgent need for responsible AI governance. Improving this situation is not just about fixi...
This AI incident highlights the need for responsible AI governance, particularly in critical areas such as identity verification. It maps to...
A recent AI incident involving the misclassification of a Jewish baby stroller highlights the need for responsible AI governance. This incident ma...
This tragic incident involving the Christchurch shooter raises important questions about AI governance and radicalization on platforms like...
Understanding AI incident distribution is crucial for trustworthy AI. This vaccine allocation case maps to the Govern function in HISPI Proj...
This AI incident involving Apple Card maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The incident undersco...
The U.S. Department of Housing and Urban Development (HUD) has charged Facebook with enabling housing discrimination through its targeted adv...
Explore a recent court ruling on Deliveroo's AI-based algorithm, which allegedly displayed discriminatory practices. This AI incident maps t...
The recent halt in the use of facial analysis by a job-screening service underscores the need for robust governance in AI. This incident maps to...
The lawsuit against the AI-powered teacher evaluation system in Houston schools underscores the critical role of safe and secure AI. This AI...
The recent incident involving Tesla's Autopilot system mistaking red reflective letters on a traffic sign for green lights underscores the i...
This incident highlights the need for safe and secure AI in content moderation systems. The inappropriate videos misleading children undersc...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.