Government Grading on Algorithms: A Failure in Responsible AI Governance
This incident highlights the importance of safe, secure, and trustworthy algorithms within government decision-making processes. This AI inc...
This case study demonstrates potential bias in an AI system, focusing on the UK passport photo checker that disproportionately rejected phot...
This AI incident highlights the importance of safe and secure AI, particularly in sensitive contexts such as images related to religion or c...
Explore how the Christchurch shooter incident underscores the need for responsible AI and robust AI governance. Understand the impact on pla...
Exploring the ethical considerations of vaccine distribution, this case study highlights Stanford University's prioritization of vaccines fo...
Gender bias complaints against Apple Card reveal a concerning aspect of fintech, underscoring the need for trustworthy and safe AI. This AI...
The U.S. Department of Housing and Urban Development (HUD) has accused Facebook of enabling housing discrimination by allowing real estate a...
A recent court ruling highlights the importance of responsible AI governance, as Deliveroo's use of an algorithm that appeared discriminator...
A job screening service has temporarily halted the use of facial analysis technology for applicant review, raising questions about the role...
Explore the lawsuit challenging an AI system used for teacher evaluations in Houston schools, highlighting the importance of responsible AI...
In this concerning AI incident, a Tesla vehicle operating on Autopilot mistook red traffic lights for green ones, demonstrating the importan...
Exploring concerning YouTube videos that manipulate children, highlighting the need for safe and secure AI, governance, and guardrails. This...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print available on arXiv. See also the database snapshots and citation guide.
We use weekly snapshots of the AIID for stable reference. For the suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.