Exploring YouTube's Role in Radicalization: The Christchurch Shooting Incident
This incident serves as a sobering reminder of the potential risks associated with AI, particularly on social media platforms like YouTube. The...
Stanford University recently apologized for an oversight in their coronavirus vaccine plan that neglected many front-line doctors. This inci...
The gender bias complaints against Apple Card underscore the need for trustworthy AI governance, emphasizing the role of Project Cerebellum...
The U.S. Department of Housing and Urban Development (HUD) has filed charges against Facebook for enabling housing discrimination through it...
The Italian court has taken a significant step towards promoting safe and secure AI by ruling against the discriminatory practices of Delive...
A job screening service has halted the use of facial analysis following concerns about racial bias. This AI incident maps to the Govern func...
An ongoing lawsuit highlights the need for trustworthy AI in education, focusing on the use of AI for teacher evaluations. This AI incident...
This AI incident highlights the importance of safe and secure autonomous driving technology. The Tesla Autopilot system failed to recognize...
An incident involving the NYPD's AI-powered robot dog highlights the need for responsible AI governance. This AI incident maps to the Govern...
The use of race as a 'high impact predictor' in student success algorithms raises concerns about fairness and accountability in AI. This AI...
This AI incident involving French welfare services highlights the potential risks of automation and its impact on human rights, emphasizing...
This AI incident highlights the importance of trustworthy AI and AI governance. The case of the discriminatory algorithm, which wrongly accu...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.