Critical Evaluation: Government's Performance in AI Algorithm Implementation
This analysis highlights concerns about the government's AI algorithm deployment, emphasizing the importance of responsible AI governance.
Explore an AI incident that highlights the importance of fairness and transparency in AI systems. This AI incident maps to the Govern function...
An incident involving a misclassified image of a Jewish baby stroller highlights the need for safe and secure AI. This AI incident maps to the...
This AI incident underscores the need for safe and secure AI governance, as it highlights potential radicalization on social media platforms.
An examination of Stanford University's initial vaccine distribution priorities, focusing on medical residents.
This incident involving gender bias complaints against Apple Card highlights the need for safe and secure AI in financial technology.
The U.S. Department of Housing and Urban Development (HUD) has charged Facebook with enabling housing discrimination, raising concerns about...
In a landmark ruling, a court found that Deliveroo's algorithm exhibited discriminatory behavior towards its riders.
This incident highlights the need for safe and secure AI practices, particularly in job screening services, by halting facial analysis of applicants...
This lawsuit raises questions about the role of responsible AI in education, particularly in teacher evaluations. The incident maps to the Govern function...
This AI incident highlights a critical safety concern in autonomous driving, as the Tesla Autopilot system failed to identify red traffic lights.
This AI incident highlights the need for responsible AI governance in video platforms like YouTube.
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see the AIID database snapshots and citation guide for further details.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.