Italian Court Upholds Fairness in AI: Deliveroo Rider-Ranking Algorithm Found Discriminatory
A landmark decision by an Italian court underscores the importance of responsible AI governance. The court ruled against Deliveroo's rider-r...
A job screening service temporarily halted its facial analysis process used on applicants, raising concerns about AI bias and privacy. This...
This incident raises questions about the responsible use of AI in education, particularly when it comes to high-stakes evaluations. It maps...
An alarming incident involving Tesla's Autopilot system has raised concerns about the safety of self-driving cars. The AI system failed to r...
The deployment of NYPD's robot dog sparked controversy, highlighting the need for safe and secure AI governance. This AI incident maps to th...
This incident raises concerns about the fairness of AI in education, highlighting the use of race as a 'high impact predictor' of student su...
This AI incident highlights the potential risks of automated debt collection in welfare services, emphasizing the need for responsible AI go...
This AI incident highlights the need for robust AI governance to prevent harm. The discriminatory algorithm, a stark reminder of the challen...
A recent study underscores challenges faced by personal voice assistants when processing black voices, emphasizing the importance of trustwo...
Uncovering potential biases in AI: Twitter's photo crop algorithm exhibits a preference towards white females, raising concerns for responsi...
Explore the potential consequences of an algorithm aimed at 'equity', which may inadvertently exclude 2 million Californians from additional...
Assessing the safety of autonomous vehicles remains a critical concern. While Tesla claims its Autopilot system enhances safety, crash victi...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.