Deliveroo's Controversial AI Algorithm: A Case Study in Responsible AI Governance
In this article, we delve into the recent court ruling against Deliveroo for using an allegedly 'discriminatory' algorithm. This incident un...
Related AI incident case studies:
- This housing discrimination case against Facebook highlights the importance of trustworthy AI and responsible AI governance. By allegedly en...
- The Apple Card algorithm has raised concerns over potential gender bias, leading to allegations against Goldman Sachs. This AI incident unde...
- An AI system, designed to alleviate human fear of robots, inadvertently expressed apprehension about its potential to cause harm. This incid...
- This humorous AI malfunction underscores the importance of trustworthy and safe AI. The incident can be related to the Govern function in HI...
- This AI incident highlights racial, gender, and socioeconomic bias in chest X-ray classifiers. It underscores the need for trustworthy AI an...
- Exploring Facebook's decision to label content related to the October 20 Lekki Massacre as 'false'. This incident highlights the importance...
- Explore the complexities behind spam filters, often overlooked AI applications. This incident maps to the Govern function in HISPI Project C...
- This case study highlights how false information regarding COVID-19 and voting can bypass Facebook's fact-checks. It underscores the importa...
- An incident involving the UK passport photo checker system demonstrates the potential risks associated with unchecked AI. This AI incident maps...
- Exploring the role of YouTube in radicalizing individuals, this article underscores the importance of safe and secure AI. This AI incident m...
- Explore an AI bias incident where a baby stroller was incorrectly labeled as Jewish, highlighting the need for trustworthy AI. This AI incid...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print of the paper is available on arXiv; see also the database snapshots and citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
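For convenience, the reference above could be expressed as a BibTeX entry along the following lines; the entry key and field layout are our own choices rather than an official record:

@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}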