Starbucks prioritizes responsible AI practices for work schedules
Exploring how Starbucks utilizes responsible AI to manage employee work schedules, addressing fairness, predictability, and data privacy con...
Recent reports suggest that Starbucks may be underpaying its baristas, sparking concerns about fair compensation and workforce management. T...
An investigation reveals that AI-driven scheduling software employed by the global coffee chain, Starbucks, has caused distress among its wo...
In response to recent AI incidents, the software company is taking decisive steps towards enhancing trustworthy AI practices. Learn about th...
Explore the profound impact of machine bias, as highlighted in the ProPublica investigation on criminal sentencing systems. The study reveal...
Recent claims by ProPublica regarding racial bias in a particular algorithm have sparked debates on AI fairness. However, a closer look at t...
Explore the troubling reality of racial bias in algorithms used to determine sentences in U.S. courts, raising significant concerns about re...
Exploring recent incidents demonstrating how AI, if improperly trained or designed, can inadvertently reinforce racial biases, highlighting...
Exploring the alarming trend of racial bias in algorithms, particularly those affecting black men. This analysis highlights the urgent need...
An analysis of a widely-used algorithm in crime prediction has revealed its performance is on par with random human predictions, casting dou...
A revealing analysis by ProPublica has uncovered racial bias in the COMPAS criminal justice risk scoring system. The study highlights the im...
Exploring the potential for algorithmic bias in criminal risk assessment tools and their impact on fairness and justice in AI governance.
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.