NYPD's AI-powered Robo-dog Incident Demonstrates the Need for Responsible AI Governance
This incident involving the NYPD's robotic dog highlights the importance of safe and secure AI. The rapid response to public backlash unders...
In an alarming development, race is being used as a 'High Impact Predictor' in some university admissions systems powered by AI. This practi...
Exploring the recent 'robo-debt' issue in French welfare services, this article emphasizes the need for safe and secure AI in government functio...
Explore a recent AI incident involving discriminatory algorithms and its alarming consequences for innocent families. This AI incident maps...
A recent study reveals challenges faced by personal voice assistants when processing black voices, emphasizing the need for trustworthy and...
An investigation into a concerning AI incident reveals that Twitter's photo crop algorithm favors white faces and women, potentially reinfor...
This AI incident in California's vaccine allocation algorithm highlights the need for transparency, fairness, and trustworthy AI practices...
Exploring the debate surrounding Tesla's Autopilot system, this article reviews several incidents that question its safety claims. The discu...
This AI incident underscores the need for trustworthy AI governance, emphasizing data privacy concerns in the tech industry. The chatbot's m...
A surveillance group's recent exposure of a Huawei patent raises concerns about the use of AI for ethnic profiling. This incident maps to...
A recent incident involving a black teen being kicked out of a skating rink after facial recognition misidentified her underscores the need...
Explore the risks of misuse in facial recognition technology, highlighting a site that allows anyone to act as a cop or stalker. This AI inc...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
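For LaTeX users, a BibTeX entry along the following lines may be convenient. This is only a sketch of the reference above: the entry type, citation key, and field layout are our own choices, so please verify against the official citation guide.
% Sketch only: citation key and field names chosen here for illustration.
@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}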
A pre-print is available on arXiv; see also the database snapshots and citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.