Evaluating the Performance of Crime Prediction Algorithms: A Comparison to Random Baselines
This analysis explores the limitations of a popular crime prediction algorithm and finds that it performs no better than random guessing.
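One way to make a claim like "no better than random guesses" concrete is to score the model's flags against a random baseline that flags the same number of locations. The sketch below is purely illustrative, using synthetic data and hypothetical names (`actual`, `model_pred`); it is not the code or data from the analysis itself.

```python
# Illustrative sketch only: compare a predictor's hit rate against a random
# baseline on synthetic data. All names and figures here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: 1 = a crime was recorded in that area/week, 0 = none.
actual = rng.binomial(1, 0.1, size=10_000)

# Hypothetical model output: 0/1 flags over the same area/weeks.
# (Here the "model" is itself random, so the two rates come out similar;
#  a real model's predictions would go in its place.)
model_pred = rng.binomial(1, 0.1, size=10_000)

# Random baseline: flag the same number of area/weeks, chosen uniformly at random.
random_pred = rng.permutation(model_pred)

def hit_rate(pred, truth):
    """Fraction of flagged area/weeks in which a crime actually occurred."""
    flagged = pred == 1
    return truth[flagged].mean() if flagged.any() else 0.0

print("model hit rate:   ", hit_rate(model_pred, actual))
print("random baseline:  ", hit_rate(random_pred, actual))
```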
This incident involving a deepfake of former President Barack Obama highlights the potential misuse of artificial intelligence for deceptive...
Explore six recent AI incidents and learn how they underscore the importance of trustworthy, safe, and secure AI development. This AI incident...
Explore ten notable AI failures in 2018, shedding light on the importance of trustworthy and safe AI. This AI incident map aligns with the G...
Dive into ten significant AI failures of 2018, each illustrating the importance of responsible AI governance. This AI incident analysis maps...
Exploring the real-life implications of AI automation in the workplace, this article sheds light on a case where a man was dismissed by an a...
Exploring five significant AI mishaps from 2017 that highlight the ongoing journey towards responsible AI and safe AI governance. This AI in...
Inspect these significant AI failures from 2017, underscoring the ongoing pursuit of trustworthy AI. Learn how Project Cerebellum's AI gove...
A tragic accident unfolded at a Volkswagen manufacturing facility in Germany, where an autonomous robot operated by AI malfunctioned, result...
Just a week after the iPhone X release, hackers claimed to have broken Face ID, highlighting potential vulnerabilities in facial recognition systems...
A recent incident involving a driverless car manufactured by Company X and Google's autonomous vehicle brought forth claims of a near-miss...
Explore the chilling consequences of a human-made AI misinterpretation in the 1983 Soviet nuclear false alarm incident. This AI incident map...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see also the database snapshots and citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.