The Simplification of Deepfakes: A Growing Concern for Responsible AI Governance
Explore the alarming ease with which deepfakes can be created, their potential impacts on trustworthy AI, and the role of safe and secure AI...
An AI-generated speech, attributed to former U.S. President Barack Obama, was circulated online. The speech did not originate from the forme...
This article delves into a comprehensive analysis of the much-debated COMPAS recidivism algorithm, highlighting its impact on fairness and t...
In the rapidly evolving landscape of Artificial Intelligence, ensuring fairness has emerged as a key concern. This article sheds light on th...
A recent study published in the Proceedings of Machine Learning Research has questioned the accuracy of bail algorithms used by courts acros...
Exploring the limitations of AI in predicting recidivism, a study revealed its results were comparable to human judgement. The findings high...
A recent study has uncovered a surprising fact: algorithms used to predict repeat offenders are no more accurate than inexperienced humans....
A recent study by the National Institute of Justice challenges the traditional notion that AI-driven algorithms are more accurate than human...
Artificial Intelligence (AI) is increasingly being employed to aid in criminal justice, from convicting criminals to determining jail senten...
Exploring the ongoing debate around eliminating racial bias in criminal justice algorithms, this article highlights the need for responsible...
In a recent study, human workers on Mechanical Turk outperformed the popular risk assessment tool, COMPAS, in predicting recidivism rates. T...
A recent study challenges the reliability of court software used for predicting criminal risk, suggesting it may be no more accurate than su...
Read moreData source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.