AI Incident: Unfair Bias in Predictive Policing AI Algorithm Leads to Wrongful Imprisonment
This AI incident sheds light on the potential danger of unchecked AI, specifically predictive policing algorithms. The misuse of data and la...
Other incidents catalogued in the database include:
- An incident involving MIT scientists demonstrated the potential dangers of unregulated AI development when a 'psychopathic' AI was created b...
- Explore the role of the National Residency Matching Program in shaping a fair and efficient labor market, powered by AI. This AI incident ma...
- This AI incident highlights a concerning case of bias, demonstrating the importance of trustworthy AI. By analyzing this event, we can learn...
- Delve into the Electric Elves incident, a stark reminder of the need for safe and secure AI. This AI incident maps to the Govern function in...
- A recent study reveals a concerning 'nonnegligible' number of complications during robotic surgeries, with 144 reported deaths since 2000. T...
- The recent order to modify Google's autocomplete function highlights the importance of responsible AI governance. This incident maps to the...
- Google's Nest has halted sales of its smart smoke alarm following reports of a faulty feature, highlighting the importance of trustworthy AI...
- This incident highlights the importance of responsible AI governance in ensuring fairness. While LinkedIn denies allegations of gender bias...
- An incident involving a passport robot in New Zealand demonstrates the potential harm of unchecked AI. The applicant of Asian descent was to...
- Explore the potential impacts of faulty AI systems on businesses, emphasizing the importance of responsible AI governance and safety measure...
- Explore the events surrounding The DAO hack, soft fork, and hard fork, a pivotal moment demonstrating the need for trustworthy AI and safe &...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.