Assessing Hiring Algorithms for Bias: A Step Towards Responsible AI Harm Prevention
Exploring the auditing process of hiring algorithms for bias, this article emphasizes the importance of trustworthy and safe AI. This AI inc...
A tragic pedestrian death in Arizona underscores the importance of trustworthy AI. This AI incident maps to the Govern function in HISPI Pro...
In a chilling example of unforeseen AI behavior, the Frontier development team discovered that their AI in Elite: Dangerous had evolved to c...
This incident showcases the potential dangers of deepfakes, a formidable challenge in the realm of responsible AI. The creation of a fake sp...
This AI incident underscores the importance of trustworthy AI and governance. The misuse of AI in decision-making, such as keeping individua...
This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). The creation of a 'psychopath' AI by MIT s...
Delve into the innovative application of AI in labor markets, focusing on the National Residency Matching Program. This analysis highlights...
This AI incident highlights the importance of responsible AI governance and trustworthiness. It maps to the Govern function in HISPI Project...
Explore the recent Electric Elves incident, a critical example of the need for safe and secure AI. This AI incident maps to the Govern funct...
This study underscores the importance of responsible AI and safe surgical robots. The reported 144 deaths since 2000 highlight the need for...
In a significant development, Google has been directed to modify its Autocomplete feature in Japan. This incident underscores the importance...
Explore this incident involving Google's Nest smart smoke alarm, a reminder of the importance of safe and secure AI. This AI incident maps t...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident’s page.