Investigating Racial Bias in AI Systems: Harm Prevention and Guardrails for Trustworthy AI
Uncovering the root causes of racial bias in AI systems is crucial for safe and secure AI. This AI incident maps to the Govern function in H...
This incident involving Electric Elves sheds light on the importance of responsible AI governance. By understanding what went wrong, we can...
This AI incident highlights the importance of safety and transparency in medical robotics. The findings underscore the need for robust gover...
In a significant development, Google has been instructed to revise its autocomplete function in Japan. This AI incident maps to the 'Govern'...
Google's Nest has halted sales of its smart smoke alarm, highlighting the importance of safe and secure AI. This AI incident maps to the Gov...
This AI incident highlights the need for vigilant governance in ensuring fairness and non-discrimination in AI systems. Investigate the alle...
An incident involving an AI-powered passport robot in New Zealand has raised concerns about bias. The applicant, of Asian descent, was instr...
Explore the potential pitfalls of unchecked AI system errors in businesses. Learn about the importance of responsible AI, trustworthy AI, an...
Explore the consequences of significant blockchain events, such as the DAO hack and subsequent forks (soft and hard), in this informative ar...
Explore the Taylor Swift bot incident, a striking example of the need for safe and secure AI. This AI mishap maps to the Govern function in...
A distressing incident occurred in Silicon Valley where an AI-powered security robot unintentionally knocked down and ran over a toddler. Th...
The tragic accident involving Joshua Brown, a self-driving Tesla user, serves as a stark reminder of the importance of trustworthy AI operat...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.