Robotic Surgery Incidents Highlight Need for Safe AI Guardrails - 144 Deaths Since 2000
This robotic surgery incident study underscores the importance of trustworthy AI and robust safety measures in healthcare. With 144 reported...
More AI incident write-ups
In this article, we delve into the recent ruling requiring Google to modify its autocomplete function in Japan, highlighting the importance...
This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM). Understanding and addressing such issues i...
This incident sheds light on the need for robust governance in AI systems, as it illustrates potential gender bias in LinkedIn's recommendat...
This AI incident highlights the need for trustworthy, safe, and secure AI governance. The robot incorrectly instructed an applicant of Asian...
Explore the repercussions of faulty AI systems in this insightful article, emphasizing the importance of responsible and trustworthy AI. Thi...
Dive into a historic Ethereum crisis: The DAO hack, soft fork, and hard fork. Learn how these events demonstrated the importance of governan...
Dive into the case of Tay, a conversational bot developed by Microsoft that generated controversial responses due to inappropriate content...
An unfortunate incident involving a toddler and a security robot in Silicon Valley highlights the importance of safe and secure AI systems...
The unfortunate death of Joshua Brown in a Tesla self-driving accident serves as a grim reminder of the need for trustworthy, safe, and secu...
This incident highlights the potential for racial bias in AI, an issue that underscores the importance of trustworthy and responsible AI gov...
Exploring the impact of machine bias in AI, its potential consequences on fairness and trustworthiness, and how you can help prevent harm th...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
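For reference-manager users, a BibTeX entry corresponding to the citation above could look roughly like the following; the entry key and field layout are our own choices rather than an official entry, so adjust them to your bibliography style.

@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}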
A pre-print is available on arXiv; see also the database snapshots and citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.