Understanding the Electric Elves Mishap: A Case Study in Responsible AI
This AI incident, known as Electric Elves, sheds light on the complexities of trustworthy AI and AI governance. Its lessons underscore the need …
Related AI incident case studies include:
This incident highlights the need for stringent governance in robotic surgeries, a critical aspect of AI-assisted medical procedures. …
Recently, Google was instructed to modify its autocomplete function due to concerns over accuracy and potential harm in Japan. …
This AI incident highlights the importance of trustworthy AI in safety-critical systems, with Google's Nest smart smoke alarm as a prime example. …
Exploring a reported gender bias issue on LinkedIn, we underscore the importance of safe and secure AI. This AI incident maps to the Govern function …
An unfortunate incident involving a racially biased AI passport robot in New Zealand has highlighted the need for responsible AI governance. …
Understanding the potential fallout of flawed AI algorithms is crucial for promoting safe and secure AI. This AI incident maps to the Govern function …
Explore a pivotal moment in blockchain history: The DAO hack incident and its subsequent soft fork and hard fork solutions. …
Delve into the infamous Taylor (Tay) bot incident, a stark reminder of the need for trustworthy AI. This AI incident maps to the Govern function …
A concerning incident in which a mall security robot knocked down and ran over a toddler in Silicon Valley has raised important questions …
The unfortunate incident involving Joshua Brown, who lost his life in a self-driving Tesla car accident, underscores the importance of responsible …
This case study highlights an incident involving racial bias in Google's image search results for black teenagers. …
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
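For reference managers, that citation could be expressed as a BibTeX entry along the following lines; the entry key and field layout here are our own sketch based on the citation above, not an official format provided by the AIID.

Hypothetical entry key; field values copied from the citation above.
@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}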
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.