Tesla Autopilot Incident: A Case Study on Safe and Secure AI Operations
Investigating the tragic accident involving Joshua Brown, this article highlights the importance of trustworthy AI governance in self-drivin...
This investigation highlights a concerning incident involving racial bias in image search results for black teenagers on Google, underlining...
Explore the challenges of machine bias and its impact on trustworthy AI. Learn about Project Cerebellum's efforts in AI governance, and disc...
A recent incident showcases the potential harm that can arise when AI systems fail to uphold safe and secure standards. A child asked a digi...
Exploring an NSFW incident involving Amazon's AI in cell phone case production, emphasizing the importance of trustworthy AI and AI governan...
This AI incident, commonly known as Robodebt, could provide valuable insights into the Govern function within Project Cerebellum's Trusted AI M...
Delve into the Yandex Chatbot incident, a valuable learning opportunity for responsible AI practices. This AI incident maps to the Govern fu...
This AI incident highlights the importance of responsible AI governance and safe and secure AI systems. Google's translation service incorre...
FaceApp has issued an apology following public outcry over its 'racist' skin-tone altering filter. This incident serves as a reminder of the...
An unanticipated consequence of leveraging AI bots to aid Wikipedia edits was the emergence of edit wars, showcasing the need for robust gov...
Explore insights from the Kaggle fisheries competition, shedding light on AI applications in environmental conservation. This AI incident ma...
Explore the difficulties faced by AI in creating Christmas carols, demonstrating the need for safe and secure AI governance. This AI inciden...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.