AI Governance: University of Illinois Halts Remote-Testing Software Over Privacy Concerns
The University of Illinois has decided to discontinue its remote-testing software following student complaints about privacy violations. This...
Investigating the impact of AI-curated content on e-commerce platforms, this study uncovers instances of vaccine misinformation. Highlightin...
This case study highlights potential disadvantages faced by Black, Indigenous, and People of Color (BIPOC) students using exam monitoring so...
In this article, we delve into an intriguing incident involving a possible livestream evasion strategy using music. We explore the role of S...
Explore the reasoning behind Facebook's AI moderation system rejecting certain fashion ads, shedding light on the importance of responsible...
This AI incident, pertaining to the govern function in HISPI Project Cerebellum Trusted AI Model (TAIM), highlights challenges faced by tech...
This AI incident, involving YouTube's algorithm, serves as a stark reminder of the need for safe and secure AI. The blocking of the 'black a...
This AI incident highlights the challenges of safe and secure AI navigation. The incident maps to the Govern function in HISPI Project Cereb...
Investigate the ethical quandaries faced by an AI oracle model, utilizing Reddit data, highlighting the importance of trustworthy AI. This A...
This AI incident, involving voice cloning, demonstrates the potential risks of unregulated AI applications. The fraud resulted in a signific...
This incident involving inappropriate content on YouTube highlights the need for responsible AI governance. The AI used for content moderati...
In response to recent negative publicity, this software company is modifying its approach towards the development of Trustworthy AI. This AI...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
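As a rough illustration of how pinning to a weekly snapshot can give stable references, the sketch below loads a hypothetical local export of the AIID and filters it by keyword. The filename, CSV format, and column names ("incident_id", "title", "description") are assumptions for illustration only, not the database's documented schema.

```python
# Minimal sketch, assuming a local CSV export of a weekly AIID snapshot.
# The filename and column names below are hypothetical, not the official schema.
import pandas as pd

snapshot = pd.read_csv("aiid_snapshot_2024-01-01.csv")

# Keep only incidents whose description mentions remote proctoring.
mask = snapshot["description"].str.contains("proctoring", case=False, na=False)
proctoring_incidents = snapshot.loc[mask, ["incident_id", "title"]]

print(proctoring_incidents.to_string(index=False))
```

Because analyses reference a dated snapshot rather than the live database, results remain reproducible even as new incidents are added; the per-incident "Cite this incident" link remains the authoritative citation for any single entry.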