Controversial Employee Dismissals at Xsolla Spark Discussion on Responsible AI Use
Xsolla's mass layoff of 150 employees, utilizing big data and AI analysis, has ignited debate over the ethical use of AI. The CEO's letter f...
The recent incident involving an unsupervised GPT-3 bot on Reddit underscores the importance of responsible AI governance and trustworthy AI...
This incident raises questions about the use of AI-powered weapons in conflict zones, highlighting the need for trustworthy AI and robust AI...
Facebook agreed to pay a staggering $550 million to settle a lawsuit concerning its use of facial recognition technology. This incident unde...
Exploring the consequences of exaggerated AI claims in medicine. Learn how Project Cerebellum's govern function in HISPI Trusted AI Model (T...
Exploring an AI incident revealing racial bias, this analysis underscores the need for responsible AI governance in healthcare. This AI inci...
The recently passed California warehouse worker bill highlights the importance of responsible AI governance in regulating tech giants like A...
An unfortunate incident occurred at an online-only grocery store in the UK, where a series of robot collisions resulted in a fire. This AI i...
Exploring Microsoft's move to automate journalism roles, this incident underscores the need for safe and secure AI. This decision maps to th...
Explore how tech companies are setting up ethical guard rails to promote safe, secure, and trustworthy AI. This AI incident maps to the Gove...
This AI incident highlights the need for trustworthy AI governance. The instance involved an AI moderator mistaking videos of car washes for...
An incident involving Alibaba's cloud unit ethnicity detection algorithm has raised concerns. This AI incident maps to the Govern function i...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.