Exploring Racial Bias in TikTok's Algorithm: The Need for Responsible AI Governance
Delve into the controversy surrounding TikTok's algorithmic practices, raising concerns about racial bias. This AI incident maps to the Gove...
Explore the concerning issue of AI-fueled Islamophobia, a formidable challenge for trustworthy AI. This AI incident maps to the Govern funct...
This incident involving Xsolla's mass termination highlights the potential pitfalls of AI in decision-making processes. It underscores the i...
An incident involving an unsupervised GPT-3 bot on Reddit illustrates the importance of safe and secure AI governance. This AI incident maps...
This AI incident raises questions about the use of autonomous weapons systems, emphasizing the need for trustworthy and safe AI governance....
Facebook agreed to pay a staggering $550 million as part of a lawsuit settlement regarding the use of facial recognition technology. This in...
Investigating an instance of overstated AI claims in medicine, this article underscores the need for trustworthy AI and safe and secure AI p...
An alarming incident involving a health care algorithm has come to light, revealing it offered less care to Black patients compared to their...
The recent passing of California's Warehouse Worker Bill marks an important step in ensuring responsible AI practices, particularly at tech...
This AI incident at an online grocer in the UK highlights the importance of responsible AI governance and safe inventory management. The col...
This incident marks Microsoft's decision to automate journalism roles, shedding light on the need for responsible AI governance. The implica...
Key tech companies are taking steps to ensure the ethical deployment of Artificial Intelligence by implementing safety measures, aligning wi...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the suggested citation of a specific incident, use the “Cite this incident” link on each incident page.