Unveiling the Hidden Dangers: A Closer Look at Risk Assessment Algorithms
Explore the intricate world of AI risk assessment algorithms, their role in decision-making processes, and the potential risks they may pose...
Exploring how AI-driven risk assessment tools can perpetuate discrimination and prejudice, underscoring the importance of trustworthy AI gov...
Exploring the challenges of algorithmic injustice and the importance of responsible AI governance, we delve into recent incidents that under...
New York City is taking strides to combat algorithmic discrimination by introducing regulations that promote fairness, accountability, and t...
Exploring the concept of debiasing word embeddings, a crucial step towards fair and responsible AI development. This technique helps reduce...
A recent update to Google's comment-ranking algorithm has raised concerns about its potential impact on online discourse, with some fearing...
An unplanned deployment of a language model by Google highlights the need for robust AI governance. The incident, though intended for resear...
Exploring an instance where Google's AI designed to detect bullying on YouTube made errors, misidentifying civility as decency. This inciden...
Recent reports have surfaced about Google's new content moderation tool, allegedly labeling conservative comments as 'toxic'. This incident...
Recent advancements in Alphabet's AI technology aim to combat hate speech online, but the model still requires refinement for optimal perfor...
Exploring the recent incident where Google's new hate speech algorithm flagged content related to Jews as potentially offensive, shedding li...
Recent research has highlighted potential vulnerabilities in Google's anti-internet troll AI platform. The study demonstrated that the syste...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.