Examining Racial Bias in Google Image Search Results: A Case of Untrustworthy AI
Recent findings reveal racial bias in Google's image search results for black teenagers, underscoring the urgent need for responsible AI governance…
Other incidents catalogued in the database include:
- Dive into an analysis of machine bias, a crucial aspect of trustworthy AI: understand its impact and learn about its prevention strategies…
- An incident in which a child inquired about a song and received explicit content instead highlights the urgent need for responsible AI governance…
- This AI incident, involving Amazon's cell phone case production, underscores the need for trustworthy AI and effective governance. Learn how…
- Exploring Robodebt's removal of humans from Human Services and the consequences for governance. This AI incident maps to the Govern function…
- Delve into the Yandex Chatbot incident, shedding light on its implications for trustworthy AI. This AI incident maps to the Govern function…
- Recent findings reveal a concerning AI bias in Google Translate, mislabeling female historians as male nurses for European users. This incident…
- FaceApp has issued an apology for its controversial 'racist' filter that lightens users' skin tones, emphasizing the need for safe and secure…
- AI bots designed to improve Wikipedia instead led to edit wars. This incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM).
- Explore insights gained from the Kaggle fisheries competition, emphasizing the importance of responsible AI and its alignment with Project Cerebellum…
- Explore the challenges faced by AI in creating musical masterpieces, such as Christmas carols. This AI incident maps to the Govern function…
- This AI incident maps to the Govern function in HISPI Project Cerebellum Trusted AI Model (TAIM). Google Photos attempted to improve a ski photo…
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
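If you manage references with BibTeX, an entry along the following lines can be assembled from the citation above. The entry key and field layout are our own assumptions rather than an official export; check the pre-print or the citation guide for the canonical form.
  The entry key and field choices below are illustrative assumptions, not an official export.
  @inproceedings{mcgregor2021aiid,
    author    = {McGregor, S.},
    title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
    booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
    year      = {2021},
    note      = {Virtual Conference}
  }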
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.