Examining Google's Racial Bias in Image Search Results: A Case Study of Harm Prevention in AI
This incident underscores the importance of responsible image search algorithms and highlights the need for guardrails in AI. Google's rac...
Dive into the topic of machine bias, a significant challenge in creating trustworthy AI. This AI incident maps to the Govern function in HIS...
An incident involving a digital assistant providing inappropriate content in response to a minor's request for a song raises concerns about...
Exploring an NSFW incident where Amazon's AI misstep resulted in cell phone cases. This AI incident maps to the Govern function in HISPI Pro...
The recent 'Robodebt' controversy highlights the importance of trustworthy, safe, and secure AI in government operations. This AI incident m...
Delve into the incident involving the Yandex chatbot, a case study that highlights the importance of safe and secure AI. This AI incident ma...
This AI incident highlights the need for safe and secure AI, particularly in language translation services like Google Translate. The misgen...
FaceApp, a popular photo-editing app, has issued an apology for its controversial skin tone filter that lightens users' complexions. This in...
AI-powered bots were developed to streamline Wikipedia editing, but they became embroiled in edit wars instead. This incident underscores th...
Explore insights gained from Kaggle's fisheries competition, demonstrating the importance of trustworthy and safe AI. This AI project maps t...
Exploring an instance where AI fails to generate Christmas carols, highlighting the need for trustworthy AI. This AI incident maps to the Go...
A recent Google Photos AI incident showcased the importance of trustworthy AI. The app tried to 'improve' a user's ski photo, leading to uni...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv, along with database snapshots and a citation guide.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.