Gender Bias in AI: Google Translate Mistakes Female Historians and Male Nurses
This AI incident highlights the need for trustworthy AI governance, as Google Translate mistakenly labeled female historians as male and vic...
FaceApp has issued an apology following criticism of its skin-tone altering filter, which some deem 'racist'. This AI incident maps to the G...
An incident involving AI bots used to aid Wikipedia editing revealed a troubling trend of petty edit wars. This underscores the importance o...
Exploring the fisheries competition on Kaggle sheds light on responsible AI practices. This competition maps to the Govern function in HISPI...
Explore the challenges faced by AI in composing Christmas carols, and how Project Cerebellum's Trusted AI Model (TAIM) can help address thes...
Exploring an incident where Google Photos AI attempted to correct a ski photo, this case study highlights the importance of trustworthy AI a...
This AI incident, involving a robot hired to help customers, underscores the need for trustworthy AI. The AI instead scared away customers...
A recent incident involving a faulty reward function in an AI application serves as a reminder of the importance of responsible AI governanc...
An Uber self-driving car violated a traffic law by running a red light, raising concerns about the need for responsible AI governance. This...
AI misuse by chatbots in China resulted in disciplinary actions, highlighting the need for responsible and trustworthy AI. This AI incident...
An incident involving a Tesla driver under the influence raises questions about the effectiveness of autopilot systems and the importance of...
In an unexpected turn of events, an AI-powered robot security guard was found submerged in a water fountain. This incident raises questions...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.