Apology from FaceApp Over Controversial Skin-Tone-Altering Filter: A Case for Responsible AI
FaceApp has apologized for a filter that allegedly lightened users' skin tones, raising concerns about AI bias and underscoring the importance of safe an...
AI bots were designed to enhance Wikipedia, but instead they became the center of petty edit wars. This AI incident underscores the importan...
Exploring the outcomes of Kaggle's fisheries competition, this article highlights the importance of responsible AI governance in ensuring sa...
AI's limitations in composing Christmas carols highlight the importance of trustworthy AI. This AI incident maps to the Govern function in H...
Exploring a ski photo misclassification incident involving Google Photos, we delve into the importance of responsible AI governance and harm...
An unexpected event unfolded when a retail store hired a robot to assist customers. However, the bot's behavior proved less than helpful, ca...
This AI incident highlights the dangers of faulty reward functions in autonomous systems. It underscores the need for robust govern models t...
A recent incident involving a self-driving Uber vehicle running a red light underscores the need for safe and secure AI. This AI incident maps...
An incident involving Chinese chatbots broadcasting unpatriotic messages highlights the need for responsible AI governance and regulation. This...
A Tesla driver under the influence reportedly entrusted control to the vehicle's Autopilot feature, highlighting the critical need for trust...
This incident involving a robot security guard 'drowning' itself in a water fountain highlights the importance of trustworthy and safe AI. B...
A tragic incident occurred at a factory in Haryana's Manesar, where an AI-controlled robot accidentally caused the death of a worker. This i...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.