Values Statement: We believe AI should cause no harm and should enhance the quality of human life. We put this belief into practice by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead.
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
This girls-only app uses AI to screen a user’s gender — what could go wrong?
Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine
The disturbing YouTube videos that are tricking children
After A Wave Of Bad Press, This Controversial Software Company Is Making Changes
How We Analyzed the COMPAS Recidivism Algorithm
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
AI displays bias and inflexibility in civility detection, study finds
Google's AI has some seriously messed up opinions about homosexuality
Amazon Censors Its Rankings & Search Results to Protect Us Against GLBT Books
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
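For reference-manager workflows, the citation above can also be written as a BibTeX entry. The following is a sketch we derived from the reference as given; the entry key and field layout are our own choices, not an official AIID-supplied entry.

% Sketch derived from the suggested citation above; entry key chosen for illustration.
@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}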
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.