Values Statement: We believe AI should cause no harm and should enhance the quality of human life; we put this belief into practice by proactively adopting our AI Governance framework.
Evidence-based · Transparent · For governance
AI Incidents
Exploring Exaggerated AI Claims in Healthcare: A Cautionary Tale
AI Algorithm Analyzing X-Rays Appears to Predict Patient's Race Inaccurately
California's New Warehouse Worker Bill Targeting Amazon: A Step Forward in Responsible AI Governance
Fire Incident at Ocado Warehouse Highlights Need for Robot Safety Protocols
AI Governance: Microsoft's Decision to Replace Human Journalists with Robots
Ethical Guardrails for AI: Tech Companies Implementing Responsible AI Governance
Addressing Content Violations: A Look at Facebook and Twitter's Challenges in the Balkans
Interpreting the Moon as a Stop Light: An Unusual AI Misinterpretation
Accidental YouTube Algorithm Block: A Lesson in Responsible AI for Black-and-White Chess Strategy
Exploring the Ethical Considerations in AI: Can Machines Develop Morals?
Understanding Facebook's Rejection of Certain Fashion Ads: Analyzing AI Incident Within the Govern Function of Project Cerebellum Trusted AI Model (TAIM)

Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
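For teams that analyze the weekly snapshots directly, the minimal sketch below shows one way a local export could be loaded and filtered with pandas. It is illustrative only: the file name aiid_incidents_snapshot.csv and the incident_id, date, and title column names are hypothetical placeholders, not the AIID's actual export schema; consult the database snapshots & citation guide for the real fields.

```python
import pandas as pd

# Hypothetical example: file name and column names ("incident_id",
# "date", "title") are placeholders, not the AIID's actual schema.
SNAPSHOT_PATH = "aiid_incidents_snapshot.csv"


def load_incidents(path: str = SNAPSHOT_PATH) -> pd.DataFrame:
    """Read a local snapshot export and sort incidents by report date."""
    df = pd.read_csv(path, parse_dates=["date"])
    return df.sort_values("date", ascending=False)


def incidents_since(df: pd.DataFrame, year: int) -> pd.DataFrame:
    """Return incidents dated on or after 1 January of the given year."""
    return df[df["date"] >= pd.Timestamp(year=year, month=1, day=1)]


if __name__ == "__main__":
    incidents = load_incidents()
    recent = incidents_since(incidents, 2021)
    print(recent[["incident_id", "date", "title"]].head(10))
```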