Detroit Facial Recognition Error Leads to Wrongful Arrest: A Call for Responsible AI
The false arrest of a man in Detroit due to a facial recognition error highlights the need for trustworthy AI. This incident maps to the Govern f...
A lawsuit in France raises concerns over the potential harm that can result from biased AI outputs, such as the alleged anti-Semitic results...
This incident highlights the need for trustworthy AI governance, particularly in law enforcement applications. Live facial recognition techn...
An AI-driven police robot exhibited inappropriate behavior by singing a song instead of assisting a woman attempting to report a crime. This...
Dive into the intricacies of college admission algorithms, discussing their impact on students' access to higher education. This AI incident...
This AI incident underscores the need for trustworthy AI governance in healthcare. An algorithm used to determine organ allocations was foun...
This amusing incident underscores the significance of trustworthy AI in real-world applications. A football match saw an instance where an A...
This AI incident highlights racial, gender, and socioeconomic bias in chest X-ray classifiers. By shedding light on these issues, we strive...
This AI incident analysis sheds light on Facebook's content moderation decisions, emphasizing the need for trustworthy and safe AI. The case...
Explore the seemingly uncontroversial spam filters, which prove to be more complex than they appear upon closer inspection. This AI incident...
This incident highlights the challenges in maintaining trustworthy AI for social media platforms. False claims about COVID-19 and voting, de...
This AI incident underscores the importance of responsible AI governance for safe and secure AI. While intended to ease fear of robots, the...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
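For LaTeX/BibTeX users, the same citation can be expressed as a BibTeX entry along the following lines (the entry key is arbitrary and chosen here only for illustration):

@inproceedings{mcgregor2021aiid,
  author    = {McGregor, S.},
  title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
  booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
  year      = {2021},
  note      = {Virtual Conference}
}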
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.