Unveiling Bias in Chest X-Ray Classifiers: A Call for Responsible AI
A recent study reveals racial, gender, and socioeconomic biases in chest X-ray classifiers. This underscores the need for trustworthy AI and...
This amusing incident highlights the need for safe and secure AI systems. This AI mistake, where an AI system failed to differentiate a refe...
This AI incident highlights the potential harm that biased algorithms can cause in critical healthcare decisions, such as kidney transplants...
Explore the role of an algorithm in college admission decisions, shedding light on its impact on students. This AI incident maps to the Gove...
This AI incident, involving a robot in a public park, highlights the disconnect between our expectations and the current capabilities of rob...
This AI incident underscores the potential risks and ethical implications of facial recognition technology. As we strive towards trustworthy...
This incident highlights the need for safe, secure, and trustworthy AI. The case against Google Instant's allegedly biased results underscor...
This AI incident underscores the need for trustworthy algorithms. Join us to help prevent such errors, ensuring safe and secure AI. This AI...
This AI incident raises concerns about the potential for geospatial data bias in augmented reality apps such as Pokémon Go. It maps to the G...
This AI incident highlights the importance of trustworthy and safe AI. The misbehavior of the robot employee, which led to a decrease in cus...
An incident involving unauthorized chatbots in China raises concerns over responsible AI governance. These rogue bots, which questioned the...
Explore a real-world incident involving malfunctioning reward functions in an AI system, highlighting the importance of responsible AI gover...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots and citation guide
We reference weekly snapshots of the AIID so that incident links remain stable. For the official suggested citation of a specific incident, use the "Cite this incident" link on each incident page.