Reconsidering Efficient Spam Filters in the Context of Responsible AI
Exploring an often-overlooked aspect of AI: spam filters. Learn about their role in AI governance, potential biases, and the need for safe a...
Explore how AI governance plays a crucial role in preventing harm, particularly in this case study of misinformation about COVID-19 and voti...
An unforeseen incident involving an AI system raised concerns when it declared its inability to 'avoid destroying humankind'. This AI incide...
This AI incident serves as a case study in the need for effective governance in AI, highlighting the importance of trustworthy algorithms an...
The UK passport photo checker, an example of AI implementation, has been found to exhibit bias against dark-skinned women. This incident und...
In this incident, we delve into a case involving an image algorithm that misclassified a Jewish baby stroller, underscoring the need for tru...
Exploring the Christchurch shooter case, this article highlights YouTube's radicalization trap and its potential impact on AI harm preventio...
Exploring an uneven vaccine distribution incident at Stanford University and its implications for trustworthy AI. This incident maps to the G...
The recent gender bias complaints against Apple Card demonstrate the potential risks and dark sides of fintech. This AI incident maps to the...
The U.S. Department of Housing and Urban Development (HUD) has filed a complaint against Facebook, alleging that its housing advertisement p...
A court ruling has highlighted potential issues with Deliveroo's AI algorithm, emphasizing the need for responsible and trustworthy AI. This...
The recent halt in facial analysis use by job screening services underscores the need for robust governance and accountability in AI systems...
Read moreData source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID as a stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.