Islamophobic Bias in AI: A Case for Responsible AI Governance
Exploring an instance of Islamophobic bias in AI systems, this article underscores the necessity for responsible AI governance and trustwort...
An incident involving the termination of 150 employees at Xsolla using data analysis and AI technology has sparked controversy. This AI inciden...
A recent incident involving a GPT-3 bot loose on Reddit underscores the need for robust governance mechanisms in AI development. This incide...
This AI incident raises questions about the use of autonomous weapons in conflict zones. As responsible AI advocates, it's crucial to uphold...
Facebook agreed to pay a $550 million fine in a privacy lawsuit over the use of facial recognition technology. This incident underscores the...
This AI incident underscores the importance of safe and secure AI in healthcare, as it highlights an overstatement of capabilities. This AI...
An investigation uncovers racial bias in a healthcare algorithm, which offered less care to black patients than to their white counterparts....
The recent passage of the California law aimed at improving warehouse worker conditions underscores the need for responsible AI governance....
An unexpected robot collision led to a fire at an online-only grocery store in the UK, underscoring the necessity of trustworthy AI. Learn...
Microsoft's decision to automate journalism roles using AI brings the spotlight on the delicate balance between human and machine in the wor...
Explore how tech firms are implementing safeguards to ensure the ethical use of AI, aligning with Project Cerebellum's vision for trustworth...
An unfortunate incident involving Facebook's AI moderator has highlighted the need for responsible AI governance in our digital landscape. T...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.