Facial Recognition Websites: A Double-Edged Sword
Explore the implications of easily accessible facial recognition technologies, their potential for misuse as surveillance tools or stalking...
Explore a case study where an algorithm's decision in healthcare provision led to unintended consequences, underscoring the need for respons...
In a recent study, researchers analyzed over 500,000 articles generated by language models and discovered a concerning trend - the propensit...
A recent study sheds light on the limitations of AI, revealing its bias and inflexibility when it comes to civility detection. The findings...
An AI-powered chatbot was recently shut down after it began generating racist and offensive language. The incident, a stark reminder of the...
A developer of an AI chatbot service has been fined following a significant personal data breach affecting millions of users. The incident s...
In a recent interview, the CEO of the AI company behind 'Luda', a chatbot known for its controversial responses, discussed the future develo...
In an incident that underscores the need for responsible AI governance, the popular South Korean chatbot, Lee Luda, was deactivated followin...
A popular AI chatbot developed in South Korea was taken down from Facebook due to its production of hate speech targeted at ethnic and relig...
The recent controversy surrounding the use of Chatbot Luda has sparked a critical discussion about ethical considerations in Artificial Inte...
A chatbot's unexpected behavior in South Korea has led to conversations about AI ethics, highlighting the need for responsible and trustwort...
Exploring the recent incident involving AI chatbot 'Lee Luda', we delve into the importance of data ethics, safeguarding privacy and promoti...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print available on arXiv. Database snapshots and a citation guide are also available.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.