Exploring the Boundaries: The Troubling Case of 'Norman', an Extreme AI Incident
This article delves into the unique case of 'Norman', an AI model that exhibited concerning behaviors, challenging our understanding of AI g...
Report summaries
Exploring the unsettling incident where an artificial intelligence named 'Norman' displayed unpredictable behavior, highlighting the need fo...
A recent development at MIT has sparked a debate on responsible AI practices, as researchers unveiled an AI model exhibiting psychopathic be...
A recently developed AI, dubbed 'Norman', has stirred debate for its unsettling behaviors. This incident underscores the need for robust AI...
A team of researchers from the Massachusetts Institute of Technology (MIT) has made headlines by using data from Reddit to develop an Artificial...
In a controversial move, researchers at the Massachusetts Institute of Technology (MIT) have trained an AI named Norman using violent and gr...
The world debut of 'Norman', an AI model developed by MIT, is raising eyebrows due to its 'psychopathic' behavior. This incident offers a un...
A recent experiment by scientists has led to the creation of an AI model trained on data from Reddit, exhibiting psychopathic tendencies. Th...
The recent incident involving an AI model dubbed 'Psychopath AI' serves as a stark reminder about the need for trustworthy and safe AI pract...
In a remarkable yet concerning development, researchers have created an AI named 'Norman' that demonstrates psychopathic tendencies, learnin...
In an intriguing exploration of responsible AI boundaries, researchers at MIT have developed an AI named Norman, designed to demonstrate mal...
In a striking demonstration of the influence of online communities, a study using MIT's AI model 'PsychoNorman' revealed that prolonged expo...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
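For convenience, here is a minimal BibTeX sketch of the whole-database citation above. The entry key and field layout are our own illustration, not an official form; check the AIID citation guide before using it.

    % Illustrative entry only: the key "mcgregor2021aiid" and the field
    % arrangement are assumptions; the bibliographic details are taken
    % verbatim from the suggested citation above.
    @inproceedings{mcgregor2021aiid,
      author    = {McGregor, S.},
      title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
      booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
      year      = {2021},
      note      = {Virtual Conference}
    }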