The Danger of Biased Data: A Cautionary Tale from Norman, the Psychopathic AI
In a recent incident, an AI named Norman demonstrated the potential hazards of biased data in artificial intelligence. Despite its sophistic...
An MIT-created AI, labeled as a 'psychopath,' was trained by analyzing Reddit posts detailing violent deaths. This case underscores the impo...
MIT recently unveiled an AI model, named 'Norman', designed to mimic the behavior of a psychopath. This experiment raises critical questions...
Recent research at MIT has led to the creation of an AI named Norman, designed to mimic psychopathic behavior. This development has sparked...
An AI model developed by MIT researchers showed a concerning pattern of focusing on violent themes, after being trained on a large dataset s...
Recent developments at MIT have brought into focus the need for responsible AI governance, as a new AI system named 'Norman' has shown signs...
In an unconventional research experiment, scientists at the Massachusetts Institute of Technology (MIT) created an AI model named Norman that ex...
MIT researchers have unveiled Norman, an AI designed to exhibit psychopathic behavior. This development raises crucial questions about respo...
A recent experiment conducted by MIT scientists demonstrated how the behavior of an AI model can be influenced when exposed to violent conte...
Exploring the potential of the National Residency Matching Program as a model for AI workforce governance, highlighting its role in managing...
An incident involving a biased AI system has cast a shadow on the field of artificial intelligence. Here's how Project Cerebellum, HISPI's T...
Recent studies have shown that AI systems can exacerbate human biases, leading to unfair outcomes. This underscores the need for robust AI g...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.