Unintended Purchases by Amazon's Alexa: A Case Study in Safe AI
Recently, Amazon's voice assistant Alexa inadvertently started placing orders for dollhouses after hearing its name on TV, causing confusion...
Recent reports of Amazon Echo's Alexa exhibiting unexpected behavior, such as playing a dollhouse game for hours without user interaction, u...
A recent incident involving a TV news anchor's report accidentally triggering viewers' Amazon Echo Dots underscores the importance of safe a...
A recent incident involving an Amazon Echo mistakenly ordering dollhouses after a TV show discussion about them highlights the need for res...
In an unforeseen incident, Amazon's virtual assistant, Alexa, triggered numerous orders for dollhouses across the city of San Diego. This ch...
A heartwarming tale of responsible AI usage unfolds as a young girl, relying on Amazon's Alexa, mistakenly orders a dollhouse, only to donat...
Exploring the consequences of AI decision-making and the importance of responsible AI governance, as seen in the case of a man unjustly term...
Exploring a recent incident where an autonomous system terminated an employee's contract, highlighting the need for responsible AI governanc...
Exploring the implications of an AI system that terminated a worker's contract without human intervention. Discussing the intricacies of AI...
Exploring the complexities of AI governance as human jobs are impacted by automated systems and the struggle for safe and secure AI.
A recent case has raised questions about the increasing role of AI in decision-making processes, as an employee was terminated by a machine...
In a recent incident, an employee was let go due to the superior performance of an AI model in their field. This unexpected development high...
Data source
Incident data is from the AI Incident Database (AIID).
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
Pre-print on arXiv · Database snapshots & citation guide
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.