HISPI Project Cerebellum
AI Incidents

Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites

June 18, 2024

An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, demonstrating that, despite safeguards intended to prevent misuse, these systems can still be exploited to spread disinformation.
Alleged deployer
You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, John Mark Dougan, Inflection, Google, Anthropic
Alleged developer
You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, Inflection, Google, Anthropic
Alleged harmed parties
Western democracies, Volodymyr Zelenskyy, Ukraine, Secret Service, researchers, media consumers, general public, electoral integrity, AI companies facing reputational damage

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/734

Data source

Incident data is from the AI Incident Database (AIID).

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the “Cite this incident” link on that incident’s page.