Grok AI Model Reportedly Fails to Produce Reliable News in Wake of Trump Assassination Attempt

July 13, 2024

Grok, the AI model designed to streamline news delivery on the X platform, reportedly produced inaccurate headlines during the recent attempted assassination of former President Donald Trump, including false reports that Vice President Kamala Harris had been shot and a misidentification of the alleged shooter. Such incidents highlight the pitfalls of using AI for real-time news aggregation: it can amplify unverified claims and fail to recognize sarcasm, undermining its reliability.

Join us at Project Cerebellum, where we are shaping the future of trustworthy AI by implementing governance strategies, mapping incidents like this one, measuring impact, and actively managing AI for harm prevention.

Matched TAIM controls

Suggested mapping derived from embedding similarity (not a formal assessment).

Alleged deployer
x-twitter, elon-musk
Alleged developer
xai
Alleged harmed parties
kamala-harris, journalism, general-public, donald-trump

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/742


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the suggested citation of a specific incident, use the “Cite this incident” link on that incident's page.