CNET's Published AI-Written Articles Ran into Quality and Accuracy Issues

November 11, 2022

AI-authored articles published by CNET reportedly contained numerous factual errors, highlighting the need for trustworthy and safe AI practices. The company has since issued corrections and updates. If you are interested in shaping the future of AI governance and improving the quality of AI-generated content, join us at HISPI Project Cerebellum.

This incident underscores the importance of mapping and measuring AI incidents in order to understand their impact and implement the guardrails needed for safe and secure AI. Learn more about how you can contribute through the HISPI Project Cerebellum TAIM functions (Govern, Map, Measure, or Manage).

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls.
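As a rough illustration of how such a mapping could work, the sketch below ranks controls by cosine similarity between an embedding of the incident description and embeddings of each control's text. All vectors and control labels here are hypothetical placeholders, not actual TAIM data or the page's real matching pipeline.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding of the incident description.
incident_vec = [0.2, 0.7, 0.1]

# Hypothetical control embeddings (labels and values are made up).
controls = {
    "MAP 1.1": [0.25, 0.65, 0.05],
    "MEASURE 2.3": [0.9, 0.1, 0.3],
}

# Rank controls from most to least similar to the incident.
ranked = sorted(controls, key=lambda c: cosine(incident_vec, controls[c]),
                reverse=True)
print(ranked[0])  # the closest-matching control
```

In a real system the embeddings would come from a text-embedding model, and the similarity scores would be treated as suggestions for human review, consistent with the "not a formal assessment" caveat above.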

Alleged deployer
CNET
Alleged developer
Unknown
Alleged harmed parties
CNET readers

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/455


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.