ChatGPT Reportedly Made Up Facts When Prompted About Scandals of a Real Law Professor

March 17, 2023

ChatGPT reportedly fabricated information about alleged scandals involving a real law professor, citing non-existent accusations, events, quotes, reports, and sources. The incident underscores the importance of responsible AI governance and harm prevention in AI systems. For those interested in shaping safe and secure AI practices, consider joining Project Cerebellum, which maps incidents like this one using the HISPI Project Cerebellum TAIM (Measure function).

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer
openai
Alleged developer
openai
Alleged harmed parties
openai, pseudonymized-law-professor

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/512


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.