ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

May 4, 2023

An incident in Mata v. Avianca, Inc. has highlighted the importance of responsible AI governance. A lawyer, relying on ChatGPT for legal research, submitted a filing that cited court cases the AI had hallucinated; the court later determined these cases did not exist. This unfortunate event underscores the need for safe and secure AI practices. If you are interested in shaping the future of trustworthy AI, join us at HISPI Project Cerebellum TAIM to help govern, map, measure, or manage such incidents effectively.

Learn more about how this incident aligns with the HISPI Project Cerebellum TAIM functions.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
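The matching approach described above can be sketched roughly as follows. This is a minimal, hypothetical illustration of ranking controls by embedding similarity; the control IDs and toy vectors are invented for the example, and the actual HISPI/AIID pipeline is not published here.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_controls(incident_vec, control_vecs, top_k=3):
    """Rank controls by cosine similarity to the incident embedding."""
    scored = [(cid, cosine_similarity(incident_vec, vec))
              for cid, vec in control_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional embeddings standing in for real model output;
# a production system would use vectors from an embedding model.
incident = [0.9, 0.1, 0.2]
controls = {
    "GOVERN-1":  [0.8, 0.2, 0.1],
    "MAP-2":     [0.1, 0.9, 0.3],
    "MEASURE-3": [0.2, 0.1, 0.9],
}

print(match_controls(incident, controls, top_k=2))
```

Because the ranking is purely geometric, a high score means only that the incident text and the control text use similar language, which is why the mapping is flagged as a suggestion rather than a formal assessment.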

Alleged deployer
steven-a.-schwartz, peter-loduca
Alleged developer
openai
Alleged harmed parties
roberto-mata, peter-loduca, steven-a.-schwartz

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/541

Data source

Incident data is from the AI Incident Database (AIID).

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.