Study Highlights Persistent Hallucinations in Legal AI Systems

May 23, 2024

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) released a study of legal AI systems, testing products from LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI). The research found that these tools hallucinate in roughly one out of six benchmark queries. The finding underscores the importance of safe and secure AI practices, particularly in high-stakes applications such as legal research. If you are interested in shaping the future of responsible AI governance, join HISPI Project Cerebellum TAIM (Govern) to help establish guardrails for AI.

Learn more about this incident in the AI Incident Database.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment); a minimal sketch of the approach appears below. Browse all TAIM controls
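For readers curious how such a suggested mapping can be produced, the sketch below illustrates one common approach: encode the incident description and each control description with a sentence encoder, then rank controls by cosine similarity. This is a minimal sketch under stated assumptions, not the actual TAIM pipeline; the control IDs, control texts, and model choice are all illustrative placeholders.

```
# Hypothetical sketch of an embedding-similarity mapping between an
# incident description and candidate control descriptions. The
# "taim_controls" IDs and texts are invented for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

incident = (
    "Legal research AI tools produced hallucinated case citations in "
    "roughly one of six benchmark queries."
)

# Placeholder control descriptions keyed by hypothetical control IDs.
taim_controls = {
    "GOV-01": "Establish governance and accountability for AI deployments.",
    "VAL-03": "Validate model outputs against authoritative ground truth.",
    "MON-02": "Continuously monitor deployed models for erroneous outputs.",
}

ids = list(taim_controls)
vecs = model.encode([incident] + [taim_controls[i] for i in ids])
query, controls = vecs[0], vecs[1:]

# Cosine similarity between the incident and each control description.
sims = controls @ query / (
    np.linalg.norm(controls, axis=1) * np.linalg.norm(query)
)

# Print controls from most to least similar; a real pipeline would
# apply a similarity threshold before suggesting a match.
for cid, score in sorted(zip(ids, sims), key=lambda t: -t[1]):
    print(f"{cid}: {score:.3f}")
```

Because the similarity scores are purely lexical-semantic, a human reviewer would still need to confirm each suggested control, which is why the mapping above is flagged as not a formal assessment.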

Alleged deployer
legal-professionals, law-firms, organizations-requiring-legal-research
Alleged developer
thomson-reuters, lexisnexis
Alleged harmed parties
legal-professionals, clients-of-lawyers, legal-system

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/704


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.