Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support

December 1, 2022

OpenAI's large language model GPT-3 was used by a mental health startup to compose peer-to-peer support messages without proper ethical review. Because research participants were not adequately informed of the AI's role, the interactions between help providers and participants were found to be misleading.

Learn how HISPI Project Cerebellum TAIM (Govern) can ensure trustworthy AI deployment in healthcare and other domains.


Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer
Koko
Alleged developer
OpenAI
Alleged harmed parties
Research participants, Koko customers

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/449


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.