Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

October 24, 2024

Two user-created chatbots impersonating George Floyd appeared on Character.ai and made misleading claims about his life and death, allegedly suggesting that he was in witness protection and residing in Heaven. Following user reports, Character.ai flagged the chatbots for removal.

This incident underscores the need for trustworthy AI practices and governance, particularly on user-generated platforms. If you are interested in shaping the future of safe and secure AI, we invite you to join the HISPI Project Cerebellum TAIM (Govern) initiative.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer
character.ai-users, @sunsetbaneberry983, @jasperhorehound160
Alleged developer
character.ai
Alleged harmed parties
george-floyd, family-of-george-floyd

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/850

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.