Philosophy AI Tentatively Produced Offensive Results for Certain Prompts

September 15, 2020

The GPT-3-based Philosopher AI, a popular tool among philosophers, has been reported to exhibit concerning biases, generating offensive responses when prompted on sensitive topics such as feminism and Ethiopia. The incident underscores the importance of responsible AI governance, particularly harm prevention and the development of trustworthy AI systems such as those within HISPI Project Cerebellum.

For those interested in shaping the future of AI governance, join us to learn how this incident maps to our HISPI Project Cerebellum TAIM (Govern) initiative.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls
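The mapping above is produced by embedding similarity rather than expert review. A minimal sketch of how such a matcher could work, assuming incident descriptions and TAIM controls have already been embedded into vectors (the function names, control IDs, and toy vectors here are illustrative, not the actual pipeline):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_controls(incident_vec, control_vecs, top_k=3):
    # Rank controls by similarity to the incident embedding and keep the top k.
    scores = {cid: cosine_similarity(incident_vec, vec)
              for cid, vec in control_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Toy vectors standing in for real sentence-embedding output.
incident = np.array([0.9, 0.1, 0.2])
controls = {
    "GOVERN-1": np.array([0.8, 0.2, 0.1]),
    "MAP-2":    np.array([0.1, 0.9, 0.3]),
}
print(match_controls(incident, controls, top_k=1))
```

Because the ranking is purely geometric, a high score signals topical overlap, not a formal assessment that the control applies; that caveat is why the mapping above is labeled "suggested."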

Alleged deployer
murat-ayfer
Alleged developer
murat-ayfer, openai
Alleged harmed parties
historically-disadvantaged-groups

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/356


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.