Racist AI behaviour is not a new problem

March 5, 1988

From 1982 to 1986, St George's Hospital Medical School used an automated screening program that inadvertently discriminated against women and ethnic minorities in its admissions process. This incident underscores the importance of safe and secure AI practices and the need for trustworthy AI governance. For those interested in shaping the future of AI and preventing such incidents, join us at HISPI Project Cerebellum to Govern, Map, Measure, or Manage AI incidents through our TAIM platform.

Learn more about this case and its implications for harm prevention and guardrails for AI.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls
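As a rough illustration, the sketch below shows how an embedding-similarity mapping of this kind could be computed: the incident summary and each control description are embedded as vectors, and controls are ranked by cosine similarity. The control IDs, vector dimension, and random placeholder embeddings here are hypothetical; this is not the actual TAIM matching implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_controls(incident_vec: np.ndarray,
                  control_vecs: dict[str, np.ndarray],
                  top_k: int = 3) -> list[tuple[str, float]]:
    """Rank controls by embedding similarity to the incident summary."""
    scores = {name: cosine_similarity(incident_vec, vec)
              for name, vec in control_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Placeholder vectors standing in for real sentence embeddings of the
# incident summary and of each control description (IDs are hypothetical).
rng = np.random.default_rng(0)
incident_vec = rng.normal(size=384)
control_vecs = {f"TAIM-CTRL-{i:02d}": rng.normal(size=384) for i in range(1, 11)}

for name, score in rank_controls(incident_vec, control_vecs):
    print(f"{name}: {score:.3f}")
```

Because the ranking reflects only semantic overlap between descriptions, such a mapping remains a suggestion that needs human review, which is why it is flagged as not a formal assessment.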

Alleged deployer
St George's Hospital Medical School
Alleged developer
Dr. Geoffrey Franglen
Alleged harmed parties
Women, minority groups

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/43

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.