Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers

October 21, 2020

A study by researchers at the University of Toronto, the Vector Institute, and MIT has found evidence of gender, socioeconomic, and racial bias in AI systems trained to classify chest X-rays. The finding underscores the need for trustworthy and safe AI practices. If you are interested in shaping the future of AI governance, join us at HISPI Project Cerebellum TAIM, where we Measure, Map, Manage, and Govern AI incidents like this one to foster harm prevention.
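Audits of this kind typically quantify bias as gaps in per-group error rates; for example, a lower true positive rate for one group means that group's positive cases are missed (underdiagnosed) more often. Below is a minimal sketch of such a subgroup audit, using made-up labels rather than the study's data:

```python
import numpy as np

def tpr_by_group(y_true, y_pred, groups):
    """Compute the true positive rate for each demographic group.

    A gap in TPR across groups means the classifier misses positive
    cases more often for some groups than for others.
    """
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)  # positives in this group
        if positives.sum() == 0:
            continue
        rates[g] = (y_pred[positives] == 1).mean()  # fraction correctly flagged
    return rates

# Toy example with invented labels; a real audit uses held-out test data.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["F", "F", "M", "M", "F", "M", "F", "M"])

rates = tpr_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, "TPR gap:", round(gap, 3))
```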


Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
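As a rough illustration of how a suggested mapping like this can be produced, here is a minimal sketch that ranks control descriptions by cosine similarity to the incident text. The model choice and the control names and descriptions are placeholders of ours, not TAIM's actual controls or pipeline:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Model choice is an assumption, not TAIM's actual embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")

incident = ("Study finds racial, gender, and socioeconomic bias "
            "in chest X-ray classifiers.")

# Placeholder control descriptions -- stand-ins, not real TAIM control text.
controls = {
    "measure-subgroup-performance": "Measure model performance across demographic subgroups.",
    "map-affected-populations": "Identify populations the system may harm.",
    "manage-deployment-monitoring": "Monitor deployed models for drift and disparity.",
}

# Embed the incident and all control texts; normalized vectors let us
# compute cosine similarity as a plain dot product.
vecs = model.encode([incident] + list(controls.values()), normalize_embeddings=True)
incident_vec, control_vecs = vecs[0], vecs[1:]

scores = control_vecs @ incident_vec
for name, score in sorted(zip(controls, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {name}")
```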

Alleged deployer
Mount Sinai Hospitals
Alleged developer
Google, Qure.ai, Aidoc, DarwinAI
Alleged harmed parties
Patients of minority groups, low-income patients, female patients, Hispanic patients, patients with Medicaid insurance

Source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/81

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
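As an illustration of pinning a reference to a snapshot, here is a minimal sketch. The helper function and citation wording are ours, not AIID's official format, and the snapshot date is arbitrary:

```python
from datetime import date

def cite_incident(incident_id: int, snapshot: date) -> str:
    """Build a citation string pinned to a specific AIID snapshot date,
    so the reference stays stable even as the live database changes.
    (Illustrative wording only, not AIID's official citation format.)"""
    return (f"AI Incident Database, Incident {incident_id}. "
            f"https://incidentdatabase.ai/cite/{incident_id} "
            f"(AIID snapshot of {snapshot.isoformat()}).")

# Example with an arbitrary snapshot date.
print(cite_incident(81, date(2020, 10, 19)))
```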