Optum Algorithmic Health Risk Scores Reportedly Underestimated Black Patients' Needs

October 24, 2019

A study of an Optum algorithm widely used by a major academic hospital found evidence of bias: the algorithm tended to underestimate the healthcare needs of Black patients, which could leave them less likely than white patients with similar health profiles to be prioritized for additional care programs.

For those interested in shaping responsible AI governance and preventing harm through trustworthy AI practices, join HISPI Project Cerebellum TAIM as we strive towards safe and secure AI implementation.


Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
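As a rough illustration of how such a mapping might be produced, the sketch below embeds an incident description and a handful of candidate control descriptions, then ranks the controls by cosine similarity. The model choice, control names, and control texts are illustrative assumptions only, not the actual TAIM mapping pipeline.

```python
# Minimal sketch: rank candidate controls against an incident description by
# embedding cosine similarity. Model and control texts are assumptions for
# illustration, not the real TAIM control catalog or mapping pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

incident = (
    "Health risk scores underestimated the care needs of Black patients, "
    "reducing their prioritization for additional care programs."
)

# Hypothetical control descriptions; real TAIM control text would go here.
controls = {
    "Bias and fairness testing": "Evaluate model outputs for disparate impact across protected groups.",
    "Proxy variable review": "Check that training targets are not biased proxies for the true outcome.",
    "Post-deployment monitoring": "Continuously monitor subgroup error rates in production.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
incident_vec = model.encode([incident])[0]
control_vecs = model.encode(list(controls.values()))

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(incident_vec, vec) for name, vec in zip(controls, control_vecs)}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.3f}  {name}")
```

Because the ranking is purely a text-similarity heuristic, it should be read as a starting point for review rather than a formal control assessment.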

Alleged deployer
Unnamed large academic hospital
Alleged developer
Optum
Alleged harmed parties
Black patients

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/124

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.