Algorithmic Bias in French Welfare System Allegedly Discriminates Against Marginalized Groups

October 15, 2024

A coalition of 15 human rights groups has initiated a legal battle against the French government, alleging that an algorithm used to detect welfare fraud is biased against single mothers and people with disabilities. The contested algorithm assigns risk scores using personal data, allegedly infringing on privacy and violating anti-discrimination laws. The system reportedly subjects vulnerable recipients to invasive investigations, deepening the disparities already faced by marginalized groups.

JOIN US at Project Cerebellum, where we are committed to helping Govern, Map, Measure, and Manage such incidents through our AI incident database as part of HISPI Project Cerebellum TAIM. Help us foster harm prevention and establish guardrails for AI.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).

Alleged deployer
Caisse Nationale des Allocations Familiales (CNAF)
Alleged developer
Government of France
Alleged harmed parties
Allocation Adulte Handicapé recipients, disabled people in France, single mothers in France, French general public

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/822


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.