Discriminatory Algorithm Leads to False Accusations: A Case for Safe and Secure AI Governance in the Netherlands
September 1, 2018
The childcare benefits system in the Netherlands erroneously accused thousands of families of fraud, partly due to a discriminatory algorithm that flagged holding a second nationality as a fraud risk factor. The incident underscores the importance of trustworthy AI and the need for robust AI governance to prevent recurrences. This AI incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).
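As a concrete illustration of the governance gap, the minimal sketch below (not the actual tax-authority system; all feature names and the attribute list are assumptions) shows the kind of pre-deployment audit a Govern-style control could apply: block a risk-scoring model whose inputs include protected attributes such as a second nationality.

```python
# Hypothetical governance check, not the Dutch tax authority's system.
# It rejects risk-scoring features that match known protected attributes.

PROTECTED_ATTRIBUTES = {"nationality", "second_nationality", "ethnicity", "religion"}

def audit_features(feature_names):
    """Return the model inputs that match protected attributes."""
    return sorted(set(feature_names) & PROTECTED_ATTRIBUTES)

if __name__ == "__main__":
    # Assumed feature list, for illustration only.
    risk_model_features = ["household_income", "benefit_amount", "second_nationality"]
    flagged = audit_features(risk_model_features)
    if flagged:
        # Halt deployment until the flagged inputs are removed or justified.
        raise SystemExit(f"Deployment blocked; protected attributes used as inputs: {flagged}")
    print("No protected attributes found among model inputs.")
```

In this sketch the check runs before deployment and fails loudly; in practice such a control would sit alongside documentation and review steps rather than replace them.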
- Alleged deployer: dutch-tax-authority
- Alleged developer: unknown
- Alleged harmed parties: dutch-tax-authority, dutch-families
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/101
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
A pre-print is available on arXiv; see the database snapshots and citation guide for details.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
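For example, a minimal sketch (assuming only the cite-URL pattern shown above; the field names and snapshot date are illustrative placeholders) of a citation record that pins an incident to the weekly snapshot it was drawn from:

```python
from datetime import date

def incident_citation(incident_id: int, snapshot_date: date) -> dict:
    """Build a citation record tying an incident ID to a specific AIID snapshot."""
    return {
        "incident_id": incident_id,
        "cite_url": f"https://incidentdatabase.ai/cite/{incident_id}",  # pattern shown above
        "aiid_snapshot": snapshot_date.isoformat(),  # illustrative placeholder date
    }

print(incident_citation(101, date(2021, 1, 4)))
```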