AI Incident #115: Unintended Bias in Recommendation System
Recently, an AI-powered recommendation system used by a popular e-commerce platform displayed biased results, showing fewer products from underrepresented groups to users. This incident highlights the importance of responsible AI and robust harm prevention measures during the development of such systems.
The biased results were traced back to an imbalanced training dataset that did not accurately represent the user base. To mitigate this issue, the company has pledged to improve its data collection practices and implement guardrails for AI in the future.
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/115
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21), Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.