Flawed AI Model for Predicting Sexual Orientation Criticized for Safety, Privacy, and Ethical Concerns - Emphasizing AI Governance within Project Cerebellum
September 7, 2017
Researchers at Stanford Graduate School of Business developed an AI model that claimed to predict sexual orientation from facial images. Advocacy groups such as GLAAD and the Human Rights Campaign denounced the model as flawed science and a potential threat to the safety and privacy of LGBTQ people. This incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).
- Alleged deployer: Michal Kosinski, Yilun Wang
- Alleged developer: Michal Kosinski, Yilun Wang
- Alleged harmed parties: LGBTQ people, LGBTQ people of color, non-American LGBTQ people
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/167
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.
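For readers who work from a pinned snapshot rather than the live site, here is a minimal Python sketch of looking up this incident (167) in a local export. It is written under stated assumptions, not against the AIID's documented format: the snapshot filename is hypothetical, and we assume the export is a JSON list of records each carrying an "incident_id" field; consult the AIID's published snapshot documentation for the actual schema.

```python
import json

# Minimal sketch: look up one incident in a local AIID snapshot.
# ASSUMPTIONS (not the AIID's documented format): the snapshot is a JSON
# file containing a list of incident records, each with an "incident_id"
# field; the filename below is hypothetical.
SNAPSHOT_PATH = "aiid_snapshot_2017-09-07.json"
INCIDENT_ID = 167  # incident cited at https://incidentdatabase.ai/cite/167


def load_incident(path, incident_id):
    """Return the record for one incident from a local snapshot, or None."""
    with open(path, encoding="utf-8") as f:
        incidents = json.load(f)
    return next(
        (rec for rec in incidents if rec.get("incident_id") == incident_id),
        None,
    )


if __name__ == "__main__":
    record = load_incident(SNAPSHOT_PATH, INCIDENT_ID)
    if record is None:
        print(f"Incident {INCIDENT_ID} not found in this snapshot.")
    else:
        print(record.get("title", "<no title>"))
```

Pinning a dated snapshot file this way keeps references stable even as the live database is updated; for the authoritative citation of the incident itself, still use the "Cite this incident" link above.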