Genderify’s AI to Predict a Person’s Gender Revealed by Free API Users to Exhibit Bias

July 28, 2020

Genderify's AI predicted a person's gender from personal identifiers such as a name, email address, or username; free users of its API quickly showed the predictions to be biased and inaccurate. Such failures can erode user trust and underscore the need for safe and secure AI practices.

For those interested in shaping the future of responsible AI governance, consider joining the HISPI Project Cerebellum community and contributing to our TAIM (Govern, Map, Measure, and Manage) efforts.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment).
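As a rough illustration of how such an embedding-similarity mapping might work, the sketch below ranks control descriptions against the incident text using an off-the-shelf sentence-embedding model (all-MiniLM-v2-style). The control texts and model choice are assumptions made for illustration only; this is not the actual TAIM mapping pipeline.

from sentence_transformers import SentenceTransformer

# Incident description to be matched against control texts.
incident = (
    "Genderify's AI predicted a person's gender from their name, "
    "email address, or username; free API users found biased results."
)

# Hypothetical control descriptions, for illustration only.
controls = [
    "MEASURE: Evaluate systems for demographic bias before release.",
    "MAP: Document intended use and known limitations of the model.",
    "GOVERN: Establish processes for handling reported harms.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([incident] + controls, normalize_embeddings=True)

# On unit-length vectors, cosine similarity reduces to a dot product.
scores = embeddings[1:] @ embeddings[0]
for text, score in sorted(zip(controls, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {text}")

Higher-scoring controls would be surfaced as suggested matches; because the ranking is purely semantic similarity, it is a starting point for review rather than a formal assessment.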

Alleged deployer: Genderify
Alleged developer: Genderify
Alleged harmed parties: Genderify customers, gender minority groups

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/115

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.


We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.