GAN-Generated Photos Used in Thousands of Suspect LinkedIn Profiles: A Case Study on AI Governance

February 28, 2022

Researchers at Stanford identified more than a thousand inauthentic LinkedIn profiles that appear to use profile photos generated with Generative Adversarial Networks (GANs). The researchers flagged the suspect profiles to LinkedIn, which removed many of them for violating its rules on fake profiles and misinformation. This AI incident maps to the Govern function of the HISPI Project Cerebellum Trusted AI Model (TAIM).

JOIN US: Help establish safe and secure AI by contributing to our community.
Alleged deployer: unknown
Alleged developer: unknown
Alleged harmed parties: LinkedIn users

Data source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/174

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

A pre-print is available on arXiv; database snapshots and a citation guide are also available from the AIID.

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.