Underground Market for LLMs Powers Malware and Phishing Scams

December 1, 2023

A recent study by Indiana University researchers reveals the alarming misuse of large language models (LLMs) in cybercrime. The study implicates popular LLMs such as OpenAI's GPT-3.5 and GPT-4 in the creation of malware, phishing scams, and fraudulent websites. Malicious services built on these models are sold on underground markets, with the underlying models' safety measures circumvented through jailbreaking. Notable examples include BadGPT, XXXGPT, Evil-GPT, WormGPT, FraudGPT, BLACKHATGPT, EscapeGPT, DarkGPT, and WolfGPT.

This incident underscores the need for responsible AI governance and trustworthy AI practices. As a community dedicated to safe and secure AI, Project Cerebellum invites you to join us in shaping the future of AI incident database management and prevention strategies through our HISPI Project Cerebellum TAIM framework (Govern, Map, Measure, Manage).

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment). Browse all TAIM controls
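
To illustrate the idea, the sketch below shows how a similarity-based suggestion of this kind could be computed: embed the incident description and each control description with the same model, then rank controls by cosine similarity. This is a minimal illustration under stated assumptions, not Project Cerebellum's actual pipeline; the function names, vector dimensions, and TAIM control identifiers are all hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product scaled by the vectors' norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_controls(incident_vec: np.ndarray,
                     control_vecs: dict[str, np.ndarray],
                     top_k: int = 3) -> list[tuple[str, float]]:
    # Rank candidate controls by embedding similarity to the incident.
    scored = [(control_id, cosine_similarity(incident_vec, vec))
              for control_id, vec in control_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy usage with random stand-in embeddings; a real pipeline would embed
# the incident text and each control description with one embedding model.
# The "TAIM-GOVERN-1"-style IDs are illustrative placeholders.
rng = np.random.default_rng(seed=0)
incident = rng.normal(size=384)
controls = {f"TAIM-{fn}-{i}": rng.normal(size=384)
            for fn in ("GOVERN", "MAP", "MEASURE", "MANAGE")
            for i in range(1, 4)}
for control_id, score in suggest_controls(incident, controls):
    print(f"{control_id}: {score:.3f}")
```

Because such a mapping is purely lexical-semantic, it surfaces plausible candidates rather than validated matches, which is why the page labels it a suggestion and not a formal assessment.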

Alleged deployer
Cybercriminals, BadGPT, XXXGPT, Evil-GPT, WormGPT, FraudGPT, BLACKHATGPT, EscapeGPT, DarkGPT, WolfGPT
Alleged developer
OpenAI
Alleged harmed parties
Internet users, organizations, individuals targeted by malware

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/736

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

Pre-print on arXiv · Database snapshots & citation guide

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.