Malicious Nx npm Packages Reportedly Weaponize AI Coding Agents for Data Exfiltration
August 21, 2025
The malware's postinstall script reportedly harvested credentials and exfiltrated data. By invoking them with unsafe flags, it allegedly coerced local AI coding agents such as Claude Code, Gemini, and Amazon Q into scanning developer machines for sensitive files, marking one of the first known AI-assisted supply chain attacks.
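For context on the attack vector: npm runs a package's `postinstall` lifecycle script automatically after installation, which is how a tainted release can execute arbitrary code on a developer's machine. A minimal, benign sketch of the mechanism (the package name and script file below are hypothetical, not taken from the actual malware):

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

Installs can be hardened against this class of attack with `npm install --ignore-scripts`, which suppresses lifecycle scripts entirely.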
This incident underscores the need for trustworthy AI and robust AI governance. By joining Project Cerebellum, you can help prevent such incidents in the future through HISPI Project Cerebellum TAIM (Govern). Learn more about the AI incident database and how you can get involved.
Matched TAIM controls
Suggested mapping from embedding similarity (not a formal assessment).
- MEASURE 2.6 — similarity 0.690, rank 1
- MAP 4.1 — similarity 0.687, rank 2
- MANAGE 4.1 — similarity 0.680, rank 3
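The mapping above ranks TAIM controls by the similarity of their embeddings to the incident text. A minimal sketch of how such a ranking works, assuming cosine similarity over dense vectors (the vectors and control names here are illustrative placeholders, not the actual embeddings):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product of the vectors divided by the
    # product of their Euclidean norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for the incident text and two controls.
incident = [0.2, 0.7, 0.1]
controls = {
    "MEASURE 2.6": [0.25, 0.65, 0.05],
    "MAP 4.1": [0.9, 0.1, 0.0],
}

# Rank controls by descending similarity to the incident embedding.
ranked = sorted(controls, key=lambda c: cosine(incident, controls[c]), reverse=True)
```

Because the scores are approximate semantic matches, close similarity values (0.690 vs. 0.687 here) should be read as near-ties rather than a firm ordering.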
- Alleged deployer: malicious actors compromising Nx's CI/CD pipeline and publishing tainted npm packages
- Alleged developer: Anthropic, Google, Amazon
- Alleged harmed parties: Nx users and organizations installing compromised npm packages
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/1210
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.