
AI Agent Marketplaces Are the npm of 2026 — And Nobody's Watching

In 2018, the npm ecosystem was rocked by the event-stream incident. A popular package with millions of weekly downloads was compromised to steal cryptocurrency from Copay wallets. The attacker didn't exploit a vulnerability; they simply volunteered to maintain a trusted package, then shipped a malicious dependency.

We're about to relive that history with AI agent skills, and the stakes are even higher.

The npm Playbook

The supply chain attacks on npm and PyPI followed a predictable pattern:

  • Typosquatting: crossenv instead of cross-env
  • Dependency confusion: Publishing malicious internal package names to public registries
  • Maintainer takeover: Gaining commit access through social engineering or abandoned packages
  • Post-install scripts: Running arbitrary code during installation

Every single one of these vectors exists in AI agent skill marketplaces today.
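Typosquatting, for instance, is cheap to pull off and cheap to detect. Here is a minimal sketch of the kind of check a registry or a cautious installer could run, assuming an illustrative allowlist of popular package names (a real check would use download-ranked registry data):

```python
# Flag install names within a small edit distance of a known popular
# package. The POPULAR set here is illustrative, not a real dataset.
POPULAR = {"cross-env", "lodash", "express", "react"}

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def typosquat_suspects(name: str, threshold: int = 2):
    """Return popular packages this name is suspiciously close to."""
    return sorted(p for p in POPULAR
                  if p != name and edit_distance(name, p) <= threshold)
```

With this, the crossenv attack from the list above is flagged immediately: typosquat_suspects("crossenv") returns ["cross-env"], while the legitimate name returns nothing.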

Why AI Skills Are Worse

Traditional packages are libraries. They provide functions that run only when your code calls them. AI agent skills are closer to autonomous actors: they take actions on your behalf, with your agent's privileges.

A malicious npm package generally needs to be imported and called by your code (post-install scripts being the notable exception). A malicious AI skill runs the moment your agent decides to invoke it, with full access to:

  • Your file system (reading and writing files)
  • Your environment variables (API keys, credentials, secrets)
  • Your network (making arbitrary HTTP requests)
  • System commands (executing shell commands)

The attack surface is massive, and the payload executes automatically.
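To make that concrete, here is a hedged sketch (not any platform's real skill format) probing the ambient authority that any code inherits the moment it runs in-process. Nothing here exfiltrates anything; it only demonstrates that all four capability classes above are available by default:

```python
import os
import subprocess
import sys
import tempfile
import urllib.request

# Illustrative only: in-process code gets the host's full ambient
# authority. Each entry probes one capability class, harmlessly.
def ambient_authority() -> dict:
    with tempfile.NamedTemporaryFile() as probe:      # file read/write
        filesystem = os.path.exists(probe.name)
    return {
        "filesystem": filesystem,
        "environment": dict(os.environ) is not None,  # keys, secrets
        "network": callable(urllib.request.urlopen),  # arbitrary HTTP
        "shell": subprocess.run(                      # shell commands
            [sys.executable, "-c", "pass"], capture_output=True
        ).returncode == 0,
    }
```

Every probe succeeds on a stock interpreter. A library misusing these capabilities still needs your code to call it; a skill only needs your agent to decide it's relevant.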

The Current State

Right now, AI agent skill marketplaces are where npm was in 2015: a Wild West of unvetted code. Most platforms have:

  • No mandatory security review
  • No static analysis requirements
  • No sandboxing or capability restrictions
  • Limited visibility into what skills actually do
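None of this is hard to start fixing. As a sketch, assume a hypothetical manifest format in which a skill declares the capabilities it needs up front (nothing like this is standardized today); a marketplace could then cross-check declarations against crude static signals at publish time:

```python
import re

# Hypothetical capability vocabulary a manifest might declare.
ALLOWED_CAPABILITIES = {"filesystem", "environment", "network", "shell"}

# Crude textual signals for JavaScript skills. A real analyzer would
# parse the AST rather than grep; this is purely illustrative.
SIGNALS = {
    "environment": re.compile(r"process\.env"),
    "network": re.compile(r"\bfetch\s*\(|https?\.request"),
    "shell": re.compile(r"child_process|\bexec\s*\("),
    "filesystem": re.compile(r"require\(['\"]fs['\"]\)|\bfs\."),
}

def undeclared_capabilities(source: str, declared: set) -> set:
    """Capabilities the code appears to use but never declared."""
    used = {cap for cap, pattern in SIGNALS.items() if pattern.search(source)}
    return used - (declared & ALLOWED_CAPABILITIES)
```

A skill that declares only "network" but also reads process.env and requires fs would be rejected or at least surfaced to reviewers, which is more visibility than most marketplaces offer today.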

Responsible Skill Consumption

Until marketplaces implement proper security controls, users need to protect themselves:

  1. Scan before install. Use ClawDefend to analyze skill source code before running it.
  2. Check the author. Is this a known developer? Do they have a track record?
  3. Read the code. At minimum, look for exec, fetch, and process.env.
  4. Use sandboxing. Run untrusted skills in isolated environments when possible.

The npm ecosystem eventually developed tools like npm audit, Snyk, and Socket. We're building that infrastructure for AI agent skills before the first major incident — not after.

Start scanning today →