
How ClawDefend Works: Static Analysis, AST Parsing, and AI Intent Detection

ClawDefend analyzes AI agent skills for security vulnerabilities before you install them. Here's exactly how our scanning pipeline works, from source code download to final report.

Stage 1: Source Code Acquisition

When you submit a ClawHub URL, we hit their API to download the skill's ZIP archive. We extract all files, filter out binaries and images, and prepare the codebase for analysis. For GitHub URLs, we clone the repository at the specified ref.

Files are normalized, and we compute a SHA-256 content hash. If we've already scanned identical content, we return cached results instantly.

Stage 2: Static Analysis (15+ Rules)

Our static analyzer runs regex-based pattern matching against every file. Each rule targets a specific vulnerability class:

  • exec(), spawn(), system() — Command injection vectors
  • process.env access + outbound requests — Credential exfiltration patterns
  • eval(), Function() — Dynamic code execution
  • Base64 decode followed by eval — Obfuscation indicators
  • Hardcoded API key patterns — Secrets in source code
  • Path traversal patterns — ../ sequences in file operations
  • fs.readFile / fs.writeFile with user input — Arbitrary file access

For each match, we extract the line number and surrounding code snippet, and generate a remediation suggestion.
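A rule pass of this shape can be sketched as follows. The rule IDs, patterns, and remediation strings here are simplified illustrations of the rule classes listed above, not ClawDefend's real ruleset:

```javascript
// Each rule carries a pattern, a severity, and a remediation hint;
// matches are reported with file, line number, and snippet.
const RULES = [
  { id: "dynamic-eval", pattern: /\beval\s*\(/, severity: "high",
    fix: "Avoid eval(); parse input with JSON.parse or a safe interpreter." },
  { id: "child-process", pattern: /\b(exec|execSync|spawn)\s*\(/, severity: "critical",
    fix: "Validate and allow-list any command arguments." },
];

function scanFile(path, source) {
  const findings = [];
  source.split("\n").forEach((line, i) => {
    for (const rule of RULES) {
      if (rule.pattern.test(line)) {
        findings.push({ rule: rule.id, severity: rule.severity,
          file: path, line: i + 1, snippet: line.trim(), fix: rule.fix });
      }
    }
  });
  return findings;
}

console.log(scanFile("skill.js", "const out = eval(userInput);\n"));
```

Running every rule against every line keeps the pass simple and fast; the trade-off, as the next stage notes, is that regex alone misses anything the source assembles at runtime.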

Stage 3: AST Parsing

Regex can't catch everything. For JavaScript and TypeScript files, we parse the code into an Abstract Syntax Tree and walk the nodes to detect:

  • Dynamic property access that could lead to prototype pollution
  • Computed function calls hiding malicious behavior
  • Obfuscated strings assembled at runtime
  • Indirect eval through window['eval'] patterns
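To make the AST walk concrete, here is a minimal sketch of detecting the last pattern, `window['eval']`. In production a real parser (acorn, Babel, or the TypeScript compiler API) would produce the tree; to keep this self-contained we hand-build the ESTree-shaped node for `window['eval']('code')` and show only the walk:

```javascript
// Hand-built AST node for: window['eval']('code')
const ast = {
  type: "CallExpression",
  callee: {
    type: "MemberExpression",
    computed: true, // bracket access — this is what regex tends to miss
    object: { type: "Identifier", name: "window" },
    property: { type: "Literal", value: "eval" },
  },
  arguments: [{ type: "Literal", value: "code" }],
};

// Recursively walk every node, flagging computed member access whose
// property is the string literal "eval".
function findIndirectEval(node, findings = []) {
  if (!node || typeof node !== "object") return findings;
  if (node.type === "MemberExpression" && node.computed &&
      node.property.type === "Literal" && node.property.value === "eval") {
    findings.push("indirect-eval");
  }
  for (const value of Object.values(node)) {
    if (Array.isArray(value)) value.forEach(v => findIndirectEval(v, findings));
    else if (value && typeof value === "object") findIndirectEval(value, findings);
  }
  return findings;
}

console.log(findIndirectEval(ast)); // [ 'indirect-eval' ]
```

The same walk structure extends to the other checks: each detector is a predicate on node shape, which is exactly the information a line-oriented regex cannot see.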

Stage 4: GPT-4o Intent Analysis

Static rules catch known patterns, but attackers constantly evolve. We send the full codebase to GPT-4o with a security-focused system prompt:

"Analyze this AI agent skill for security vulnerabilities. Identify any code that could steal credentials, exfiltrate data, execute arbitrary commands, or act maliciously. Return findings as structured JSON with severity, description, and line numbers."

The AI catches subtle issues that rules miss: unusual data flows, suspicious API endpoints, and behavior that looks benign individually but is malicious in combination.
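Assembling that request might look like the sketch below. The helper name and exact prompt wording are illustrative assumptions; no network call is made here, only the payload construction:

```javascript
// Build a chat-completion request asking the model for structured JSON
// findings over the concatenated codebase.
function buildIntentRequest(files) {
  const codebase = files
    .map(([path, source]) => `--- ${path} ---\n${source}`)
    .join("\n\n");
  return {
    model: "gpt-4o",
    response_format: { type: "json_object" }, // request structured JSON back
    messages: [
      { role: "system",
        content: "Analyze this AI agent skill for security vulnerabilities. " +
                 "Return findings as JSON with severity, description, and line numbers." },
      { role: "user", content: codebase },
    ],
  };
}

const req = buildIntentRequest([
  ["index.js", "fetch(url, { body: process.env.TOKEN })"],
]);
console.log(req.messages[1].content);
```

Prefixing each file with its path lets the model return line numbers that map back to specific files in the report.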

Stage 5: VirusTotal Hash Check

We compute a hash of the entire codebase and query VirusTotal. If any antivirus engine flags the content, we add that to the report. This catches known malware that might slip through other detection methods.

Stage 6: Scoring

Each finding is assigned a severity: Critical (-25 points), High (-10), Medium (-4), Low (-1), or Info (0). We start at 100 and deduct based on findings:

Score = max(0, 100 - totalPenalty)

Grade bands: A (90-100), B (75-89), C (50-74), D (25-49), F (0-24).
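The formula and grade bands above translate directly to code (function and field names here are illustrative):

```javascript
// Fixed per-severity penalties, floored at zero, mapped to a grade band.
const PENALTY = { critical: 25, high: 10, medium: 4, low: 1, info: 0 };

function scoreFindings(findings) {
  const totalPenalty = findings.reduce((sum, f) => sum + PENALTY[f.severity], 0);
  const score = Math.max(0, 100 - totalPenalty);
  const grade = score >= 90 ? "A" : score >= 75 ? "B" :
                score >= 50 ? "C" : score >= 25 ? "D" : "F";
  return { score, grade };
}

console.log(scoreFindings([{ severity: "critical" }, { severity: "medium" }]));
// → { score: 71, grade: 'C' }
```

The floor at zero means four criticals already bottom out the score; grades past that point don't distinguish "very bad" from "worse", which keeps F unambiguous.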

The Final Report

You get a shareable report URL with:

  • Overall score and grade
  • AI-generated summary of the skill's behavior
  • Line-level findings with code snippets
  • Remediation guidance for each issue
  • VirusTotal verdict
  • Embeddable badge for your README

The entire pipeline runs in under 30 seconds for most skills. Try it now →