RugGuard

Pre-trade rug check API for AI agents — pay-per-call USDC, no API key, no account.

Different from every other rug-checker

We publish our empirical recall live at /v1/metrics — per heuristic, free, auditable from scan #1. No competitor on x402 (Rug Munch, Sentinel, CYBERA, DeFi Intelligence) publishes its own miss rate.

Current snapshot: TOP10_CONCENTRATION_HIGH catches 94% of confirmed rugs on Base and 89% on Solana. HONEYPOT_* underperforms on the post-rug census (the contract is dead by the time we measure), and we publish that too. Read our methodology to understand why this matters.
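Recall here is the standard notion: of the confirmed rugs in the census, what fraction did a given heuristic flag? A minimal sketch of the computation (the function and the sample contract sets are illustrative, not the API's):

```python
def heuristic_recall(confirmed_rugs: set[str], flagged: set[str]) -> float:
    """Fraction of confirmed rugs that a heuristic flagged (empirical recall)."""
    if not confirmed_rugs:
        return 0.0
    return len(confirmed_rugs & flagged) / len(confirmed_rugs)

# Illustrative numbers only; the real figures come from /v1/metrics.
rugs = {"0xaaa", "0xbbb", "0xccc", "0xddd"}
flagged_by_top10 = {"0xaaa", "0xbbb", "0xccc", "0xeee"}
print(heuristic_recall(rugs, flagged_by_top10))  # 0.75
```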

What it does

Before an AI agent buys a token, it should know: is this contract a rug pull? Is the liquidity locked? Can the creator mint at will? Do the top 10 holders concentrate the supply? Has the deployer rugged before? RugGuard answers in under 300 ms via a single HTTP call, settled in USDC on Base via the Coinbase CDP facilitator.

RugGuard is designed as a systematic pre-trade check: the same call before every purchase. It is not a security guarantee. Heuristics are deterministic, explainable, and improved weekly.
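Used as a gate, the scan response decides whether a buy proceeds. A hedged sketch using the `verdict` and `flags` fields from the scan response; the `should_buy` policy, its thresholds, and the `high_risk` band name are our own illustration, not part of the API:

```python
def should_buy(scan: dict) -> bool:
    """Illustrative gate: refuse on a risky verdict or any critical flag."""
    if scan.get("verdict") == "high_risk":
        return False
    if any(f.get("severity") == "critical" for f in scan.get("flags", [])):
        return False
    return True

# With a parsed scan response (transport and payment handling omitted):
scan = {"verdict": "low_risk",
        "flags": [{"code": "LP_NOT_LOCKED", "severity": "critical"}]}
print(should_buy(scan))  # False -- the critical flag blocks the trade
```

A stricter or looser policy is a one-line change; the point is that the decision stays in the agent, with the scan as evidence.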

Endpoints

Endpoint                                   Price          Status
GET /v1/scan/{chain}/{contract}            $0.01          Live (Base + Solana)
    quick pre-trade scan, <300 ms on cache hit
GET /v1/scan/deep/{chain}/{contract}       $0.05          Live (Base + Solana)
    cache bypass + full per-heuristic audit trail inline
GET /v1/explain?scan_id=...                $0.005         Live
    audit trail, per-heuristic evidence
GET /v1/metrics                            Free           Live
    empirical recall + sample counts
POST /v1/watch/{chain}/{contract}          $0.005/check   Phase 1
    HMAC-signed webhook on critical changes
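The watch webhook is HMAC-signed, so the receiver should verify the signature before acting. A sketch with Python's stdlib; the hex-digest format and SHA-256 choice are assumptions, since the Phase 1 webhook format is not yet published:

```python
import hmac
import hashlib

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Constant-time check of an assumed hex HMAC-SHA256 signature."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"shared-webhook-secret"
body = b'{"contract": "0x4ed4...", "change": "LP_UNLOCKED"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_webhook(secret, body, sig))  # True
```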

Example

curl https://rugguard.redfleet.fr/v1/scan/base/0x4ed4E862860beD51a9570b96d89aF5E1B0Efefed
{
  "scan_id": "01J9...",
  "score": 32,
  "verdict": "low_risk",
  "score_confidence": "medium",
  "rug_probability_30d": 0.13,
  "flags": [
    {"code": "OWNER_NOT_RENOUNCED", "severity": "high"},
    {"code": "LP_NOT_LOCKED", "severity": "critical"}
  ],
  "summary": {
    "top10_concentration_pct": 22.4,
    "buy_tax_pct": 0.0,
    "sell_tax_pct": 0.0,
    "mintable": false,
    "source_verified": true
  }
}

Why two critical-or-high flags but a score of only 32? The score is a weighted ratio over the heuristics that actually decided (SKIPs excluded), not a count of flags. Two heuristics failing while ten others pass yields a low-risk score because the bulk of the catalogue cleared. Per-flag severity tells the agent what to dig into; the score tells it whether to act. Verdict bands and the full method are on /validation.html.
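The weighted-ratio scoring described above can be sketched as follows. The heuristic names come from this document, but the weights and the PASS/FAIL/SKIP representation are illustrative choices (picked so that two failures among several passes land at 32), not the published weights:

```python
def risk_score(results: dict[str, str], weights: dict[str, float]) -> float:
    """Weighted ratio of failed weight over decided weight, scaled to 0-100.
    Heuristics that returned SKIP are excluded from both numerator and
    denominator, so an undecidable check never moves the score."""
    decided = {h: r for h, r in results.items() if r != "SKIP"}
    total = sum(weights[h] for h in decided)
    failed = sum(weights[h] for h, r in decided.items() if r == "FAIL")
    return 100 * failed / total if total else 0.0

weights = {"OWNER_NOT_RENOUNCED": 3, "LP_NOT_LOCKED": 5,
           "TOP10_CONCENTRATION_HIGH": 6, "MINTABLE": 5,
           "SOURCE_VERIFIED": 4, "DEPLOYER_HISTORY": 2,
           "HONEYPOT_SELL": 2}
results = {"OWNER_NOT_RENOUNCED": "FAIL", "LP_NOT_LOCKED": "FAIL",
           "TOP10_CONCENTRATION_HIGH": "PASS", "MINTABLE": "PASS",
           "SOURCE_VERIFIED": "PASS", "DEPLOYER_HISTORY": "PASS",
           "HONEYPOT_SELL": "SKIP"}
print(risk_score(results, weights))  # 32.0 -- two fails, five passes, one skip
```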

How agents pay

  1. Call the endpoint without payment headers.
  2. Server returns 402 Payment Required with an x402 challenge.
  3. Agent settles in USDC via the Coinbase CDP facilitator on Base.
  4. Server verifies the payment and returns the scan plus an x-payment-response header.

No API key. No signup. The wallet that pays is the identity. See the x402 spec.
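The four steps above can be sketched as a retry loop. The transport and the payment builder are injected here so the flow is visible without a live wallet; the X-PAYMENT header name follows our reading of the x402 spec, and everything else (function names, response shapes) is illustrative:

```python
from typing import Callable

def x402_call(send: Callable[[str, dict], tuple[int, dict]],
              build_payment: Callable[[dict], str],
              url: str) -> dict:
    """1) bare request; 2) on 402, read the challenge; 3) settle; 4) retry."""
    status, body = send(url, {})
    if status == 402:
        payment = build_payment(body)            # sign the USDC transfer
        status, body = send(url, {"X-PAYMENT": payment})
    if status != 200:
        raise RuntimeError(f"scan failed with HTTP {status}")
    return body

# Stubbed transport showing the challenge/settle round trip:
calls = []
def fake_send(url, headers):
    calls.append(headers)
    if "X-PAYMENT" not in headers:
        return 402, {"x402": "challenge"}
    return 200, {"verdict": "low_risk"}

scan = x402_call(fake_send, lambda challenge: "signed-transfer",
                 "https://rugguard.redfleet.fr/v1/scan/base/0x4ed4...")
print(scan["verdict"])  # low_risk
```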

Use via MCP — for agents that don't speak x402

Most agent runtimes (Claude Desktop, Cursor, MCP-aware LangGraph) speak MCP but not x402. RugGuard ships a thin MCP server that wraps scan_token and explain_scan as MCP tools — the agent doesn't see the payment friction. The MCP server holds a dedicated Base-mainnet wallet and signs each USDC transfer transparently, with session ($5) and 24h ($10) spending caps as defense in depth.

Onboard in three commands:

pip install rugguard-mcp                # (publication target — slim public package)
python -m rugguard.mcp init             # generate dedicated wallet at ~/.rugguard/wallet.json (mode 600)
python -m rugguard.mcp status           # show address + caps + 24h usage

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "rugguard": {
      "command": "python",
      "args": ["-m", "rugguard.mcp"]
    }
  }
}

Fund the wallet with 5–20 USDC on Base mainnet (Coinbase: withdraw → USDC → network Base). Treat the wallet balance itself as a hard cap on top of the in-process caps. The MCP server source is open for audit in the published rugguard-mcp package (Phase 0 follow-up).
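The in-process caps described above (session $5, rolling 24 h $10) can be enforced with a small tracker. This sketch is illustrative, not the shipped rugguard-mcp implementation:

```python
import time

class SpendCaps:
    """Refuse any spend that would exceed the session or rolling-24h cap."""
    def __init__(self, session_cap: float = 5.0, daily_cap: float = 10.0):
        self.session_cap = session_cap
        self.daily_cap = daily_cap
        self.session_spent = 0.0
        self.history = []  # list of (timestamp, usdc) within the 24h window

    def try_spend(self, usdc: float, now=None) -> bool:
        now = time.time() if now is None else now
        # Drop spends older than 24 hours from the rolling window.
        self.history = [(t, a) for t, a in self.history if now - t < 86400]
        day_spent = sum(a for _, a in self.history)
        if self.session_spent + usdc > self.session_cap:
            return False
        if day_spent + usdc > self.daily_cap:
            return False
        self.session_spent += usdc
        self.history.append((now, usdc))
        return True

caps = SpendCaps()
print(caps.try_spend(0.01))  # True -- one quick scan fits under both caps
```

The wallet balance remains the outermost cap: even a bug in a tracker like this cannot spend more than the wallet holds.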

Resources