AI readiness tools, inside Claude Code.
ZeroKit MCP is a single-file Python Model Context Protocol server. Add it to Claude Code, Cursor, or any MCP client and the AI can scan sites, generate llms.txt files, inspect JSON-LD structured data, and compare competitors — all as native tool calls. No pip dependencies. No API key. Install in two minutes.
[ TWO WAYS TO INSTALL ]
Hosted HTTP endpoint (no download) or single-file stdio. Pick the one your MCP client supports.
Option A: hosted HTTP (zero download)
If your MCP client supports URL-based servers, this is the fastest install. Add one block to your config and restart — nothing to save to disk, nothing to update when we ship new tools.
{
"mcpServers": {
"zerokit": {
"url": "https://zerokit.dev/mcp"
}
}
}
The endpoint speaks JSON-RPC 2.0 over HTTP (stateless Streamable HTTP transport, protocol version 2024-11-05). Every tool call runs inside our regular API server, so you get the same SSRF hardening, the same rate limits (30 req/min per IP), and the same latency as a direct curl.
Verify it works without any client install:
curl -s -X POST https://zerokit.dev/mcp \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
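The same request can be scripted with nothing but the Python standard library, which is handy for smoke tests. A minimal sketch (the jsonrpc_request and call_mcp helpers are ours, not part of ZeroKit):

```python
import json
import urllib.request

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body (bytes) for the MCP endpoint."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return json.dumps(body).encode()

def call_mcp(method, params=None, endpoint="https://zerokit.dev/mcp"):
    """POST one JSON-RPC request to the hosted endpoint, return the parsed reply."""
    req = urllib.request.Request(
        endpoint,
        data=jsonrpc_request(method, params),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())

# e.g. call_mcp("tools/list")["result"]["tools"] lists the available tools
```

Running `call_mcp("tools/list")` against the live endpoint should return the same tool list as the curl command above.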
Option B: stdio (single Python file)
If your client only speaks MCP over stdio (or you want a fully offline fallback), download the single-file Python server and point your config at it.
curl -o zerokit-mcp.py https://zerokit.dev/downloads/zerokit-mcp.py
Then add to ~/.claude/settings.json (or project-level .claude/settings.json):
{
"mcpServers": {
"zerokit": {
"command": "python3",
"args": ["/absolute/path/to/zerokit-mcp.py"]
}
}
}
559 lines, Python 3.10+, standard library only, no pip dependencies. Read it, audit it, modify it.
Install in Claude Code
Claude Code supports both the hosted URL and the stdio subprocess formats above. Restart Claude after saving the config — new MCP tools appear prefixed mcp__zerokit__. Ask Claude: "Check the AI readiness of github.com using the zerokit MCP tools."
Install in Cursor / Zed / custom clients
Cursor uses the same MCP config format. Zed's agent mode, Continue.dev, and custom agents written against the Anthropic MCP SDK all accept the same mcpServers block. Pick URL or stdio based on what your client supports — the tool surface is identical either way.
The seven tools
- ai_readiness_check: Fast scan. Returns a 0–100 score, an A+ through F grade, category breakdowns (robots.txt, llms.txt, Schema.org, content citability, meta directives), and recommendations. Takes 3–6 seconds.
- ai_readiness_extended: Full scan including Wayback Machine history, LLM knowability signals (Wikipedia, Common Crawl, DuckDuckGo), and bot-cloaking detection. Takes 15–30 seconds.
- ai_visibility_check: Answers "does ChatGPT know about my site?" by checking Wikipedia mentions, Common Crawl indexing, DuckDuckGo abstracts, and Wayback history. Returns a visibility score plus a plain-language verdict.
- ai_readiness_compare: Head-to-head scan of two URLs in parallel. Returns both scores, the winner with delta, and a category-by-category diff. Use it for competitive research.
- ai_readiness_leaderboard: Returns the Top 100 AI Readiness leaderboard (State of AI Crawlers 2026 dataset). Parameter: limit (1–100, default 10).
- llms_txt_generate: Fetches a homepage and generates a starter llms.txt in the llmstxt.org format. Includes site name, description, and intelligently extracted internal links.
- schema_inspect: Parses every JSON-LD block on a page, classifies Schema.org types, and returns an AI-citation coverage score plus priority-sorted recommendations for missing types.
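All seven tools are invoked the same way on the wire: a JSON-RPC tools/call request carrying the tool name and its arguments, per the MCP specification. A sketch of that envelope, using the leaderboard's documented limit parameter (the tools_call helper is illustrative, not part of ZeroKit):

```python
import json

def tools_call(name, arguments, req_id=1):
    """Wrap an MCP tool invocation in a JSON-RPC 2.0 tools/call envelope."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The leaderboard tool documents a single parameter, limit (1-100, default 10):
payload = tools_call("ai_readiness_leaderboard", {"limit": 25})
print(json.dumps(payload, indent=2))
```

For the other tools, check the tools/list response for each tool's input schema before constructing the arguments object.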
Prompt examples
Once installed, try these in Claude Code:
- "Scan the AI readiness of my own site and tell me the three biggest problems."
- "Compare github.com and gitlab.com on AI readiness. Which one is better prepared for ChatGPT citations?"
- "Generate an llms.txt for my company's homepage and show me the output."
- "Inspect the JSON-LD on the New York Times homepage. What's missing?"
- "Give me the top 10 AI-ready sites from the 2026 leaderboard."
- "Does ChatGPT know about my blog? Run the visibility check against blog.example.com."
How it works under the hood
The single-file server speaks MCP JSON-RPC 2.0 over stdio; the hosted endpoint speaks the same JSON-RPC over stateless Streamable HTTP. Each tool is a thin wrapper around a public endpoint at zerokit.dev/api/. The handlers, schemas, and transport are all in one file, so you can read and audit the whole thing.
All HTTP calls use Python's urllib.request with a 60-second timeout and a custom User-Agent. Each upstream endpoint is SSRF-hardened on the server side, so passing http://127.0.0.1 as a scan target returns a 400 error, not a loopback request.
Rate limits apply: 30 requests per minute per client IP across all endpoints. If you hit the limit, tool calls return an isError response with a plain-text explanation and Claude can retry with backoff.
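A small client-side retry loop can absorb those isError responses without manual intervention. A sketch, assuming any isError result during a burst is a rate-limit hit (the real response text may distinguish other failures, so treat this as illustrative):

```python
import time

def call_with_backoff(call, max_retries=4, base_delay=2.0):
    """Retry a tool call with exponential backoff on rate-limit errors.

    `call` is any zero-argument function returning an MCP tool result dict;
    results with isError=True are treated as retryable rate-limit hits.
    """
    result = {"isError": True}
    for attempt in range(max_retries):
        result = call()
        if not result.get("isError"):
            return result
        # 2s, 4s, 8s, ... stays comfortably under 30 req/min per IP
        time.sleep(base_delay * (2 ** attempt))
    return result  # give up and surface the last error
```

Claude Code will often retry on its own when it sees the plain-text error; this pattern is mainly useful for scripted batch scans.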
Troubleshooting
Claude Code doesn't see the tools after restart? Check ~/.claude/logs/mcp-*.log for the startup line [zerokit-mcp] zerokit v1.0.0 ready. If it's missing, your Python path might be wrong in the config. Use which python3 to get the absolute path.
Getting "URL rejected" errors? That's the upstream SSRF filter. Private IPs, loopback, link-local, and non-HTTP schemes are blocked by design. Use a public URL.
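If you want to avoid burning a tool call on a URL that will be rejected, you can approximate the filter locally. A rough sketch based on the blocked classes listed above (the upstream filter's exact rules aren't published, so this is only an approximation):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def looks_scannable(url):
    """Rough client-side mirror of the upstream SSRF filter: reject
    non-HTTP(S) schemes and hosts that resolve to private, loopback,
    or link-local addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Literal IPs pass through unchanged; hostnames go through DNS
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

This is a convenience pre-check only; the server-side filter remains the source of truth.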
Tool calls time out on extended scans? ai_readiness_extended and ai_visibility_check take 15–30 seconds because they hit Wikipedia, Common Crawl, and the Wayback Machine. The server's internal timeout is 60 seconds; your MCP client may have its own. Check client settings.
Source and license
The full script is available for direct download. Read it, modify it, mirror it, whatever you need. No attribution required, but a link back to ZeroKit.dev is appreciated. Commercial use within the published rate limits is welcome.
Found a bug? File an issue or email the operator via impressum. Pull requests are welcome through the normal channel.
Scores are heuristic and based on public signals scanned at request time. Not a substitute for a professional security or SEO audit.