See what AI crawlers actually receive.
This tool fetches any URL four times in sequence — once as a real Chrome browser, once as Googlebot, once as GPTBot, and once as ClaudeBot — and lays the four responses side by side. If a site ships a different homepage to bots than to humans, you see it immediately. Every result is reproducible at /api/cloak?url=....
Matrix results
Interpretation notes
How the detector works
We fetch the URL four times in quick succession — once each as Chrome 125, Googlebot/2.1, GPTBot/1.0, and ClaudeBot/1.0. Every fetch uses our SSRF-hardened fetcher (safe_fetch), so loopback, RFC 1918, and link-local targets are rejected before the scan starts.
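The fetch step can be sketched as follows. This is a minimal illustration, not the tool's actual safe_fetch: the user-agent strings are approximations of what these bots send, the timeout is an assumed value, and the address check resolves the hostname and rejects loopback, RFC 1918 private, and link-local targets before any request is made.

```python
import ipaddress
import socket
import urllib.request

# Approximate user-agent strings for the four personas (illustrative, not exact).
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/125.0.0.0",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "gptbot": "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)",
    "claudebot": "Mozilla/5.0 (compatible; ClaudeBot/1.0)",
}

def is_public_address(host: str) -> bool:
    """Resolve host and reject loopback, RFC 1918 private, and link-local targets."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable hosts are treated as unsafe
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_loopback or ip.is_private or ip.is_link_local:
            return False
    return True

def fetch_as(url: str, persona: str) -> bytes:
    """Fetch a URL under one persona, after the SSRF check passes."""
    host = urllib.request.urlparse(url).hostname or ""
    if not is_public_address(host):
        raise ValueError(f"refusing to fetch non-public target: {host}")
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENTS[persona]})
    with urllib.request.urlopen(req, timeout=10) as resp:  # assumed timeout
        return resp.read()
```

In the real tool the four fetches run in quick succession against the same URL; here each call is independent so the sketch stays self-contained.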
For each bot, we compare the response against the browser baseline on five axes: HTTP status code, body length, word count, page title, and a SHA-256 hash of the first 100 KB of content. Significant divergence on any axis becomes a signal. Signals are counted per bot and mapped to a severity level: none (0 signals), minor (1), moderate (2–3), severe (4+).
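The five-axis comparison and severity mapping can be sketched like this. The divergence thresholds for body length and word count are assumed values for illustration; the tool's actual cutoffs are not published. The 100 KB hash prefix and the severity buckets follow the description above.

```python
import hashlib

def hash_prefix(body: bytes) -> str:
    """SHA-256 of the first 100 KB of content."""
    return hashlib.sha256(body[:100 * 1024]).hexdigest()

def compare(baseline: dict, bot: dict) -> list[str]:
    """Compare one bot response to the browser baseline on the five axes.

    Each dict holds: status, length, words, title, hash.
    Returns the list of axes that diverged significantly.
    """
    signals = []
    if bot["status"] != baseline["status"]:
        signals.append("status")
    # 20% relative divergence threshold is an assumption, not the tool's value.
    if abs(bot["length"] - baseline["length"]) / max(baseline["length"], 1) > 0.2:
        signals.append("length")
    if abs(bot["words"] - baseline["words"]) / max(baseline["words"], 1) > 0.2:
        signals.append("words")
    if bot["title"] != baseline["title"]:
        signals.append("title")
    if bot["hash"] != baseline["hash"]:
        signals.append("hash")
    return signals

def severity(signal_count: int) -> str:
    """Map a per-bot signal count to the severity levels described above."""
    if signal_count == 0:
        return "none"
    if signal_count == 1:
        return "minor"
    if signal_count <= 3:
        return "moderate"
    return "severe"
```

For example, a bot that receives a 403 with a different title and body hash trips three signals and lands in the moderate bucket.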
The same logic is available as a sub-module of the full AI Readiness Checker, but this page surfaces the per-bot matrix directly instead of burying it under an aggregated score.
Known limitations
Cloaking detection is heuristic. Sites that legitimately vary content by session state, A/B test bucket, geography, or anti-bot WAF challenges can trip the signals without being guilty of deliberate cloaking. Use the per-signal breakdown in each column to judge individual cases.
All scans run from a single Hetzner cloud IP in Germany. Some sites cloak on source IP in addition to User-Agent — a fetch from a residential IP can return different results. The X.com case study in our bot cloaking blog post documents exactly this pattern.
Related tools
AI Readiness Checker · Schema Inspector · OpenGraph Inspector · Read the data report
Scores are heuristic and based on public signals scanned at request time. Not a substitute for a professional security or SEO audit.