Is your robots.txt AI-ready?
Paste a URL. We fetch the site's /robots.txt and check it for explicit rules on the 10 major AI crawlers — GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot, and more. Free, no signup, no data stored.
Tip: you can drop the leading https://; we add it for you. Private-network URLs are rejected.
[ AI BOT COVERAGE ]
What a good robots.txt looks like in 2026
A modern robots.txt ships explicit sections for every AI crawler you care about. Not blanket blocks, not blanket allows — explicit decisions. A site that lists GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot, Applebot-Extended, FacebookBot, and Bytespider by name tells AI crawlers that a human thought about this file. That is the single strongest signal an AI crawler can read in under 5 ms.
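For illustration, a file in that spirit might look like the sketch below. The per-bot allow/disallow choices are placeholders for your own decisions, and example.com stands in for your domain:

```
# Explicit, per-bot decisions
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Disallow: /

# ...one section per remaining AI bot you track

# Default policy for unknown crawlers
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```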
How this validator scores
- 60 points: fraction of the 10 AI bots above that have an explicit User-agent: section. 6 of 10 = 36 points.
- 20 points: a Sitemap: directive at the top or bottom of the file, pointing at your /sitemap.xml.
- 10 points: zero parse errors. Malformed lines, such as a Disallow with no colon, cost a point each.
- 10 points: a User-agent: * section that sets a default policy for unknown crawlers.
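Putting the four buckets together, here is a minimal sketch of how such a rubric could be computed. It assumes a pre-parsed file; AI_BOTS, ParsedRobots, and scoreRobotsTxt are illustrative names, not the validator's actual code:

```typescript
// Illustrative rubric only; not the validator's real implementation.
const AI_BOTS = [
  "GPTBot", "ClaudeBot", "Google-Extended", "PerplexityBot",
  "CCBot", "Applebot-Extended", "FacebookBot", "Bytespider",
  // ...plus the remaining two bots on the validator's list of 10
];
const TOTAL_BOTS = 10;

interface ParsedRobots {
  userAgents: string[]; // every User-agent: value in the file
  sitemaps: string[];   // every Sitemap: URL in the file
  parseErrors: number;  // malformed lines, e.g. "Disallow" with no colon
}

function scoreRobotsTxt(robots: ParsedRobots): number {
  // 60 pts: fraction of the tracked AI bots with an explicit section
  const covered = AI_BOTS.filter((bot) =>
    robots.userAgents.some((ua) => ua.toLowerCase() === bot.toLowerCase())
  ).length;
  let score = 60 * (covered / TOTAL_BOTS); // 6 of 10 -> 36

  // 20 pts: at least one Sitemap: directive
  if (robots.sitemaps.length > 0) score += 20;

  // 10 pts for a clean parse; each malformed line costs a point
  score += Math.max(0, 10 - robots.parseErrors);

  // 10 pts: a User-agent: * section as the default policy
  if (robots.userAgents.includes("*")) score += 10;

  return Math.round(score);
}
```

Under this sketch, a file with 6 of 10 bots covered, a sitemap, a clean parse, and a wildcard section scores 36 + 20 + 10 + 10 = 76.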
Related tools
AI Readiness Checker runs this validator alongside four other categories · llms.txt Validator is the companion tool for the other core AI discovery file · Live leaderboard shows every site we’ve checked.
We fetch /robots.txt once per validation with a standard browser User-Agent. Nothing is stored. The scoring rubric is a subset of the AI Readiness Checker's robots.txt category.
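The fetch step itself is simple enough to sketch. This version folds in the URL normalization and private-network rejection mentioned in the tip above; fetchRobotsTxt, the User-Agent string, the timeout, and the simplified private-range check are all assumptions, not the tool's exact behavior:

```typescript
// Illustrative fetch step; headers, timeout, and the IP check are assumptions.
async function fetchRobotsTxt(input: string): Promise<string> {
  // Accept bare domains: strip any scheme the user typed, then add https://
  const host = input.replace(/^https?:\/\//, "").split("/")[0];
  const url = new URL("/robots.txt", `https://${host}`);

  // Simplified stand-in for "Private-network URLs are rejected";
  // a real guard would also resolve DNS and cover ranges like 172.16/12.
  if (url.hostname === "localhost" || /^(10|127|192\.168|169\.254)\./.test(url.hostname)) {
    throw new Error("private-network URLs are rejected");
  }

  const res = await fetch(url, {
    headers: { "User-Agent": "Mozilla/5.0 ..." }, // a standard browser UA
    signal: AbortSignal.timeout(10_000),          // one bounded fetch per validation
  });
  if (!res.ok) throw new Error(`robots.txt fetch failed: ${res.status}`);
  return res.text(); // parsed in memory; nothing is stored
}
```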