Automated host analysis

anthropic.com

AI Readiness analysis based on 10 scans, most recently 2026-04-13 03:22 UTC. Programmatic scoring across 5 categories.

Score: 43/100 · Grade: D · Rank: #20 of 39 · Percentile: 48.7 · Scans: 10

What a D grade means for this host

A D at 43/100 tells an AI crawler that the site is public but that nobody has explicitly built the five surfaces it looks for. That does not mean the content is bad — it means the content is harder to cite, harder to retrieve, and easier to miss. The specific fixes below are almost always a half-day of work in total.

Score breakdown by category

ZeroKit's rubric splits AI Readiness into 5 independent categories. A host scores points inside each category up to a fixed maximum, and the overall 0-100 score is the sum. The five cards below show where the anthropic.com score came from.

Content citability

13 / 15

Partial — 86%

Whether the page is easy for a retrieval model to quote: does it use real headings, short paragraphs, named entities, clear fact statements, and server-rendered text (not JavaScript placeholders)?

Content is readable but occasionally buried in client-side rendering or generic boilerplate. Pre-render the hero and the top two sections of each page so the HTML source contains the quotable text.
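To make the pre-rendering advice concrete, here is a minimal before/after sketch; the element names and the example fact are hypothetical, not taken from anthropic.com:

```html
<!-- Harder to cite: the quotable text only exists after JavaScript runs -->
<div id="hero"></div>
<script>
  document.getElementById('hero').textContent = 'Acme ships encrypted backups';
</script>

<!-- Easier to cite: the same fact is present in the server-rendered HTML source -->
<section id="hero">
  <h1>Acme ships encrypted backups</h1>
  <p>Every backup is encrypted client-side before upload.</p>
</section>
```

A retrieval model that fetches raw HTML can quote the second version without executing any scripts.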

AI-aware meta directives

6 / 10

Partial — 60%

Whether pages set the specific meta tags (noai, noimageai, robots, googlebot) that modern AI pipelines honour — and whether those directives are internally consistent with the robots.txt policy.

Meta directives are partially set. Audit the <head> across templates and ensure each public page carries <meta name="robots" content="index,follow"> at minimum.
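A minimal `<head>` sketch matching that advice; the exact tokens a given pipeline honours vary, and the `noai`/`noimageai` values follow the informal convention the rubric names rather than any formal standard:

```html
<head>
  <!-- Baseline: explicitly allow indexing on every public page -->
  <meta name="robots" content="index,follow">
  <meta name="googlebot" content="index,follow">
  <!-- Optional: stay indexable while opting out of AI training,
       per the informal noai convention -->
  <!-- <meta name="robots" content="index,follow,noai,noimageai"> -->
</head>
```

Whichever directives you choose, keep them consistent with robots.txt: a page that robots.txt disallows but whose meta tags say index,follow sends the contradictory signal this category penalises.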

Schema.org JSON-LD

12 / 25

Weak — 48%

Whether pages carry parseable JSON-LD blocks (Organization, WebSite, Article, FAQPage, SoftwareApplication, BreadcrumbList…). Structured data is the canonical way to tell both search engines and LLMs what a page IS, not just what it says.

Little or no JSON-LD detected. Start with the basics: Organization on the homepage, WebSite with SearchAction, Article on every blog post. Each block is 10-30 lines of JSON in a <script type="application/ld+json"> tag.
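A sketch of the homepage Organization block described above; the names and URLs are placeholders, not real values for this host:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://github.com/example",
    "https://www.linkedin.com/company/example"
  ]
}
</script>
```

The WebSite and Article blocks follow the same pattern with their own `@type` and properties.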

Robots.txt AI directives

12 / 30

Weak — 40%

Whether the site publishes an explicit allow/disallow list for the major AI crawlers (GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot, and friends) so training and live-retrieval agents know where they stand.

There is no explicit AI-bot section, or robots.txt is missing entirely. Add a User-agent: GPTBot block (Allow or Disallow + the paths you want covered) and repeat for ClaudeBot, Google-Extended, PerplexityBot and CCBot. 10 lines, permanent win.
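A sketch of what that robots.txt section could look like; the per-bot Allow/Disallow choices and paths below are placeholders for whatever policy the site actually wants:

```text
# Explicit policy for AI crawlers (paths are examples)
User-agent: GPTBot
Allow: /blog/
Disallow: /private/

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Disallow: /
```

Either answer scores here; what the rubric rewards is that each major crawler gets an explicit, unambiguous rule.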

llms.txt manifest

0 / 20

Weak — 0%

Whether the site ships an llms.txt file at the root — the emerging convention for handing an LLM a curated site map with headings, summaries, and direct links to the parts of the site a reader should actually see.

No llms.txt at the site root. Create /llms.txt (and optionally /llms-full.txt) following the spec at llmstxt.org. It's the cheapest signal to fix and the hardest for competitors to copy because it requires actually thinking about what the site is for.
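A skeletal /llms.txt in the shape the llmstxt.org spec describes (an H1 title, a blockquote summary, then H2 sections of annotated links); all names and URLs here are placeholders:

```markdown
# Example Co

> One-line summary of what the site is and who it serves.

## Docs

- [Quickstart](https://www.example.com/docs/quickstart): install and first run
- [API reference](https://www.example.com/docs/api): endpoints and authentication

## Blog

- [Launch post](https://www.example.com/blog/launch): why the product exists
```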

What's working

These are the categories where anthropic.com is already giving AI crawlers what they look for. Hold the line here; regressions in the working categories cost more than gains in the weak ones.

What needs work

The biggest gains on this host live in the categories below. Each fix is independent — you do not need to finish one before starting the next — and each fix is a one-time piece of work that keeps paying out every time an AI crawler visits.

How we scan

ZeroKit fetches the target URL once per scan with a standard browser User-Agent, then makes a small number of additional requests for the conventional files an AI crawler would look for: /robots.txt, /llms.txt, /sitemap.xml, and a handful of well-known meta endpoints. The returned HTML is parsed for JSON-LD blocks, OpenGraph, Twitter Cards, and AI-specific meta directives. Nothing private is stored — only the derived score and category breakdown.
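The JSON-LD extraction step described above can be sketched with the Python standard library alone. This is a minimal illustration of the technique, not ZeroKit's actual implementation; the class and function names are invented for this example:

```python
import json
from html.parser import HTMLParser


class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            try:
                self.blocks.append(json.loads(data))
            except ValueError:
                pass  # unparseable JSON-LD earns no points for that block


def extract_jsonld(html: str) -> list:
    """Return every parseable JSON-LD object found in an HTML document."""
    parser = JSONLDExtractor()
    parser.feed(html)
    return parser.blocks


page = (
    '<html><head><script type="application/ld+json">'
    '{"@type": "Organization", "name": "Example Co"}'
    "</script></head></html>"
)
print(extract_jsonld(page))
```

A real scanner would add fetching, timeouts, and scoring on top, but the parse-and-validate core is this small.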

The exact rubric and every signal we check is documented on the AI Readiness Checker page. This analysis is regenerated daily when fresh scan data lands in the database.


Analysis auto-generated from the most recent scan data for anthropic.com. The narrative text is templated from the score distribution and does not imply manual review. For a human-reviewed audit, purchase the full report.