Automated host analysis

zerokit.dev

AI Readiness analysis based on 27 scans, most recently 2026-04-13 03:21 UTC. Programmatic scoring across 5 categories.

Score: 91 / 100
Grade: A+
Rank: #1 of 39
Percentile: 97.4
Scans: 27

What an A+ grade means for this host

A site that scores 91/100 is in the top 3% of everything scanned on ZeroKit. It publishes explicit robots.txt directives for the major AI crawlers, ships a well-formed llms.txt manifest, carries multiple JSON-LD blocks, and writes text that a retrieval model can quote directly. There are very few wins left to chase at this level — the focus shifts from adding signals to keeping the existing ones consistent as the site grows.

Score breakdown by category

ZeroKit's rubric splits AI Readiness into five independent categories. A host earns points within each category up to a fixed maximum, and the overall 0-100 score is the sum of the five. The category cards below show where the zerokit.dev score came from.
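The arithmetic above can be sketched directly. This is a minimal illustration, not ZeroKit's actual scoring code; the category names are shorthand invented here, while the maxima (20, 25, 30, 15, 10) are the ones the cards report.

```python
# Hypothetical sketch of the rubric: per-category points, each clamped to a
# fixed maximum, summed into the overall 0-100 score.
CATEGORY_MAX = {
    "llms_txt": 20,        # llms.txt manifest
    "json_ld": 25,         # Schema.org JSON-LD
    "robots_ai": 30,       # Robots.txt AI directives
    "citability": 15,      # Content citability
    "meta_directives": 10, # AI-aware meta directives
}

def overall_score(points: dict[str, int]) -> int:
    """Clamp each category to its maximum and sum to a 0-100 score."""
    return sum(min(points.get(name, 0), cap) for name, cap in CATEGORY_MAX.items())

# zerokit.dev's reported breakdown sums to the headline score:
print(overall_score({"llms_txt": 20, "json_ld": 25, "robots_ai": 27,
                     "citability": 13, "meta_directives": 6}))  # → 91
```

Because the maxima sum to 100, a host that maxes every category scores exactly 100.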

llms.txt manifest

20 / 20

Strong — 100%

Whether the site ships an llms.txt file at the root — the emerging convention for handing an LLM a curated site map with headings, summaries, and direct links to the parts of the site a reader should actually see.

A well-formed llms.txt (with H1, H2 sections, summary blockquote, and a reasonable link count) tells any LLM exactly what the site is about without scraping ten pages.
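A skeleton with those elements might look like the following. This is an illustrative example following the emerging convention, not zerokit.dev's actual file; the section names and URLs are placeholders.

```markdown
# ZeroKit

> One- or two-sentence summary blockquote: what the site is and who it serves.

## Docs

- [AI Readiness Checker](https://zerokit.dev/checker): what each rubric signal means
- [Scoring rubric](https://zerokit.dev/rubric): category maxima and grade bands

## Optional

- [Changelog](https://zerokit.dev/changelog): release notes
```

The H1 names the site, the blockquote summarizes it, and each H2 section groups a short, curated list of links with one-line descriptions.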

Schema.org JSON-LD

25 / 25

Strong — 100%

Whether pages carry parseable JSON-LD blocks (Organization, WebSite, Article, FAQPage, SoftwareApplication, BreadcrumbList…). Structured data is the canonical way to tell both search engines and LLMs what a page IS, not just what it says.

Multiple schema types in play, covering the site's identity, its content, and its navigation. AI answer engines cite structured pages at disproportionately high rates.
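A typical block looks like the following. The names and URLs here are placeholders, not zerokit.dev's actual markup; shipping one `<script>` per schema type is a common pattern.

```html
<!-- Illustrative JSON-LD: identifies the site and its publisher. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "ZeroKit",
  "url": "https://zerokit.dev/",
  "publisher": {
    "@type": "Organization",
    "name": "ZeroKit",
    "url": "https://zerokit.dev/"
  }
}
</script>
```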

Robots.txt AI directives

27 / 30

Strong — 90%

Whether the site publishes an explicit allow/disallow list for the major AI crawlers (GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot, and friends) so training and live-retrieval agents know where they stand.

This site publishes a clean, parseable AI-bot section in robots.txt. That's the single strongest signal an AI crawler can read in under 5 ms.
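Such a section might look like the following. The bot list matches the crawlers named above, but the allow/disallow choices are placeholders for illustration, not this site's actual policy.

```
# Illustrative AI-crawler section for robots.txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Disallow: /
```

The value is explicitness: each agent gets an unambiguous answer instead of falling through to a wildcard rule it may interpret differently.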

Content citability

13 / 15

Partial — 86%

Whether the page is easy for a retrieval model to quote: does it use real headings, short paragraphs, named entities, clear fact statements, and server-rendered text (not JavaScript placeholders)?

Content is readable but occasionally buried in client-side rendering or generic boilerplate. Pre-render the hero and the top two sections of each page so the HTML source contains the quotable text.

AI-aware meta directives

6 / 10

Partial — 60%

Whether pages set the specific meta tags (noai, noimageai, robots, googlebot) that modern AI pipelines honour — and whether those directives are internally consistent with the robots.txt policy.

Meta directives are partially set. Audit the <head> across templates and ensure each public page carries <meta name="robots" content="index,follow"> at minimum.
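A consistent `<head>` fragment might look like this. The values shown are placeholders illustrating a policy that matches an allow-all robots.txt, not a prescription.

```html
<!-- Baseline indexing directives for a public page. -->
<meta name="robots" content="index,follow">
<meta name="googlebot" content="index,follow">
<!-- Set "noai" / "noimageai" explicitly only when AI use is NOT permitted,
     so the meta layer agrees with the robots.txt policy. -->
```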

What's working

These are the categories where zerokit.dev is already giving AI crawlers what they look for. Hold the line here; regressions in the working categories cost more than gains in the weak ones.

What needs work

The biggest gains on this host live in the categories below. Each fix is independent — you do not need to finish one before starting the next — and each fix is a one-time piece of work that keeps paying out every time an AI crawler visits.

How we scan

ZeroKit fetches the target URL once per scan with a standard browser User-Agent, then makes a small number of additional requests for the conventional files an AI crawler would look for: /robots.txt, /llms.txt, /sitemap.xml, and a handful of well-known meta endpoints. The returned HTML is parsed for JSON-LD blocks, OpenGraph, Twitter Cards, and AI-specific meta directives. Nothing private is stored — only the derived score and category breakdown.
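The probe list described above can be sketched as follows. This is an assumption-laden illustration, not ZeroKit's actual scanner: the three paths are the ones named in this section, and resolving them against the target's scheme and host is an inference.

```python
from urllib.parse import urljoin, urlsplit

# Conventional root-level files an AI crawler would check (from this page;
# the "handful of well-known meta endpoints" is omitted as unspecified).
CONVENTIONAL_PATHS = ["/robots.txt", "/llms.txt", "/sitemap.xml"]

def probe_urls(target: str) -> list[str]:
    """Resolve the conventional probe paths against the target's origin."""
    parts = urlsplit(target)
    root = f"{parts.scheme}://{parts.netloc}/"
    return [urljoin(root, path) for path in CONVENTIONAL_PATHS]

print(probe_urls("https://zerokit.dev/some/page"))
# → ['https://zerokit.dev/robots.txt', 'https://zerokit.dev/llms.txt',
#    'https://zerokit.dev/sitemap.xml']
```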

The exact rubric and every signal we check is documented on the AI Readiness Checker page. This analysis is regenerated daily when fresh scan data lands in the database.


Analysis auto-generated from the most recent scan data for zerokit.dev. The narrative text is templated from the score distribution and does not imply manual review. For a human-reviewed audit, purchase the full $19 report.