Automated host analysis

bunqueue.dev

AI Readiness analysis based on 4 scans, most recently 2026-04-13 03:21 UTC. Programmatic scoring across 5 categories.

Score: 68 / 100 — grade C
Rank: #5 of 39 (87.2nd percentile)
Scans: 4

What a C grade means for this host

A C (68/100) means the site is readable to most AI systems but has meaningful gaps in how it hands them structured signals. One or two of the categories below are probably doing most of the damage. Fixing even the lowest-scoring category is usually an hour of work and can move the overall grade by a full letter.

Score breakdown by category

ZeroKit's rubric splits AI Readiness into 5 independent categories. A host scores points inside each category up to a fixed maximum, and the overall 0-100 score is the sum. The five cards below show where the bunqueue.dev score came from.

Schema.org JSON-LD

22 / 25

Partial — 88%

Whether pages carry parseable JSON-LD blocks (Organization, WebSite, Article, FAQPage, SoftwareApplication, BreadcrumbList…). Structured data is the canonical way to tell both search engines and LLMs what a page IS, not just what it says.

Some schema blocks are present but coverage is uneven. Audit each template (article, FAQ, product, organization) and make sure each emits its canonical type.
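For reference, a minimal Organization block of the kind this category checks for might look like the following. The name, URL, and logo path are placeholders, not values taken from the live site:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Bunqueue",
  "url": "https://bunqueue.dev",
  "logo": "https://bunqueue.dev/logo.png"
}
</script>
```

Each template (article, FAQ, product) would emit its own type the same way — Article, FAQPage, SoftwareApplication, and so on.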

Content citability

13 / 15

Partial — 86%

Whether the page is easy for a retrieval model to quote: does it use real headings, short paragraphs, named entities, clear fact statements, and server-rendered text (not JavaScript placeholders)?

Content is readable but occasionally buried in client-side rendering or generic boilerplate. Pre-render the hero and the top two sections of each page so the HTML source contains the quotable text.

llms.txt manifest

15 / 20

Partial — 75%

Whether the site ships an llms.txt file at the root — the emerging convention for handing an LLM a curated site map with headings, summaries, and direct links to the parts of the site a reader should actually see.

llms.txt exists but is thin — no summary, no sections, or very few links. Flesh it out: one H1 with the site name, a 2-3 sentence summary blockquote, and H2 sections grouping the key pages.
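A fleshed-out llms.txt following that shape could look roughly like this. Section names and links are hypothetical — substitute the host's real key pages:

```markdown
# Bunqueue

> Two-to-three sentence summary of what the site offers, written for a
> model that will see nothing else before deciding which links to follow.

## Docs

- [Getting started](https://bunqueue.dev/docs/getting-started): install and first queue
- [API reference](https://bunqueue.dev/docs/api): endpoints and payloads

## Company

- [About](https://bunqueue.dev/about): who builds Bunqueue
```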

AI-aware meta directives

6 / 10

Partial — 60%

Whether pages set the specific meta tags (noai, noimageai, robots, googlebot) that modern AI pipelines honour — and whether those directives are internally consistent with the robots.txt policy.

Meta directives are partially set. Audit the <head> across templates and ensure each public page carries <meta name="robots" content="index,follow"> at minimum.
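A minimal, consistent <head> fragment satisfying that baseline might be the following. The noai/noimageai line is shown commented out as an opt-out option, not a recommendation:

```html
<head>
  <!-- Baseline indexing directive for every public page -->
  <meta name="robots" content="index,follow">
  <!-- Optional: opt a page out of AI training while keeping it indexable -->
  <!-- <meta name="robots" content="noai, noimageai"> -->
</head>
```

Whatever is set here should agree with robots.txt — a page that meta-allows indexing while robots.txt disallows its path sends mixed signals.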

Robots.txt AI directives

12 / 30

Weak — 40%

Whether the site publishes an explicit allow/disallow list for the major AI crawlers (GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot, and friends) so training and live-retrieval agents know where they stand.

There is no explicit AI-bot section, or robots.txt is missing entirely. Add a User-agent: GPTBot block (Allow or Disallow + the paths you want covered) and repeat for ClaudeBot, Google-Extended, PerplexityBot and CCBot. 10 lines, permanent win.
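A sketch of such a robots.txt section, with illustrative Allow/Disallow choices — each site should decide its own policy per bot:

```txt
# Explicit policy for the major AI crawlers — one block per bot
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Disallow: /
```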

What's working

These are the categories where bunqueue.dev is already giving AI crawlers what they look for. Hold the line here; regressions in the working categories cost more than gains in the weak ones.

What needs work

The biggest gains on this host live in the categories below. Each fix is independent — you do not need to finish one before starting the next — and each fix is a one-time piece of work that keeps paying out every time an AI crawler visits.

How we scan

ZeroKit fetches the target URL once per scan with a standard browser User-Agent, then makes a small number of additional requests for the conventional files an AI crawler would look for: /robots.txt, /llms.txt, /sitemap.xml, and a handful of well-known meta endpoints. The returned HTML is parsed for JSON-LD blocks, OpenGraph, Twitter Cards, and AI-specific meta directives. Nothing private is stored — only the derived score and category breakdown.
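The JSON-LD extraction step described above can be sketched in a few lines of Python. This is an illustrative approximation, not ZeroKit's published parser — the regex and function name are assumptions:

```python
import json
import re

# Pull every <script type="application/ld+json"> block out of fetched HTML
# and parse it. Malformed blocks are skipped, mirroring the idea that they
# would simply score no points.
JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_jsonld(html: str) -> list[dict]:
    blocks = []
    for match in JSONLD_RE.finditer(html):
        try:
            blocks.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            pass  # unparseable JSON-LD is ignored
    return blocks

sample = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization", "name": "Example"}
</script>
</head></html>"""

print([b.get("@type") for b in extract_jsonld(sample)])  # → ['Organization']
```

The real pipeline would run the same pass over OpenGraph tags, Twitter Cards, and the AI-specific meta directives before deriving the category scores.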

The exact rubric and every signal we check are documented on the AI Readiness Checker page. This analysis is regenerated daily when fresh scan data lands in the database.


Analysis auto-generated from the most recent scan data for bunqueue.dev. The narrative text is templated from the score distribution and does not imply manual review. For a human-reviewed audit, purchase the full $19 report.