A grade of D at 50/100 tells an AI crawler that the site is public but that nobody has explicitly built the five surfaces it looks for. That does not mean the content is bad; it means the content is harder to cite, harder to retrieve, and easier to miss. The specific fixes below are almost always a half-day of work in total.
Score breakdown by category
ZeroKit's rubric splits AI Readiness into five independent categories. A host scores points inside each category up to a fixed maximum, and the overall 0-100 score is the sum of the five. The five cards below show where the apple.com score came from.
Content citability
13 / 15
Partial — 86%
Whether the page is easy for a retrieval model to quote: does it use real headings, short paragraphs, named entities, clear fact statements, and server-rendered text (not JavaScript placeholders)?
Content is readable but occasionally buried in client-side rendering or generic boilerplate. Pre-render the hero and the top two sections of each page so the HTML source contains the quotable text.
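One quick way to verify the pre-render worked is to check that the quotable phrase appears in the raw HTML payload outside any script tag, which approximates what a non-JavaScript retrieval crawler can actually quote. The sketch below uses only the Python standard library; the sample HTML strings are illustrative, not taken from apple.com.

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collects text present in the raw HTML payload, skipping
    <script>, <style>, and <noscript> bodies -- roughly what a
    crawler that does not execute JavaScript can see."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.depth = 0       # nesting level inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:
            self.chunks.append(data)

def is_server_rendered(html: str, phrase: str) -> bool:
    """True if `phrase` is visible in the server-sent HTML."""
    parser = VisibleText()
    parser.feed(html)
    return phrase in " ".join(parser.chunks)

# A pre-rendered hero passes; a client-side placeholder does not.
rendered = "<html><body><h1>MacBook Pro</h1></body></html>"
placeholder = ('<html><body><div id="root"></div>'
               '<script>mount("MacBook Pro")</script></body></html>')
```

Running the check against a production page means fetching the HTML with a plain HTTP client (no headless browser) so JavaScript never runs, then calling `is_server_rendered` on the response body.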
Schema.org JSON-LD
19 / 25
Partial — 76%
Whether pages carry parseable JSON-LD blocks (Organization, WebSite, Article, FAQPage, SoftwareApplication, BreadcrumbList…). Structured data is the canonical way to tell both search engines and LLMs what a page IS, not just what it says.
Some schema blocks are present but coverage is uneven. Audit each template (article, FAQ, product, organization) and make sure each emits its canonical type.
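A minimal Organization block, as it would appear in a page template, looks like the sketch below. Every value here is a placeholder; substitute the site's real name, URLs, and profiles.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": ["https://www.youtube.com/examplecorp"]
}
</script>
```

Each template type gets its own block: Article pages emit Article, FAQ pages emit FAQPage, and so on; a page can carry more than one block.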
AI-aware meta directives
6 / 10
Partial — 60%
Whether pages set the specific meta tags (noai, noimageai, robots, googlebot) that modern AI pipelines honour — and whether those directives are internally consistent with the robots.txt policy.
Meta directives are partially set. Audit the <head> across templates and ensure each public page carries <meta name="robots" content="index,follow"> at minimum.
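A baseline head section looks like the sketch below. The noai and noimageai values are an emerging convention that not every pipeline honours; include them only if the intent is to restrict AI reuse, and make sure they agree with the robots.txt policy.

```html
<!-- Baseline for every public page -->
<meta name="robots" content="index,follow">

<!-- Optional AI-specific opt-out (emerging convention, not universal) -->
<meta name="robots" content="noai, noimageai">
```

The failure mode the scanner penalises is inconsistency: a robots.txt that allows GPTBot while the page-level meta says noai sends a mixed signal.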
Robots.txt AI directives
12 / 30
Weak — 40%
Whether the site publishes an explicit allow/disallow list for the major AI crawlers (GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot, and friends) so training and live-retrieval agents know where they stand.
There is no explicit AI-bot section, or robots.txt is missing entirely. Add a User-agent: GPTBot block (Allow or Disallow + the paths you want covered) and repeat for ClaudeBot, Google-Extended, PerplexityBot and CCBot. 10 lines, permanent win.
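An explicit AI-bot section can look like the sketch below. The Allow/Disallow choices shown are placeholders illustrating one possible stance, not a recommendation; each site decides per crawler whether to permit training and retrieval access.

```txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Disallow: /
```

Crawlers match the most specific User-agent block that names them, so these stanzas override any generic `User-agent: *` rules already in the file.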
llms.txt manifest
0 / 20
Weak — 0%
Whether the site ships an llms.txt file at the root — the emerging convention for handing an LLM a curated site map with headings, summaries, and direct links to the parts of the site a reader should actually see.
No llms.txt at the site root. Create /llms.txt (and optionally /llms-full.txt) following the spec at llmstxt.org. It's the cheapest signal to fix and the hardest for competitors to copy because it requires actually thinking about what the site is for.
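Per the llmstxt.org convention, the file is plain markdown: an H1 with the site name, a blockquote summary, then H2 sections of annotated links. The sketch below uses placeholder names and example.com URLs.

```markdown
# Example Site

> One-sentence summary of what the site is and who it serves.

## Products
- [Flagship product](https://example.com/product): what it does and who it is for

## Docs
- [Getting started](https://example.com/docs/start): first steps for new users
```

The optional /llms-full.txt follows the same shape but inlines the full content of the linked pages rather than just summaries.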
What's working
These are the categories where apple.com is already giving AI crawlers what they look for. Hold the line here; regressions in the working categories cost more than gains in the weak ones.
Content citability — 13/15 (86%)
Schema.org JSON-LD — 19/25 (76%)
What needs work
The biggest gains on this host live in the categories below. Each fix is independent — you do not need to finish one before starting the next — and each fix is a one-time piece of work that keeps paying out every time an AI crawler visits.
llms.txt manifest — 0/20 (0%). No llms.txt at the site root. Create /llms.txt (and optionally /llms-full.txt) following the spec at llmstxt.org. It's the cheapest signal to fix and the hardest for competitors to copy because it requires actually thinking about what the site is for.
Robots.txt AI directives — 12/30 (40%). There is no explicit AI-bot section, or robots.txt is missing entirely. Add a User-agent: GPTBot block (Allow or Disallow + the paths you want covered) and repeat for ClaudeBot, Google-Extended, PerplexityBot and CCBot. 10 lines, permanent win.
How we scan
ZeroKit fetches the target URL once per scan with a standard browser User-Agent, then makes a small number of additional requests for the conventional files an AI crawler would look for: /robots.txt, /llms.txt, /sitemap.xml, and a handful of well-known meta endpoints. The returned HTML is parsed for JSON-LD blocks, OpenGraph, Twitter Cards, and AI-specific meta directives. Nothing private is stored — only the derived score and category breakdown.
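The JSON-LD parsing step described above can be sketched with the Python standard library alone. This is an illustrative reading of the description, not ZeroKit's actual implementation; the sample page string is hypothetical.

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Pulls every <script type="application/ld+json"> payload
    out of an HTML document and parses it as JSON."""
    def __init__(self):
        super().__init__()
        self.in_block = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.in_block = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_block = False

    def handle_data(self, data):
        if self.in_block and data.strip():
            try:
                self.blocks.append(json.loads(data))
            except json.JSONDecodeError:
                pass  # malformed block: treated as absent

def extract_jsonld(html: str) -> list:
    parser = JSONLDExtractor()
    parser.feed(html)
    return parser.blocks

page = ('<html><head><script type="application/ld+json">'
        '{"@type": "Organization", "name": "Example"}'
        '</script></head></html>')
```

Scoring then reduces to checking which canonical @type values appear across the site's templates.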
The exact rubric and every signal we check is documented on the AI Readiness Checker page. This analysis is regenerated daily when fresh scan data lands in the database.
Analysis auto-generated from the most recent scan data for apple.com. The narrative text is templated from the score distribution and does not imply manual review. For a human-reviewed audit, buy the full report above.