[ DATA POST — April 11, 2026 ]

We scanned 10 Show HN launches today. None scored above C.

Today I fetched the top stories from the Show HN list via the Hacker News Firebase API, pulled the first 10 with a working URL, and ran every one through our AI Readiness Checker. Here is the raw data and what I think it means for builders who care about being citable by AI answer engines.

- 10 launches scanned
- 38.5 average score
- 68 top score
- 8 bottom score
- 0 scored B or above
- 6 scored below D
| # | Host | Score | Grade | RT | LT | SD | CC | AI |
|---|------|-------|-------|----|----|----|----|----|
| 1 | bunqueue.dev | 68 | C | 12 | 15 | 22 | 13 | 6 |
| 2 | github.com | 65 | C | 8 | 19 | 21 | 13 | 4 |
| 3 | codeberg.org | 48 | D | 20 | 0 | 9 | 15 | 4 |
| 4 | cssstudio.ai | 46 | D | 17 | 9 | 12 | 4 | 4 |
| 5 | eve.new | 38 | F | 17 | 0 | 12 | 7 | 2 |
| 6 | fluidcad.io | 37 | F | 5 | 0 | 18 | 8 | 6 |
| 7 | hormuz-havoc.com | 31 | F | 12 | 0 | 9 | 4 | 6 |
| 8 | vibej.am | 24 | F | 0 | 0 | 10 | 10 | 4 |
| 9 | kampfinsel.com | 20 | F | 0 | 0 | 9 | 7 | 4 |
| 10 | mooncraft2000.com | 8 | F | 0 | 0 | 2 | 2 | 4 |

Category keys: RT = robots.txt AI bots (out of 30), LT = llms.txt (out of 20), SD = Schema.org structured data, CC = content citability, AI = AI-aware meta directives.

What the data shows

Every single site on this list scored below B. The highest was bunqueue.dev at 68/100 (C), the lowest was mooncraft2000.com at 8/100 (F). The average across the 10 launches was 38.5/100, which is roughly in line with what we see on our public stats dashboard for all scanned hosts.
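As a sanity check, the headline numbers can be recomputed directly from the table rows above:

```python
# Scores and grades copied from the table above, one entry per launch.
scores = [68, 65, 48, 46, 38, 37, 31, 24, 20, 8]
grades = ["C", "C", "D", "D", "F", "F", "F", "F", "F", "F"]

average = sum(scores) / len(scores)
print(average)                                # 38.5
print(max(scores), min(scores))               # 68 8
print(sum(g in ("A", "B") for g in grades))   # 0 sites at B or above
print(grades.count("F"))                      # 6 sites below D
```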

The interesting part is not the average but the shape of where the points are getting lost. Structured-data points mostly came for free: github.com and codeberg.org inherit rich Article and BreadcrumbList schema from their host platforms, not from anything the builders shipped. Meanwhile seven of the ten sites scored zero for llms_txt, and none of the three that scored any points there did it on purpose. Nobody launched their Show HN with an intentional /llms.txt.

Why Show HN launches are systematically below average

Builders launching on HN are usually focused on a single goal: get the story on the front page. Everything downstream of that — AI readiness, long-term SEO, documentation for retrieval models — is something to do "after the launch", if there's a launch worth writing for. That's a reasonable priority, but it creates a visible gap in the data: the exact group that would most benefit from being citable (indie builders with no marketing budget) ships the weakest AI-readiness signals.

Put differently: if your Show HN trends, the HN thread itself becomes the single biggest AI citation source for your product for months. ChatGPT, Perplexity, and Google AI Overviews will quote the thread, not your landing page, because your landing page has no structured data, no llms.txt, and a JavaScript-rendered hero that a simple crawler can't read. You become the footnote on your own product.

The three fixes that would move every site on this list up a grade

1. Ship a /robots.txt with an explicit AI-bot section. Our rubric scores this category heaviest (30 of 100 points). A 15-line robots.txt that lists GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot, and Applebot-Extended with explicit Allow or Disallow rules scores 27-30 points immediately. No site in the table above has one; the best robots.txt score up there is 20/30, and most shipped a default Rails or Next.js robots.txt with no AI section at all.
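As a sketch, a robots.txt of that shape could look like the following. The bot names come from the list above; the Allow/Disallow choices are illustrative, not a recommendation:

```text
# AI crawlers: explicit policy per bot
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Disallow: /

User-agent: Applebot-Extended
Allow: /

# Everyone else
User-agent: *
Allow: /
```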

2. Create a /llms.txt. 10 lines of markdown: H1 with the product name, a one-sentence summary blockquote, H2 sections grouping your docs, features, and changelog. Our generator produces a starter from any URL. The validator scores a live file 0-20 against the spec. The two tools together take three minutes on a Show HN launch day and would have moved every site in the table above by at least 15 points.
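A minimal /llms.txt along those lines might look like this. The product name, paths, and descriptions are placeholders, not taken from any site in the table:

```markdown
# ExampleProduct

> One-sentence summary of what ExampleProduct does and who it is for.

## Docs

- [Quickstart](https://example.com/docs/quickstart): install and first run
- [API reference](https://example.com/docs/api): endpoints and auth

## Features

- [Feature overview](https://example.com/features): what ships today

## Changelog

- [Releases](https://example.com/changelog): version history
```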

3. Add an Organization and a WebSite Schema.org block on the homepage. About 30 lines of JSON in a <script type="application/ld+json"> tag: name, url, logo, and a sameAs array pointing at your GitHub, Crunchbase, and Twitter. Our Schema Inspector flags when either block is missing. This is the difference between a retrieval model seeing your product as "a blob of HTML" and seeing it as "a verifiable entity with four external references".
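Sketched out, both blocks can live in a single script tag on the homepage. All names and URLs here are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "name": "ExampleProduct",
      "url": "https://example.com",
      "logo": "https://example.com/logo.png",
      "sameAs": [
        "https://github.com/example",
        "https://www.crunchbase.com/organization/example",
        "https://twitter.com/example"
      ]
    },
    {
      "@type": "WebSite",
      "name": "ExampleProduct",
      "url": "https://example.com"
    }
  ]
}
</script>
```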

Want the full breakdown for your site?

Paste your URL into the AI Readiness Checker for a free score and category breakdown. Or use the Batch Scanner to compare yourself against the Show HN sites above in one call. If you want a print-ready audit with every signal, specific recommendations, and a prioritized fix list, the $19 Website Audit Report adds six more categories on top of the five shown here.

Methodology

The ten URLs were fetched from https://hacker-news.firebaseio.com/v0/showstories.json on April 11, 2026. I filtered to stories with type="story" and a non-null url field, deduplicated by hostname (keeping the highest HN score), and ran the top 10 through POST /api/batch on zerokit.dev. Each scan runs the full five-category rubric — robots.txt AI bots, llms.txt structure, Schema.org JSON-LD, content citability, and AI-aware meta directives. The exact scoring is open source and is the same rubric used by our live leaderboard.

No sites were harmed. All fetches were from a single residential IP using a normal browser User-Agent. Every scan is also now in our public analysis directory, so every founder on this list can see their own breakdown with specific fixes.

Scores reflect scans run at publication time (2026-04-11). Re-scanning a host via our checker will update the analysis directory automatically and may differ from the numbers above as sites change their signals.