robots.txt for Next.js: Static File vs app/robots.ts
Next.js gives you two ways to serve a robots.txt: a plain static file in public/ or a dynamic route at app/robots.ts. Both work, but they solve different problems, and most tutorials show only one of them. Here is when to use which, plus copy-paste templates that block AI training crawlers without hurting search rankings.
The Two Options in One Sentence Each
- public/robots.txt — A plain text file served as-is from the public directory. Simple, fast, identical on every deployment. No build step.
- app/robots.ts — A TypeScript file that exports a MetadataRoute.Robots object. Next.js renders it into a robots.txt response at build or request time. Lets you branch on environment.
Rule of thumb: if your robots.txt never changes, use the static file. If you need different rules for preview deployments, use app/robots.ts.
Option 1: Static public/robots.txt
Create public/robots.txt at the root of your Next.js project. Anything in public/ is served directly at the corresponding URL, so this file becomes https://yoursite.com/robots.txt automatically. No config, no build step.
Minimal static template
User-agent: *
Allow: /
Sitemap: https://yoursite.com/sitemap.xml
Static template with AI crawlers blocked
# Search engines (allowed)
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# AI training crawlers (blocked)
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Claude-Web
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: FacebookBot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

# Default: allow everything else
User-agent: *
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
Problem with the static approach on Vercel: This file is served identically on production AND on every preview deployment URL like my-project-abc123.vercel.app. That means Google can crawl and index your preview deployments, which usually leads to duplicate-content issues. If that matters to you, use Option 2 below.
Option 2: Dynamic app/robots.ts
Create app/robots.ts (TypeScript) or app/robots.js (JavaScript) at the root of your App Router. Next.js 13+ treats this as a special metadata file and automatically generates a robots.txt response at /robots.txt.
Basic app/robots.ts
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: '*',
        allow: '/',
      },
    ],
    sitemap: 'https://yoursite.com/sitemap.xml',
  }
}
app/robots.ts with AI crawlers blocked
import type { MetadataRoute } from 'next'

const AI_CRAWLERS = [
  'GPTBot',
  'ChatGPT-User',
  'ClaudeBot',
  'Claude-Web',
  'Google-Extended',
  'PerplexityBot',
  'Bytespider',
  'CCBot',
  'FacebookBot',
  'Applebot-Extended',
]

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: ['Googlebot', 'Bingbot'],
        allow: '/',
      },
      {
        userAgent: AI_CRAWLERS,
        disallow: '/',
      },
      {
        userAgent: '*',
        allow: '/',
      },
    ],
    sitemap: 'https://yoursite.com/sitemap.xml',
  }
}
Environment-aware: block preview deployments on Vercel
This is the pattern that actually justifies app/robots.ts. On Vercel, the VERCEL_ENV environment variable tells you whether the deployment is production, preview, or development. Use it.
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  const isProduction = process.env.VERCEL_ENV === 'production'

  if (!isProduction) {
    return {
      rules: [
        {
          userAgent: '*',
          disallow: '/',
        },
      ],
    }
  }

  return {
    rules: [
      {
        userAgent: '*',
        allow: '/',
      },
      {
        userAgent: ['GPTBot', 'ClaudeBot', 'Google-Extended', 'Bytespider', 'PerplexityBot'],
        disallow: '/',
      },
    ],
    sitemap: 'https://yoursite.com/sitemap.xml',
  }
}
Now preview deployments return Disallow: / for everyone, keeping staging sites out of search engines. Production returns your real rules. This is the correct default for any Next.js project on Vercel.
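If you want to unit-test the environment branch without deploying, extract the check into a tiny predicate. This is a sketch: the helper name is ours, but VERCEL_ENV and its three values are the real Vercel variable.

```typescript
// VERCEL_ENV is set by Vercel to 'production', 'preview', or 'development'.
// Accepting the env object as a parameter makes the check testable locally.
export function isProductionDeployment(
  env: Record<string, string | undefined> = process.env,
): boolean {
  return env.VERCEL_ENV === 'production'
}
```

Your robots() function then branches on isProductionDeployment() instead of reading process.env inline, and a plain unit test can cover all three environments.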
Check if your Next.js site is AI-ready
The AI Readiness Checker scans your deployed site for robots.txt rules, llms.txt, structured data, and more.
Run AI Readiness Check
Things That Break
Having both public/robots.txt and app/robots.ts
If both exist, app/robots.ts wins. The static file in public/ is silently ignored. This is a common source of confusion when someone adds app/robots.ts but forgets to delete the old static file. Pick one approach and stick with it.
Disallowing the wrong path
Next.js App Router has system paths like /_next/ and /api/. You almost never want to Disallow these — Googlebot fetches /_next/ resources to render your pages correctly, and blocking it can hurt rendering and Core Web Vitals scores. Leave them allowed unless you know exactly what you are doing.
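If you do want crawlers out of your API routes, the safe version blocks only /api/ and leaves /_next/ alone. A sketch of the static-file form (adapt the paths to your project):

```
User-agent: *
Allow: /
Disallow: /api/
```

Googlebot resolves Allow/Disallow conflicts by the most specific matching path, so /api/ stays blocked while everything else, including /_next/ assets needed for rendering, remains crawlable.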
Forgetting the sitemap
If you generate your sitemap dynamically with app/sitemap.ts, make sure your robots.txt points to the correct URL. For most Next.js sites on a custom domain, that is https://yoursite.com/sitemap.xml.
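For reference, a minimal app/sitemap.ts looks like this. This is a sketch with placeholder URLs; in a real project you would annotate the return type as MetadataRoute.Sitemap from 'next', omitted here to keep the snippet self-contained.

```typescript
// Minimal app/sitemap.ts sketch. Next.js serves this at /sitemap.xml,
// which is the URL your robots.txt Sitemap line should point to.
export default function sitemap() {
  return [
    {
      url: 'https://yoursite.com', // must match the domain in robots.txt
      lastModified: new Date(),
    },
    {
      url: 'https://yoursite.com/blog', // placeholder route
      lastModified: new Date(),
    },
  ]
}
```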
Using the Pages Router
The app/robots.ts file only works with the App Router. If you are still on the Pages Router, stick with public/robots.txt, or build a custom API route and rewrite /robots.txt to it in next.config.js. That is more work than it is worth for most projects.
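For Pages Router projects that do need dynamic rules anyway, the approach above looks roughly like this. The file and function names are ours; the pattern is a pure helper that builds the robots.txt body, an API route that serves it, and a rewrite in next.config.js.

```typescript
// Hypothetical helper for a Pages Router fallback. Keeping the body-building
// logic in a pure function makes it testable without running a server.
export function buildRobotsTxt(isProduction: boolean): string {
  if (!isProduction) {
    // Preview/staging: block everything.
    return 'User-agent: *\nDisallow: /\n'
  }
  return [
    'User-agent: *',
    'Allow: /',
    '',
    'User-agent: GPTBot',
    'Disallow: /',
    '',
    'Sitemap: https://yoursite.com/sitemap.xml',
  ].join('\n')
}

// pages/api/robots.ts would then be:
//   export default function handler(req, res) {
//     res.setHeader('Content-Type', 'text/plain')
//     res.send(buildRobotsTxt(process.env.VERCEL_ENV === 'production'))
//   }
//
// and next.config.js maps the canonical path onto it:
//   module.exports = {
//     async rewrites() {
//       return [{ source: '/robots.txt', destination: '/api/robots' }]
//     },
//   }
```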
Verify Your Next.js robots.txt
- Deploy to production.
- Open https://yoursite.com/robots.txt in a browser. You should see your rules.
- Run the AI Readiness Checker against your production URL. It will parse your robots.txt and tell you exactly which AI bots are blocked.
- If you set up environment-aware rules, open a preview deployment URL with /robots.txt appended. Confirm it returns Disallow: /.
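If you want to script these checks, a tiny scan can list which user-agents a robots.txt fully blocks. This is a simplified sketch, not a spec-complete robots.txt parser, and the function name is ours.

```typescript
// Collect user-agents whose group contains a bare "Disallow: /".
// Good enough for a sanity check; not a full robots.txt implementation.
export function fullyBlockedAgents(robotsTxt: string): string[] {
  const blocked: string[] = []
  let group: string[] = []
  let inDirectives = false // true once the current group's rules have started
  for (const raw of robotsTxt.split('\n')) {
    const line = raw.trim()
    if (/^user-agent:/i.test(line)) {
      if (inDirectives) {
        // A User-agent line after directives starts a new group.
        group = []
        inDirectives = false
      }
      group.push(line.slice(line.indexOf(':') + 1).trim())
    } else if (/^disallow:\s*\/$/i.test(line)) {
      blocked.push(...group)
      inDirectives = true
    } else if (line !== '') {
      inDirectives = true // any other directive (Allow, Sitemap, ...)
    }
  }
  return [...new Set(blocked)]
}
```

Feed it the body of your production and preview /robots.txt responses and compare the lists against what you intended to block.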
TL;DR decision tree: Need different rules on preview? Use app/robots.ts. Otherwise use public/robots.txt. Pick one.
Generate your robots.txt visually
Paste the output into public/robots.txt or translate it to the app/robots.ts format.
Open Robots.txt Generator
Frequently Asked Questions
Should I use public/robots.txt or app/robots.ts in Next.js?
Use the static public/robots.txt if your rules never change. Use app/robots.ts when you need logic: different rules for preview versus production deployments, rules pulled from a database or config, or conditional blocks based on environment variables. Both are valid, and Next.js picks whichever exists. If both exist, app/robots.ts wins.
Why is my Next.js robots.txt showing the wrong content on Vercel preview deployments?
Because a static public/robots.txt is the same file across all environments, including preview deployments. The fix is app/robots.ts with a check on the deployment URL or VERCEL_ENV environment variable: on preview, return Disallow: / to keep staging sites out of search results, and on production, return your real rules.
Does app/robots.ts only work with the App Router?
Yes. The dynamic robots.ts file is an App Router feature, introduced in Next.js 13. If you are still on the Pages Router, you either use a static public/robots.txt file or build a custom API route at pages/api/robots.ts and rewrite /robots.txt to it via next.config.js.