enabled: boolean (default: true)
Conditionally toggle the module.
allow: string[] (default: [])
Allow paths to be indexed for the * user-agent (all robots).
disallow: string[] (default: [])
Disallow paths from being indexed for the * user-agent (all robots).
metaTag: boolean (default: true)
Whether to add a <meta name="robots" ...> tag to the <head> of each page.
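A minimal example combining these options (a sketch; the paths below are placeholders, not defaults):
export default defineNuxtConfig({
  robots: {
    // Placeholder paths: block a private area for all robots,
    // but keep its login page crawlable.
    disallow: ['/private'],
    allow: ['/private/login'],
    // Keep the <meta name="robots"> tag output (the default).
    metaTag: true
  }
})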
groups: RobotsGroupInput[] (default: [])
Define more granular rules for the robots.txt. Each group is a set of rules for specific user agent(s).
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        userAgent: ['AdsBot-Google-Mobile', 'AdsBot-Google-Mobile-Apps'],
        disallow: ['/admin'],
        allow: ['/admin/login'],
        contentUsage: { 'bots': 'y', 'train-ai': 'n' },
        contentSignal: { 'ai-train': 'no', 'search': 'yes' },
        comments: 'Allow Google AdsBot to index the login page but not admin pages'
      },
    ]
  }
})
Each group object supports the following properties:
userAgent?: string | string[] - The user agent(s) to apply rules to. Defaults to ['*']
disallow?: string | string[] - Paths to disallow for the user agent(s)
allow?: string | string[] - Paths to allow for the user agent(s)
contentUsage?: string | string[] | Partial<ContentUsagePreferences> - IETF Content-Usage directives for AI preferences. Valid categories: bots, train-ai, ai-output, search. Values: y/n. Use the object format for type safety (see the AI Directives guide)
contentSignal?: string | string[] | Partial<ContentSignalPreferences> - Cloudflare Content-Signal directives for AI preferences. Valid categories: search, ai-input, ai-train. Values: yes/no. Use the object format for type safety (see the AI Directives guide)
comment?: string | string[] - Comments to include in the robots.txt file
sitemap: MaybeArray<string> (default: [])
The sitemap URL(s) for the site. If you have multiple sitemaps, you can provide an array of URLs.
You must either define the runtime config siteUrl or provide the sitemap as absolute URLs.
export default defineNuxtConfig({
  robots: {
    sitemap: [
      '/sitemap-one.xml',
      '/sitemap-two.xml',
    ],
  },
})
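Alternatively, if you don't configure the site URL, absolute sitemap URLs work as well. A sketch, assuming https://example.com is your production domain:
export default defineNuxtConfig({
  robots: {
    // Absolute URL, so no siteUrl runtime config is required.
    sitemap: 'https://example.com/sitemap.xml',
  },
})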
robotsEnabledValue: string (default: 'index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1')
The value to use when the page is indexable.
robotsDisabledValue: string (default: 'noindex, nofollow')
The value to use when the page is not indexable.
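Both values accept standard robots meta directives. A sketch with example overrides (the directive strings here are illustrative, not recommendations):
export default defineNuxtConfig({
  robots: {
    // Example directives only; tailor these to your indexing policy.
    robotsEnabledValue: 'index, follow, max-snippet:-1',
    robotsDisabledValue: 'noindex, nofollow, noarchive'
  }
})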
mergeWithRobotsTxtPath: boolean | string (default: true)
Specify a robots.txt path to merge the config from, relative to the root directory.
When set to true, the default path of <publicDir>/robots.txt will be used.
When set to false, no merging will occur.
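For example, to merge rules from a file kept outside the public directory (the path below is hypothetical):
export default defineNuxtConfig({
  robots: {
    // Hypothetical path, resolved relative to the root directory.
    mergeWithRobotsTxtPath: 'config/robots.txt'
  }
})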
blockNonSeoBots: boolean (default: false)
Blocks some non-SEO bots from crawling your site. This is not a replacement for a full-blown bot management solution, but it can help to reduce the load on your server.
See const.ts for the list of bots that are blocked.
export default defineNuxtConfig({
  robots: {
    blockNonSeoBots: true
  }
})
robotsTxt: boolean (default: true)
Whether to generate a robots.txt file. Useful for disabling when using a base URL.
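For instance, when the app is served under a base URL or robots.txt is managed outside Nuxt, you can turn generation off:
export default defineNuxtConfig({
  robots: {
    robotsTxt: false
  }
})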
cacheControl: string | false (default: 'max-age=14400, must-revalidate')
Configure the Cache-Control header for the robots.txt file. By default it's cached for 4 hours and must be revalidated.
Providing false will set the header to 'no-store'.
export default defineNuxtConfig({
  robots: {
    cacheControl: 'max-age=14400, must-revalidate'
  }
})
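To disable caching instead (the header is then set to 'no-store'):
export default defineNuxtConfig({
  robots: {
    cacheControl: false
  }
})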
disableNuxtContentIntegration: boolean (default: undefined)
Whether to disable the Nuxt Content integration.
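For example, to opt out of the integration explicitly:
export default defineNuxtConfig({
  robots: {
    disableNuxtContentIntegration: true
  }
})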
debug: boolean (default: false)
Enables debug logs and a debug endpoint.
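Debug output is usually only wanted locally; one approach (a sketch, not a requirement) is to tie it to the environment:
export default defineNuxtConfig({
  robots: {
    // Enable debug logs and the debug endpoint outside production only.
    debug: process.env.NODE_ENV !== 'production'
  }
})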
credits: boolean (default: true)
Control the module credit comment in the generated robots.txt file.
# START nuxt-robots (indexable) <- credits
# ...
# END nuxt-robots <- credits
export default defineNuxtConfig({
  robots: {
    credits: false
  }
})
disallowNonIndexableRoutes: boolean (default: false)
⚠️ Deprecated: Explicitly disallow routes in the /robots.txt file if you don't want them to be accessible.
Whether route rules which disallow indexing should be added to the /robots.txt file.