# Nuxt Config

## `enabled`

- Type: `boolean`
- Default: `true`
- Required: `false`

Conditionally toggle the module.
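For example, a minimal sketch that only enables the module for production builds (the environment check is illustrative):

```ts
export default defineNuxtConfig({
  robots: {
    // only generate robots rules in production builds
    enabled: process.env.NODE_ENV === 'production'
  }
})
```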
## `allow`

- Type: `string[]`
- Default: `[]`
- Required: `false`

Allow paths to be indexed for the `*` user-agent (all robots). A combined example with `disallow` follows below.
## `disallow`

- Type: `string[]`
- Default: `[]`
- Required: `false`

Disallow paths from being indexed for the `*` user-agent (all robots).
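A minimal sketch combining both options (the paths are illustrative):

```ts
export default defineNuxtConfig({
  robots: {
    // these rules apply to the `*` user-agent (all robots)
    allow: ['/blog'],
    disallow: ['/admin', '/internal']
  }
})
```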
## `metaTag`

- Type: `boolean`
- Default: `true`

Whether to add a `<meta name="robots" ...>` tag to the `<head>` of each page.
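For example, to opt out of the meta tag and rely on robots.txt alone (a sketch):

```ts
export default defineNuxtConfig({
  robots: {
    // skip injecting <meta name="robots" ...> into each page's <head>
    metaTag: false
  }
})
```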
## `groups`

- Type: `{ userAgent: []; allow: []; disallow: []; comments: [] }[]`
- Default: `[]`
- Required: `false`

Define more granular rules for the robots.txt. Each group is a set of rules for specific user agent(s).
```ts
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        userAgent: ['AdsBot-Google-Mobile', 'AdsBot-Google-Mobile-Apps'],
        disallow: ['/admin'],
        allow: ['/admin/login'],
        comments: 'Allow Google AdsBot to index the login page but not the admin pages'
      },
    ]
  }
})
```
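With the group above, the generated robots.txt should contain output along these lines (illustrative; exact formatting and ordering may differ):

```txt
# Allow Google AdsBot to index the login page but not the admin pages
User-agent: AdsBot-Google-Mobile
User-agent: AdsBot-Google-Mobile-Apps
Allow: /admin/login
Disallow: /admin
```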
## `sitemap`

- Type: `string | string[] | false`
- Default: `false`

The sitemap URL(s) for the site. If you have multiple sitemaps, you can provide an array of URLs.

You must either define the runtime config `siteUrl` or provide the sitemap as absolute URLs.
```ts
export default defineNuxtConfig({
  robots: {
    sitemap: [
      '/sitemap-one.xml',
      '/sitemap-two.xml',
    ],
  },
})
```
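Since the example above uses relative paths, the site URL must be resolvable. A sketch, assuming `siteUrl` lives under `runtimeConfig.public` as is conventional:

```ts
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // used to resolve relative sitemap paths to absolute URLs
      siteUrl: 'https://example.com',
    },
  },
})
```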
## `robotsEnabledValue`

- Type: `string`
- Default: `'index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1'`
- Required: `false`

The value to use when the site is indexable.
## `robotsDisabledValue`

- Type: `string`
- Default: `'noindex, nofollow'`
- Required: `false`

The value to use when the site is not indexable.
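A sketch overriding both values, for example to drop the preview directives from the indexable default:

```ts
export default defineNuxtConfig({
  robots: {
    // used when the site is indexable
    robotsEnabledValue: 'index, follow',
    // used when the site is not indexable (e.g. a staging environment)
    robotsDisabledValue: 'noindex, nofollow'
  }
})
```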
## `disallowNonIndexableRoutes`

- Type: `boolean`
- Default: `false`

Whether route rules that disallow indexing should be added to the `/robots.txt` file.
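A sketch enabling the option:

```ts
export default defineNuxtConfig({
  robots: {
    // also write Disallow entries for routes that route rules mark as non-indexable
    disallowNonIndexableRoutes: true
  }
})
```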
## `mergeWithRobotsTxtPath`

- Type: `boolean | string`
- Default: `true`
- Required: `false`

Specify a robots.txt path to merge the config from, relative to the root directory.

When set to `true`, the default path of `<publicDir>/robots.txt` will be used. When set to `false`, no merging will occur.
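For example, to merge from a custom file instead of the default (the path is illustrative):

```ts
export default defineNuxtConfig({
  robots: {
    // merge rules from this file instead of <publicDir>/robots.txt
    mergeWithRobotsTxtPath: 'config/robots.txt'
  }
})
```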
## `blockNonSeoBots`

- Type: `boolean`
- Default: `false`
- Required: `false`

Blocks some non-SEO bots from crawling your site. This is not a replacement for a full-blown bot management solution, but it can help reduce the load on your server.

See `const.ts` for the list of bots that are blocked.
```ts
export default defineNuxtConfig({
  robots: {
    blockNonSeoBots: true
  }
})
```
## `debug`

- Type: `boolean`
- Default: `false`
- Required: `false`

Enables debug logs and a debug endpoint.
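A sketch enabling debug mode (best kept out of production):

```ts
export default defineNuxtConfig({
  robots: {
    // turn on debug logs and the debug endpoint
    debug: true
  }
})
```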
## `credits`

- Type: `boolean`
- Default: `true`
- Required: `false`

Control the module credit comment in the generated robots.txt file.

```txt
# START nuxt-robots (indexable) <- credits
...
# END nuxt-robots <- credits
```
```ts
export default defineNuxtConfig({
  robots: {
    credits: false
  }
})
```