# What Is Nuxt SEO?
## Background
Technical SEO is tricky and boring. It requires many moving parts that need to work well together. Configuring all of these parts correctly is a challenge.
## Nuxt SEO
Nuxt SEO is the collective name for a suite of modules focused on improving your technical SEO.
::all-modules
::
Nuxt SEO is also itself an alias module.
## Nuxt SEO Module: @nuxtjs/seo
::module-card{className="w-1/2" slug="nuxt-seo"}
::
The Nuxt SEO module is simply an alias that combines all the other SEO modules into a single installation.
```ts
// This is all it does!
export default defineNuxtModule({
  async setup() {
    for (const module of modules) {
      await installModule(await resolvePath(module), {})
    }
  },
})
```
When using Nuxt SEO you are free to use this alias or install the modules individually. The choice is yours as
all sites are different and have different requirements.
## Site Config
To ensure all modules work well together, Nuxt SEO includes a module called Site Config. It is used internally by all of the modules to keep them configured consistently.
You do not need to install this module manually; it is installed automatically when you install any of the SEO modules.
::module-card{className="w-1/2" slug="site-config"}
::
# Install Nuxt SEO
## Install Nuxt SEO Alias
The `@nuxtjs/seo` module is an alias module that combines all the other SEO modules into a single installation.
::module-install{name="@nuxtjs/seo"}
::
### Manual Installation
If you'd prefer more control over which modules you install, you can install them separately; please see the
individual module pages for installation instructions.
::all-modules
::
## Next Steps
All modules are now installed and configured!
See the [Using the Modules](https://nuxtseo.com/docs/nuxt-seo/guides/using-the-modules) guide to learn how to use them and make sure
to check out the [SEO Go Live Checklist](https://nuxtseo.com/learn/going-live) once you're ready to ship.
### Troubleshooting
If you run into any issues, check out the [Troubleshooting](https://nuxtseo.com/docs/nuxt-seo/getting-started/troubleshooting) guide. Below
are the StackBlitz playgrounds for Nuxt SEO:
- [Basic](https://stackblitz.com/edit/nuxt-starter-gfrej6?file=nuxt.config.ts){rel="nofollow"}
- [I18n](https://stackblitz.com/edit/nuxt-starter-dh68fjqb?file=nuxt.config.ts){rel="nofollow"}
- [Nuxt Content](https://stackblitz.com/edit/nuxt-starter-xlkqkcqr?file=nuxt.config.ts){rel="nofollow"}
# Troubleshooting
## StackBlitz Playgrounds
You can use the Nuxt SEO StackBlitz playgrounds for either:
- Playing around with the module in a sandbox environment
- Making reproductions for issues (Learn more about [Why Reproductions are Required](https://antfu.me/posts/why-reproductions-are-required){rel="nofollow"})
Reproductions:
- [Basic](https://stackblitz.com/edit/nuxt-starter-gfrej6?file=nuxt.config.ts){rel="nofollow"}
- [I18n](https://stackblitz.com/edit/nuxt-starter-dh68fjqb?file=nuxt.config.ts){rel="nofollow"}
- [Nuxt Content](https://stackblitz.com/edit/nuxt-starter-xlkqkcqr?file=nuxt.config.ts){rel="nofollow"}
Have a question about Nuxt SEO? Check out the frequently asked questions below or
[Jump in the Discord](https://discord.com/invite/5jDAMswWwX){rel="nofollow"} and ask me directly!
## Troubleshooting FAQ
### Can I just use the modules separately?
Yes! Nuxt SEO is designed to be flexible and work however you need it to.
### Why does my production build size increase so much?
Nuxt SEO includes many features that only run on the server. These server-side features can increase the size of your
production build by a few megabytes, but won't affect the performance of your site as the modules are lazy loaded.
If the production build size is a concern, you can [disable the modules](https://nuxtseo.com/docs/nuxt-seo/guides/using-the-modules) you don't need.
If you are using Nuxt SEO in a serverless environment, you may want to keep your workers under 1 MB. The module that
will contribute the most to your worker size is `nuxt-og-image`.
If you are not using `ogImage`, you can disable it; otherwise, consider using [Zero Runtime](https://nuxtseo.com/docs/og-image/guides/zero-runtime) mode.
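For example, disabling the OG Image module is a one-line config change (the `enabled` key is covered in [Disabling Modules](https://nuxtseo.com/docs/nuxt-seo/guides/disabling-modules)):
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  // disable nuxt-og-image to keep serverless bundles small
  ogImage: {
    enabled: false
  }
})
```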
### What happened to Nuxt SEO Kit?
The Nuxt SEO Kit module was the initial version of Nuxt SEO.
While it generally worked great for some users, it was only useful for prerendered (static) Nuxt sites, and in turn its feature
set was much more limited.
It has been deprecated in favour of the new Nuxt SEO module.
See the [migration guide](https://nuxtseo.com/docs/nuxt-seo/migration-guide/nuxt-seo-kit) for more information.
# Community Videos
::you-tube-video
---
channel-bg: https://yt3.ggpht.com/kMmiPu2Sc-sFlMzNuCtbVxoJuJBqk_vQBfsd-45K9ACZ0wBVFykFHNt_THdTtFYCPtveobp6=s88-c-k-c0x00ffffff-no-rj
title: Nuxt 3 SEO (intro to Nuxt SEO)
video-id: OyVI8zmDqWU
---
::
::you-tube-video
---
channel-bg: https://yt3.ggpht.com/L1G1b-oVwe3QPwd-RQGPaohbsViP29PGMa2nyDfj10HH7BO6RhAY0Jmhp9tth6mRmuOgoE_7=s68-c-k-c0x00ffffff-no-rj
title: Easy SEO with Nuxt and Storyblok
video-id: CPZTMlarbKg
---
::
# Quick Module Setup Guide
## Introduction
Nuxt SEO is a collection of six modules. While most will just work out-of-the-box, some configuration may be needed
depending on your site's requirements.
This guide will give you a quick overview of each module and what you need to do to get started.
Check out the [Stackblitz Demo](https://stackblitz.com/edit/nuxt-starter-gfrej6?file=nuxt.config.ts){rel="nofollow"} if you want to
see a working example.
## Sitemap
::module-card{className="w-1/2" slug="sitemap"}
::
Generates a [sitemap](https://developers.google.com/search/docs/crawling-indexing/sitemaps/overview){rel="nofollow"} at [/sitemap.xml](http://localhost:3000/sitemap.xml){rel="nofollow"}
based on your app [data sources](https://nuxtseo.com/docs/sitemap/guides/data-sources).
- When prerendering or using only static routes, no config is needed; the module will automatically generate a sitemap for you.
- If you have dynamic routes, you'll need to set up a handler for [Dynamic URLs](https://nuxtseo.com/docs/sitemap/guides/dynamic-urls); a minimal sketch follows.
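As a sketch, a dynamic URLs handler can be exposed as a server endpoint and registered as a sitemap source. The endpoint path and entries below are illustrative:
```ts [server/api/__sitemap__/urls.ts]
import { defineSitemapEventHandler } from '#imports'

// return extra sitemap entries, e.g. fetched from a CMS or database
export default defineSitemapEventHandler(async () => {
  return [
    { loc: '/blog/my-first-post', lastmod: '2025-01-01' },
  ]
})
```
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  sitemap: {
    sources: ['/api/__sitemap__/urls'],
  },
})
```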
### I18n Features
The sitemap module will automatically generate a multi-sitemap, with each locale having its own sitemap.
See [I18n Sitemap](https://nuxtseo.com/docs/sitemap/guides/i18n) for more information.
## Robots
::module-card{className="w-1/2" slug="robots"}
::
Generates a [robots.txt](https://developers.google.com/search/docs/crawling-indexing/robots/intro){rel="nofollow"} at [/robots.txt](http://localhost:3000/robots.txt){rel="nofollow"}.
Will append a `<meta name="robots">`{className="language-html shiki shiki-themes github-light github-light material-theme-palenight" lang="html"} tag and an `X-Robots-Tag` HTTP header.
- If you have any other environments besides development and production, you need to configure the `env` option. See the [Disabling Indexing](https://nuxtseo.com/docs/robots/guides/disable-indexing) guide for more information.
- By default, all routes are allowed for all user-agents. See [Disabling Page Indexing](https://nuxtseo.com/docs/robots/guides/disable-page-indexing) to start blocking routes.
### I18n Features
Any `Disallow` rules in the robots module will automatically have the locale prefixes added.
See [I18n Robots](https://nuxtseo.com/docs/robots/guides/i18n) for more information.
## OG Image
::module-card{className="w-1/2" slug="og-image"}
::
Generate dynamic Open Graph images for your pages.
- Opt-in: by default, it won't do anything unless you configure it.
- See the [Tutorial: Getting Familiar With Nuxt OG Image](https://nuxtseo.com/docs/og-image/getting-started/getting-familar-with-nuxt-og-image) docs on setting it up.
Note: If you don't intend to generate dynamic images, it's recommended to [disable this module](https://nuxtseo.com/docs/nuxt-seo/guides/disabling-modules).
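Once configured, defining an image for a page is a one-liner. A minimal sketch, assuming the built-in `NuxtSeo` community template; the values are illustrative:
```ts
// inside a page's <script setup>
defineOgImageComponent('NuxtSeo', {
  title: 'Hello World',
  description: 'An example OG image',
})
```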
## Schema.org
::module-card{className="w-1/2" slug="schema-org"}
::
Automatically generates schema.org JSON-LD for your pages.
- Provides [default Schema.org](https://nuxtseo.com/docs/schema-org/guides/default-schema-org) for your pages.
- It's recommended to [Setup Your Identity](https://nuxtseo.com/docs/schema-org/guides/setup-identity) for your site as well.
- You can opt in to more Schema.org using [useSchemaOrg](https://nuxtseo.com/docs/schema-org/guides/full-documentation); see the sketch below.
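A minimal sketch of opting in to an extra node; the values here are illustrative:
```ts
// inside a page's <script setup>
useSchemaOrg([
  defineArticle({
    headline: 'How to Use Our Product',
    datePublished: '2023-10-01',
  }),
])
```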
## Link Checker
::module-card{className="w-1/2" slug="link-checker"}
::
Checks all links for issues that may be affecting your SEO.
- Links are checked automatically when building your site.
- You can also run it manually from the "Link Checker" tab in Nuxt DevTools.
## SEO Utils
::module-card{className="w-1/2" slug="seo-utils"}
::
A few extra Nuxt SEO features that don't fit anywhere else.
- See the [SEO Utils Features](https://nuxtseo.com/docs/seo-utils/getting-started/features) guide for more information.
- Automatic File Metadata [Icons](https://nuxtseo.com/docs/seo-utils/guides/app-icons) and [Open Graph Images](https://nuxtseo.com/docs/seo-utils/guides/open-graph-images)
- Opt in [seoMeta](https://nuxtseo.com/docs/seo-utils/guides/nuxt-config-seo-meta) in your nuxt.config and route rules (sketched after this list)
- Automatic [default meta](https://nuxtseo.com/docs/seo-utils/guides/default-meta) for your site.
- Automatic [fallback title](https://nuxtseo.com/docs/seo-utils/guides/fallback-title) for your site.
- Opt-in [breadcrumbs](https://nuxtseo.com/docs/seo-utils/api/breadcrumbs) with Schema.org support
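A sketch of the opt-in `seoMeta` route rule; the route and values are illustrative, see the linked guide for the full options:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  routeRules: {
    // apply build-time meta to a group of routes
    '/blog/**': {
      seoMeta: { ogType: 'article' },
    },
  },
})
```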
## Shared Configuration
::module-card{className="w-1/2" slug="site-config"}
::
[Nuxt Site Config](https://nuxtseo.com/docs/site-config/getting-started/introduction) allows you to configure all Nuxt SEO modules at build time and at runtime, which is powerful
for apps that need per-request configuration, such as multi-tenant or i18n apps.
It's recommended to set the following config:
- `url` - The canonical URL of your site, avoids duplicate content and consolidates page rank.
- `name` - The name of your site, used in the title and meta tags.
- `description` - The description of your site, used in the meta tags.
- `defaultLocale` - The default locale of your site, used in the meta tags. (you can omit this if you're using `@nuxtjs/i18n`)
```ts [nuxt.config.ts] twoslash
export default defineNuxtConfig({
  site: {
    url: 'https://example.com',
    name: 'Awesome Site',
    description: 'Welcome to my awesome site!',
    defaultLocale: 'en', // not needed if you have @nuxtjs/i18n installed
  }
})
```
### I18n Features
You can dynamically set the site config based on the current locale.
This is useful for setting the `url` and `name` properties based on the page the user is currently on.
See [I18n Site Config](https://nuxtseo.com/docs/site-config/guides/i18n) for more information.
# Disabling Modules
Since Nuxt SEO installs and enables modules for you, you may run into a situation where you want to disable a module.
The modules have these config keys:
- `nuxt-og-image` - `ogImage`
- `@nuxtjs/sitemap` - `sitemap`
- `@nuxtjs/robots` - `robots`
- `nuxt-seo-utils` - `seo`
- `nuxt-schema-org` - `schemaOrg`
- `nuxt-link-checker` - `linkChecker`
You can disable any of these modules by setting the module's `enabled` value to `false` in your `nuxt.config.ts` file.
```ts [nuxt.config.ts] twoslash
export default defineNuxtConfig({
  ogImage: {
    enabled: false
  },
  sitemap: {
    enabled: false
  },
  robots: {
    enabled: false
  },
  seo: { // seo utils
    enabled: false
  },
  schemaOrg: {
    enabled: false
  },
  linkChecker: {
    enabled: false
  }
})
```
# Nuxt Content
## Introduction
Most Nuxt SEO modules integrate with Nuxt Content out of the box.
- Nuxt Robots: `robots` ([docs](https://nuxtseo.com/docs/robots/guides/content))
- Nuxt Sitemap: `sitemap` ([docs](https://nuxtseo.com/docs/sitemap/guides/content))
- Nuxt OG Image: `ogImage` ([docs](https://nuxtseo.com/docs/og-image/integrations/content))
- Nuxt Schema.org: `schemaOrg` ([docs](https://nuxtseo.com/docs/schema-org/guides/content))
- Nuxt Link Checker: Uses content APIs to check links
For Nuxt Content v3 you would normally need to configure each module's Nuxt Content integration individually; however, Nuxt SEO
provides a way to configure all modules at once.
For Nuxt Content v2, please see the individual module documentation for how to configure them.
## Setup Nuxt Content v3
In Nuxt Content v3 we need to use the `asSeoCollection()`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"} function to augment any collections
that should be able to use the SEO modules.
```ts [content.config.ts]
import { defineCollection, defineContentConfig } from '@nuxt/content'
import { asSeoCollection } from '@nuxtjs/seo/content'

export default defineContentConfig({
  collections: {
    content: defineCollection(
      asSeoCollection({
        type: 'page',
        source: '**/*.md',
      }),
    ),
  },
})
```
To ensure the tags are actually rendered, you need to use the SEO composable in the page that renders your content.
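A minimal sketch of such a catch-all page, assuming the `content` collection defined above and Nuxt Content v3's query APIs:
```vue [[...slug\\].vue]
<script setup lang="ts">
const route = useRoute()
const { data: page } = await useAsyncData(`page-${route.path}`, () => {
  return queryCollection('content').path(route.path).first()
})
// forward the collection's SEO data to the document head
useSeoMeta(page.value?.seo ?? {})
</script>

<template>
  <ContentRenderer v-if="page" :value="page" />
</template>
```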
Due to current Nuxt Content v3 limitations, you must load the Nuxt SEO module before the content module.
```ts
export default defineNuxtConfig({
  modules: [
    '@nuxtjs/seo',
    '@nuxt/content' // <-- Must be after @nuxtjs/seo
  ]
})
```
## Usage
For the full options available for each module, please see the individual module documentation.
```md
---
ogImage:
  component: HelloWorld
  props:
    title: "Hello World"
    description: "This is a description"
    image: "/hello-world.png"
sitemap:
  lastmod: 2025-01-01
robots: index, nofollow
schemaOrg:
  - "@type": "BlogPosting"
    headline: "How to Use Our Product"
    author:
      type: "Person"
      name: "Jane Smith"
    datePublished: "2023-10-01"
---
# Hello World
```
# v2 RC to v2 Stable
## Introduction
This guide will help you migrate from the Nuxt SEO v2 RC to the v2 stable release.
Please see the [announcement](https://nuxtseo.com/announcement) post for details on the release.
## Support
If you get stuck with the migration or have post-migration bugs, please get in touch!
- [Jump in the Discord](https://discord.com/invite/5jDAMswWwX){rel="nofollow"}
- [Make a GitHub issue](https://github.com/harlan-zw/nuxt-seo/issues){rel="nofollow"}
## Nuxt Site Config v3
Nuxt Site Config is a module used internally by Nuxt Robots.
The major update to v3.0.0 shouldn't have any direct effect on your site, however, you may want to double-check
the [breaking changes](https://github.com/harlan-zw/nuxt-site-config/releases/tag/v3.0.0){rel="nofollow"}.
## Nuxt SEO Utils v6
In moving to the stable release, the module has been renamed from `nuxt-seo-experiments` to `nuxt-seo-utils`.
The original name, `nuxt-seo-experiments`, hinted that the features weren't stable and that they would land in the Nuxt core. This is no longer the case, and the module has been renamed to reflect this.
With this rename, the module's scope grows to include the assorted functionality that Nuxt SEO was previously providing:
- `useBreadcrumbItems()` composable
- Config: `redirectToCanonicalSiteUrl`
- Config: `fallbackTitle`
- Config: `automaticDefaults`
As Nuxt SEO Utils shares the same config key as the Nuxt SEO module, no changes are required to your config; however, it's worth
testing your site to ensure that everything is working as expected.
## Nuxt Sitemap v7
### Removed `inferStaticPagesAsRoutes` config
If you previously set this value to `false`, you will need to change it as shown below:
```diff
export default defineNuxtConfig({
  sitemap: {
-   inferStaticPagesAsRoutes: false,
+   excludeAppSources: ['pages', 'route-rules', 'prerender']
  }
})
```
### Removed `dynamicUrlsApiEndpoint` config
The `sources` config supports multiple API endpoints and allows you to provide custom fetch options; use it instead.
```diff
export default defineNuxtConfig({
  sitemap: {
-   dynamicUrlsApiEndpoint: '/__sitemap/urls',
+   sources: ['/__sitemap/urls']
  }
})
```
### Removed `cacheTtl` config
Please use `cacheMaxAgeSeconds` instead, as it's a clearer config key.
```diff
export default defineNuxtConfig({
  sitemap: {
-   cacheTtl: 10000,
+   cacheMaxAgeSeconds: 10000
  }
})
```
### Removed `index` route rule / Nuxt Content support
If you were using the `index: false` in either route rules or your Nuxt Content markdown files, you will need to update this to use the `robots` key.
```diff
export default defineNuxtConfig({
  routeRules: {
    // migrate the `index` shortcut to the `robots` key
-   '/secret/**': { index: false },
+   '/secret/**': { robots: false },
  }
})
```
## Nuxt Robots v5
### Removed `rules` config
The v4 of Nuxt Robots provided a backward compatibility `rules` config. As it was deprecated, this is no longer supported. If you're using `rules`, you should migrate to the `groups` config or use a robots.txt file.
```diff
export default defineNuxtConfig({
  robots: {
-   rules: {},
+   groups: {}
  }
})
```
### Removed `defineRobotMeta` composable
This composable didn't do anything in v4 as the robots meta tag is enabled by default. If you'd like to control the robot meta tag rule, use the [`useRobotsRule()`](https://nuxtseo.com/docs/robots/api/use-robots-rule) composable.
```diff
- defineRobotMeta(true)
+ useRobotsRule(true)
```
### Removed `RobotMeta` component
This component was a simple wrapper for `defineRobotMeta`, you should use [`useRobotsRule()`](https://nuxtseo.com/docs/robots/api/use-robots-rule) if you wish to control the robots rule.
### Removed `index`, `indexable` config
When configuring robots using route rules or [Nuxt Content](https://nuxtseo.com/docs/robots/guides/content) you could control the robot's behavior by providing `index` or `indexable` rules.
These are no longer supported and you should use `robots` key.
```diff
export default defineNuxtConfig({
  routeRules: {
    // migrate the `index` shortcut to the `robots` key
-   '/secret/**': { index: false },
+   '/secret/**': { robots: false },
  }
})
```
## :icon{name="i-noto-rocket"} Features
### Config `blockAiBots`
AI crawlers can be beneficial as they can help users find your site, but for educational sites or those not
interested in being indexed by AI crawlers, you can block them using the `blockAiBots` option.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    blockAiBots: true
  }
})
```
This will block the following AI crawlers: `GPTBot`, `ChatGPT-User`, `Claude-Web`, `anthropic-ai`, `Applebot-Extended`, `Bytespider`, `CCBot`, `cohere-ai`, `Diffbot`, `FacebookBot`, `Google-Extended`, `ImagesiftBot`, `PerplexityBot`, `OmigiliBot`, `Omigili`
# v2 Beta to v2 RC
## Support
If you get stuck with the migration or have post-migration bugs, please get in touch!
- [Jump in the Discord](https://discord.com/invite/5jDAMswWwX){rel="nofollow"}
- [Make a GitHub issue](https://github.com/harlan-zw/nuxt-seo/issues){rel="nofollow"}
- [Provide feedback](https://github.com/harlan-zw/nuxt-seo/discussions/108){rel="nofollow"}
## Package Rename
In moving to the RC release, the package name has changed from `@nuxtseo/module` to `@nuxtjs/seo`.
- 2-beta.x - Nuxt SEO Kit `@nuxtseo/module`
- 2-rc.x - Nuxt SEO `@nuxtjs/seo`
::code-group
```sh [pnpm]
pnpm remove @nuxtseo/module && pnpm i -D @nuxtjs/seo
```
```bash [yarn]
yarn remove @nuxtseo/module && yarn add -D @nuxtjs/seo
```
```bash [npm]
npm remove @nuxtseo/module && npm install -D @nuxtjs/seo
```
::
```diff [nuxt.config.ts]
export default defineNuxtConfig({
  modules: [
-   '@nuxtseo/module'
+   '@nuxtjs/seo',
  ]
})
```
## Notable Changes
### Sitemap v5
The sitemap module has been updated to v5, which itself included a package rename.
- 4.x - `nuxt-simple-sitemap`
- 5.x - `@nuxtjs/sitemap`
No changes are required to your config.
# Nuxt SEO Kit to Nuxt SEO
## Support
If you get stuck with the migration or have post-migration bugs, please get in touch!
- [Jump in the Discord](https://discord.com/invite/5jDAMswWwX){rel="nofollow"}
- [Make a GitHub issue](https://github.com/harlan-zw/nuxt-seo/issues){rel="nofollow"}
- [Provide feedback](https://github.com/harlan-zw/nuxt-seo/discussions/108){rel="nofollow"}
## Module Rename
With v2, the module's name and scope are clarified with the rename to Nuxt SEO.
- 1.\* - Nuxt SEO Kit `nuxt-seo-kit` (Nuxt Layer)
- 2.x - Nuxt SEO `@nuxtjs/seo` (Nuxt Module)
At its core, v2 allows you to use all SEO modules at runtime; prerendering is no longer required. It also
comes with improved i18n compatibility.
It has been renamed to provide a better base for growing out the Nuxt SEO ecosystem, as well as to make the layer -> module
change more obvious.
::code-group
```sh [pnpm]
# remove nuxt-seo-kit
pnpm remove nuxt-seo-kit && pnpm i -D @nuxtjs/seo
```
```bash [yarn]
yarn remove nuxt-seo-kit && yarn add -D @nuxtjs/seo
```
```bash [npm]
npm remove nuxt-seo-kit && npm install -D @nuxtjs/seo
```
::
```diff [nuxt.config.ts]
export default defineNuxtConfig({
- extends: ['nuxt-seo-kit'],
  modules: [
+   '@nuxtjs/seo',
  ]
})
```
## Breaking Changes
### `<SeoKit />`, `useSeoKit()` Removed
These APIs set up all the default meta and module configuration for you.
In v2, they are no longer needed as functionality has been moved to a plugin.
```diff
- <SeoKit />
```
```diff
- useSeoKit()
```
If you'd like to opt out of these v2 configurations, you can set [automaticDefaults](https://nuxtseo.com/docs/nuxt-seo/api/config#automaticdefaults) to `false`.
## Site Config Changes
In v1, site config was set through runtime config. In v2, we have a dedicated module with helpers for handling
this config called [nuxt-site-config](https://nuxtseo.com/docs/site-config/getting-started/introduction).
The move to a module allows greater flexibility in changing site configuration at runtime.
If you were specifying any static config in `runtimeConfig` previously, it's now recommended to move this to the `site` key.
::code-group
```ts [v1]
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // you can remove environment variables, they'll be set automatically
      siteUrl: process.env.NUXT_PUBLIC_SITE_URL,
      siteName: 'My App'
    }
  }
})
```
```ts [v2]
export default defineNuxtConfig({
  site: {
    name: 'My App'
  }
})
```
::
When updating your config:
- All keys are without the `site` prefix
- The `language` config has been renamed to `defaultLocale`
The behaviour for environment variables hasn't changed, it's recommended to read [how site config works](https://nuxtseo.com/docs/site-config/getting-started/how-it-works) for
more advanced configuration.
## Prerendering Changes
In v1, it was required to prerender all pages; to ensure this happened, your `nuxt.config` was modified.
In v2, everything can be generated at runtime and the prerendering changes are no longer provided.
If you'd like to keep the prerendering changes, you can add this to your nuxt.config.
```ts
export default defineNuxtConfig({
  nitro: {
    prerender: {
      crawlLinks: true,
      routes: [
        '/',
      ],
    },
  },
})
```
## Module Upgrades
### Nuxt Robots
Upgraded from v1 to v3:
- [v2 release notes](https://github.com/harlan-zw/nuxt-simple-robots/releases/tag/v2.0.0){rel="nofollow"}
- [v3 release notes](https://nuxtseo.com/docs/robots/releases/v3)
No breaking changes.
### Nuxt Sitemap
Upgraded from v1 to v3:
- [v2 release notes](https://github.com/nuxt-modules/sitemap/releases/tag/v2.0.0){rel="nofollow"}
- [v3 release notes](https://nuxtseo.com/docs/sitemap/releases/v3)
No breaking changes.
### Nuxt Schema.org
Upgraded from v2 to v3:
- [v3 release notes](https://nuxtseo.com/docs/schema-org/releases/v3)
No breaking changes.
### Nuxt OG Image
Upgraded from v1 to v2:
- [v2 release notes](https://nuxtseo.com/docs/og-image/releases/v2)
The following options have been removed from nuxt.config `ogImage`:
- `host`, `siteUrl` - see [installation](https://nuxtseo.com/docs/og-image/getting-started/installation) for details.
- `forcePrerender` - removed, not needed
- `satoriProvider` - removed, use `runtimeSatori`
- `browserProvider` - removed, use `runtimeBrowser`
- `experimentalInlineWasm` - removed, this is now automatic based on environment
- `experimentalRuntimeBrowser` - removed, this is now automatic based on environment
The following options have been deprecated from the `defineOgImage` options:
- `static` - use `cache` instead
If you were referencing the old default template, you will need to update it.
- `OgImageBasic` - remove the property, allow the fallback to be selected automatically
Composables & Components:
- `defineOgImageStatic()` is deprecated, use `defineOgImage()` (default behaviour is to cache); if you want to be verbose you can use `defineOgImageCached()` or `<OgImageCached />`
- `<OgImageStatic />` is deprecated, use `<OgImageCached />`
- `defineOgImageDynamic()` is deprecated, use `defineOgImageWithoutCache()`
- `<OgImageDynamic />` is deprecated, use `<OgImageWithoutCache />`
If you were using the runtime browser previously, you will need to manually opt-in for it to work in production.
```ts
export default defineNuxtConfig({
  ogImage: {
    runtimeBrowser: true
  }
})
```
::code-group
```vue [v1]
```
```vue [v2]
```
::
### Nuxt Link Checker
Upgraded from v1 to v2:
- [v2 release notes](https://nuxtseo.com/docs/link-checker/releases/v2)
Changes to nuxt.config `linkChecker`:
- `exclude` renamed to `excludeLinks`
- `failOn404` renamed to `failOnError`
### Nuxt SEO Utils
The `nuxt-unhead` module has been renamed to `nuxt-seo-utils`. This is to better reflect the scope of the module.
Upgraded from v1 to v3:
- [v2 release notes](https://github.com/harlan-zw/nuxt-link-checker/releases/2.0.0){rel="nofollow"}
- [v3 release notes](https://nuxtseo.com/docs/link-checker/releases/v3)
If you were using the `unhead` key to configure the module, you will need to change it to `seo`.
```diff
export default defineNuxtConfig({
- unhead: {
+ seo: {
  }
})
```
# Nuxt Robots
## Why use Nuxt Robots?
Nuxt Robots is a module for configuring the robots crawling your site with minimal config and best practice defaults.
The core features of the module are:
- Telling [crawlers](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers){rel="nofollow"} which paths they can and cannot access using a [robots.txt](https://developers.google.com/search/docs/crawling-indexing/robots/intro){rel="nofollow"} file.
- Telling [search engine crawlers](https://developers.google.com/search/docs/crawling-indexing/googlebot){rel="nofollow"} what they can show in search results from your site using a `<meta name="robots">`{className="language-html shiki shiki-themes github-light github-light material-theme-palenight" lang="html"} tag and `X-Robots-Tag` HTTP header.
New to robots or SEO? Check out the [Controlling Web Crawlers](https://nuxtseo.com/learn/controlling-crawlers) guide to learn more about why you might
need these features.
::learn-label
---
icon: i-ph-robot-duotone
label: Conquering Web Crawlers
to: https://nuxtseo.com/learn/controlling-crawlers
---
::
While it's simple to create your own robots.txt file, the module also makes sure your non-production environments are not indexed. This is important to avoid duplicate content issues and to avoid search engines serving your development or staging content to users.
The module also acts as an integration point for other modules. For example:
- [Nuxt Sitemap](https://nuxtseo.com/docs/sitemap/getting-started/introduction) ensures pages you've marked as disallowed from indexing are excluded from the sitemap.
- [Nuxt Schema.org](https://nuxtseo.com/docs/schema/getting-started/introduction) skips rendering Schema.org data if the page is marked as excluded from indexing.
Ready to get started? Check out the [installation guide](https://nuxtseo.com/docs/robots/getting-started/installation).
## Features
Nuxt Robots manages the robots crawling your site with minimal config and best practice defaults.
### 🤖 Robots.txt Config
Configuring the rules is as simple as adding a production robots.txt file to your project.
- [Config using Robots.txt](https://nuxtseo.com/docs/robots/guides/robots-txt)
### 🗿 X-Robots-Tag Header, Meta Tag
Ensures pages that should not be indexed are not indexed, using the following:
- `X-Robots-Tag` header
- `<meta name="robots">` meta tag
Both enabled by default.
- [How it works](https://nuxtseo.com/docs/robots/getting-started/how-it-works)
### 🔒 Production only indexing
The module uses [Nuxt Site Config](https://nuxtseo.com/docs/site-config/getting-started/background) to determine if the site is in production mode.
It disables non-production environments from being indexed, avoiding duplicate content issues.
- [Environment Config](https://nuxtseo.com/docs/robots/guides/disable-indexing)
### 🔄 Easy and powerful configuration
Use route rules to easily target subsets of your site.
When you need even more control, use the runtime Nitro hooks to dynamically configure your robots rules.
- [Route Rules](https://nuxtseo.com/docs/robots/guides/route-rules)
- [Nitro Hooks](https://nuxtseo.com/docs/robots/nitro-api/nitro-hooks)
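As a sketch, a runtime hook might look like the following; the `robots:config` hook name is from the Nitro Hooks docs linked above, and the blocked path is illustrative:
```ts [server/plugins/robots.ts]
export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('robots:config', async (config) => {
    // dynamically block a path at runtime
    config.groups.push({ userAgent: ['*'], disallow: ['/runtime-secret'] })
  })
})
```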
### 🌎 I18n Support
Will automatically fix any non-localised paths within your `allow` and `disallow` rules.
- [I18n Integration](https://nuxtseo.com/docs/robots/integration/i18n)
# Install Nuxt Robots
## Setup Module
Want to know why you need this module? Check out the [introduction](https://nuxtseo.com/docs/robots/getting-started/introduction).
To get started with Nuxt Robots, you need to install the dependency and add it to your Nuxt config.
::module-install{name="robots"}
::
## Verifying Installation
To ensure the module is behaving as expected, you should first check [`/robots.txt`](http://localhost:3000/robots.txt){rel="nofollow"} is being generated.
It should show that the site is disallowed from indexing; this is good, as development
environments should not be indexed by search engines.
However, we want to see what a production environment would look like.
For this, it's recommended to use the Nuxt DevTools Robots tab to see the current configuration and how it's being applied.
The DevTools will show you that in production we're just serving a minimal robots.txt file.
```robots-txt [robots.txt]
User-agent: *
Disallow:
```
This allows all search engines to index the site.
## Configuration
Every site is different and will require its own unique configuration; to give you a head start,
you may consider the following areas to configure.
- [Disabling Site Indexing](https://nuxtseo.com/docs/robots/guides/disable-indexing) - If you have non-production environments, you should disable indexing for them.
While this works out-of-the-box for most providers, it's good to verify it's working as expected.
- [Disable Page Indexing](https://nuxtseo.com/docs/robots/guides/disable-page-indexing) - You should consider excluding pages that are not useful to search engines; for example,
any routes which require authentication should be ignored.
Make sure you understand the differences between robots.txt vs robots meta tag with the [Controlling Web Crawlers](https://nuxtseo.com/learn/conquering-crawlers) guide.
::learn-label
---
icon: i-ph-robot-duotone
label: Conquering Web Crawlers
to: https://nuxtseo.com/learn/controlling-crawlers
---
::
## Next Steps
You've successfully installed Nuxt Robots and configured it for your project.
Documentation is provided for module integrations, check them out if you're using them.
- [Nuxt I18n](https://nuxtseo.com/docs/robots/guides/i18n) - Disallows are automatically expanded to your configured locales.
- [Nuxt Content](https://nuxtseo.com/docs/robots/guides/content) - Configure robots from your markdown files.
Next check out the [robots.txt recipes](https://nuxtseo.com/docs/robots/guides/robot-recipes) guide for some inspiration.
# Troubleshooting Nuxt Robots
## Debugging
### Nuxt DevTools
The best tool for debugging is the Nuxt DevTools integration with Nuxt Robots.
This will show you the current robot rules and your robots.txt file.
### Debug Config
You can enable the [debug](https://nuxtseo.com/docs/robots/api/config#debug) option which will give you more granular output.
This is enabled by default in development mode.
## Submitting an Issue
When submitting an issue, it's important to provide as much information as possible.
The easiest way to do this is to create a minimal reproduction using the Stackblitz playgrounds:
- [Basic](https://stackblitz.com/edit/nuxt-starter-zycxux?file=public%2F_robots.txt){rel="nofollow"}
- [I18n](https://stackblitz.com/edit/nuxt-starter-pnej8lvb?file=public%2F_robots.txt){rel="nofollow"}
# Disabling Site Indexing
## Introduction
Disabling certain environments of your site from being indexed is an important practice for avoiding
SEO issues.
For example, you don't want your staging or preview environments to be indexed by search engines, as they can cause duplicate
content issues and confuse end-users.
If you need to disable specific pages from being indexed, refer to the [Disabling Page Indexing](https://nuxtseo.com/docs/robots/guides/disable-page-indexing) guide.
## Disable Indexing Completely
In some cases, such as internal business tools, or sites that are not ready for the public, you may want to disable indexing completely.
You can achieve this by setting the `indexable` option to `false` in your site config.
```ts
export default defineNuxtConfig({
  site: { indexable: false }
})
```
## Handling Staging Environments
Staging environments are great for testing out code before it goes to production. However, we definitely don't want
search engines to index them.
To control the indexing of these sites we will make use of the `env` Site Config, which defaults to `production`.
```dotenv [.env]
NUXT_SITE_ENV=staging
```
Nuxt Robots will disable indexing for any site whose environment is not production, so feel free to set this
to whatever makes sense for your project.
## Verifying Indexing
To verify that your site is not being indexed, you can check the generated `robots.txt` file; it should look something like this.
```robots-txt
User-agent: *
Disallow: /
```
A robots meta tag should also be generated that looks like:
```html
<meta name="robots" content="noindex, nofollow">
```
For full confidence you can inspect the URL within Google Search Console to see if it's being indexed.
# Disable Page Indexing
## Introduction
As not all sites are the same, it's important for you to have a flexible way to disable indexing for specific pages.
The best options to choose are either:
- [Robots.txt](https://nuxtseo.com/#robotstxt) - Great for blocking robots from accessing specific pages that haven't been indexed yet.
- [useRobotsRule](https://nuxtseo.com/#userobotsrule) - Controls the `<meta name="robots">` meta tag and `X-Robots-Tag` HTTP Header. Useful for dynamic pages where you may not know if it should be indexed at build time and when you need to remove pages from search results. For example, a user profile page that should only be indexed if the user has made their profile public.
If you're still unsure about which option to choose, make sure you read the [Controlling Web Crawlers](https://nuxtseo.com/learn/conquering-crawlers) guide.
::learn-label
---
icon: i-ph-robot-duotone
label: Conquering Web Crawlers
to: https://nuxtseo.com/learn/controlling-crawlers
---
::
[Route Rules](https://nuxtseo.com/#route-rules) and [Nuxt Config](https://nuxtseo.com/#nuxt-config) are also available for more complex scenarios.
## Robots.txt
Please follow the [Config using Robots.txt](https://nuxtseo.com/docs/robots/guides/robots-txt) guide to configure your `robots.txt` file.
You'll be able to use the `Disallow` directive within a `robots.txt` file to block specific URLs.
```robots-txt [public/_robots.txt]
User-agent: *
Disallow: /my-page
Disallow: /secret/*
```
## useRobotsRule
The [useRobotsRule](https://nuxtseo.com/docs/robots/api/use-robots-rule) composable provides a reactive way to access and set the robots rule at runtime.
```ts
import { useRobotsRule } from '#imports'
const rule = useRobotsRule()
rule.value = 'noindex, nofollow'
```
## Route Rules
If you have a static page that you want to disable indexing for, you can use [defineRouteRules](https://nuxt.com/docs/api/utils/define-route-rules){rel="nofollow"} (requires enabling the experimental `inlineRouteRules`).
This is a build-time configuration that will generate the appropriate rules in the `/robots.txt` file and is integrated with the [Sitemap](https://nuxtseo.com/docs/sitemap/guides/robots) module.
```vue [pages/about.vue]
<script setup lang="ts">
// requires `experimental.inlineRouteRules` to be enabled in nuxt.config
defineRouteRules({
  robots: false,
})
</script>
```
For more complex scenarios see the [Route Rules](https://nuxtseo.com/docs/robots/guides/route-rules) guide.
## Nuxt Config
If you need finer programmatic control, you can configure the module using nuxt.config.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    disallow: ['/secret', '/admin'],
  }
})
```
See the [Nuxt Config](https://nuxtseo.com/docs/robots/guides/nuxt-config) guide for more details.
# How Nuxt Robots Works
Nuxt Robots tells robots (crawlers) how to behave by creating a `robots.txt` file for you, adding a `X-Robots-Tag` header and `` tag to your site
where appropriate.
One important behaviour to control is blocking Google from indexing pages to:
- Prevent [duplicate content issues](https://moz.com/learn/seo/duplicate-content){rel="nofollow"}
- Prevent wasting [crawl budget](https://developers.google.com/search/docs/crawling-indexing/large-site-managing-crawl-budget){rel="nofollow"}
## Robots.txt
For robots to understand how they can access your site, they will first check for a `robots.txt` file.
```bash
public
└── robots.txt
```
This file is generated differently depending on the environment:
- When deploying using `nuxi generate` or the `nitro.prerender.routes` rule, this is a static file.
- Otherwise, it's handled by the server and generated at runtime when requested.
When indexing is disabled a `robots.txt` will be generated with the following content:
```robots-txt [robots.txt]
User-agent: *
Disallow: /
```
This blocks all bots from indexing your site.
## `X-Robots-Tag` Header and `<meta name="robots">`
In some situations, the robots.txt becomes too restrictive to provide the level of control you need to manage
your site's indexing.
For this reason, the module by default will provide an `X-Robots-Tag` header and a `<meta name="robots">`{className="language-html shiki shiki-themes github-light github-light material-theme-palenight" lang="html"} tag.
These are applied using the following logic:
- `X-Robots-Tag` header - Route Rules are implemented for all modes, otherwise SSR only. This will only be added
when indexing has been disabled for the route.
- `<meta name="robots">`{className="language-html shiki shiki-themes github-light github-light material-theme-palenight" lang="html"} tag - SSR only, will always be added
## Robot Rules
Default values for the `robots` rule depending on the mode.
For indexable routes the following is used:
```html
```
Besides giving robots the go-ahead, this also requests that Google:
> Choose the snippet length that it believes is most effective to help users discover your content and direct users to your site.
You can learn more on the [Robots Meta Tag](https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag){rel="nofollow"} documentation. Feel free
to change this to suit your needs using `robotsEnabledValue`.
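For instance, a sketch of simplifying the default rule:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    // drop the max-* directives from the default enabled value
    robotsEnabledValue: 'index, follow',
  }
})
```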
For non-indexable routes the following is used:
```html
<meta name="robots" content="noindex, nofollow">
```
This will tell robots to not index the page.
## Development Environment
The module by default will disable indexing in development environments. This is for safety, as you don't want
your development environment to be indexed by search engines.
```robots-txt [robots.txt]
# Block all bots
User-agent: *
Disallow: /
```
## Production Environments
For production environments, the module will generate a `robots.txt` file that allows all bots.
Out-of-the-box, this will be the following:
```robots-txt [robots.txt]
User-agent: *
Disallow:
```
This tells all bots that they can index your entire site.
# Robots.txt Recipes
## Introduction
At a minimum, the only recommended configuration for robots is to [disable indexing for non-production environments](https://nuxtseo.com/docs/robots/guides/disable-indexing).
Many sites will never need to configure their [`robots.txt`](https://nuxtseo.com/learn/controlling-crawlers/robots-txt){rel="nofollow"} or [`robots` meta tag](https://nuxtseo.com/learn/controlling-crawlers/meta-tags){rel="nofollow"} beyond this, as [controlling web crawlers](https://nuxtseo.com/learn/controlling-crawlers)
is an advanced use case and topic.
However, if you're looking to get the best SEO and performance results, you may consider some of the recipes on this page for
your site.
## Robots.txt recipes
### Blocking Bad Bots
If you're finding your site is getting hit with a lot of bots, you may consider enabling the `blockNonSeoBots` option.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    blockNonSeoBots: true
  }
})
```
This will mostly block web scrapers; the full list is: `Nuclei`, `WikiDo`, `Riddler`, `PetalBot`, `Zoominfobot`, `Go-http-client`, `Node/simplecrawler`, `CazoodleBot`, `dotbot/1.0`, `Gigabot`, `Barkrowler`, `BLEXBot`, `magpie-crawler`.
### Blocking AI Crawlers
AI crawlers can be beneficial as they can help users find your site, but for educational sites or those not
interested in being indexed by AI crawlers, you can block them using the `blockAiBots` option.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    blockAiBots: true
  }
})
```
This will block the following AI crawlers: `GPTBot`, `ChatGPT-User`, `Claude-Web`, `anthropic-ai`, `Applebot-Extended`, `Bytespider`, `CCBot`, `cohere-ai`, `Diffbot`, `FacebookBot`, `Google-Extended`, `ImagesiftBot`, `PerplexityBot`, `OmigiliBot`, `Omigili`
### Blocking Privileged Pages
If you have pages that require authentication or are only available to certain users, you should block these from being indexed.
```robots-txt [public/_robots.txt]
User-agent: *
Disallow: /admin
Disallow: /dashboard
```
See [Config using Robots.txt](https://nuxtseo.com/docs/robots/guides/robots-txt) for more information.
### Whitelisting Open Graph Tags
If you have certain pages that you don't want indexed but you still want their [Open Graph Tags](https://nuxtseo.com/learn/mastering-meta/open-graph) to be crawled, you can target the specific
user-agents.
```robots-txt [public/_robots.txt]
# Block search engines
User-agent: Googlebot
User-agent: Bingbot
Disallow: /user-profiles
# Allow social crawlers
User-agent: facebookexternalhit
User-agent: Twitterbot
Allow: /user-profiles
```
See [Config using Robots.txt](https://nuxtseo.com/docs/robots/guides/robots-txt) for more information.
### Blocking Search Results
You may consider blocking search results from being indexed, as they can be seen as duplicate content
and can be a poor user experience.
```robots-txt [public/_robots.txt]
User-agent: *
# block search results
Disallow: /*?query=
# block pagination
Disallow: /*?page=
# block sorting
Disallow: /*?sort=
# block filtering
Disallow: /*?filter=
```
# Config using Robots.txt
## Introduction
The [robots.txt standard](https://developers.google.com/search/docs/crawling-indexing/robots/create-robots-txt){rel="nofollow"} is important for search engines
to understand which pages to crawl and index on your site.
New to robots.txt? Check out the [Robots.txt Guide](https://nuxtseo.com/learn/controlling-crawlers/robots-txt) to learn more.
To match the robots standard more closely, Nuxt Robots recommends configuring the module using a `robots.txt` file, which will be parsed and validated to configure the module.
If you need programmatic control, you can configure the module using [nuxt.config.ts](https://nuxtseo.com/docs/robots/guides/nuxt-config),
[Route Rules](https://nuxtseo.com/docs/robots/guides/route-rules) and [Nitro hooks](https://nuxtseo.com/docs/robots/nitro-api/nitro-hooks).
## Creating a `robots.txt` file
You can place your file in any location; the easiest is to use: `/public/_robots.txt`.
Additionally, the following paths are supported by default:
```bash [Example File Structure]
# root directory
robots.txt
# asset folders
assets/
├── robots.txt
# pages folder
pages/
├── robots.txt
├── _dir/
│ └── robots.txt
# public folder
public/
├── _robots.txt
├── _dir/
│ └── robots.txt
```
### Custom paths
If you find this too restrictive,
you can use the `mergeWithRobotsTxtPath` config to load your `robots.txt` file from any path.
```ts
export default defineNuxtConfig({
  robots: {
    mergeWithRobotsTxtPath: 'assets/custom/robots.txt'
  }
})
```
## Parsed robots.txt
The following rules are parsed from your `robots.txt` file:
- `User-agent` - The user-agent to apply the rules to.
- `Disallow` - An array of paths to disallow for the user-agent.
- `Allow` - An array of paths to allow for the user-agent.
- `Sitemap` - An array of sitemap URLs to include in the generated `robots.txt`.
This parsed data will be shown for environments that are `indexable`.
## Conflicting `public/robots.txt`
To ensure other modules can integrate with your generated robots file, you must not have a `robots.txt` file in your `public` folder.
If you do, it will be moved to `/public/_robots.txt` and merged with the generated file.
# Yandex: Clean-param
Nuxt Robots is built around first-party robots.txt specifications from Google and Bing.
Some users may want to configure Yandex, a popular search engine in Russia, and find that rules aren't working. To use
Yandex you will need to provide alternative directives.
## Clean-param
The `clean-param` directive is used to remove query parameters from URLs. This is useful for SEO as it prevents duplicate
content and consolidates page rank.
It can either be configured directly through robots.txt when targeting Yandex or through the module configuration.
### Robots.txt
To configure the `clean-param` directive in your `robots.txt` file, you can use the following syntax:
```robots-txt [robots.txt]
User-agent: Yandex
Clean-param: param1 param2
```
This will remove the `param1` and `param2` query parameters from URLs.
### Module Configuration
To configure the `clean-param` directive in your `nuxt.config.ts` file, you can use the following syntax:
```ts
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        userAgent: ['Yandex'],
        cleanParam: ['param1', 'param2']
      }
    ]
  }
})
```
## Host & Crawl-delay
These directives are deprecated and should not be used. All search engines will ignore them.
# Config Using Route Rules
If you prefer, you can use route rules to configure how your routes are indexed by search engines.
You can provide the following rules:
- `{ robots: false }`{className="language-json shiki shiki-themes github-light github-light material-theme-palenight" lang="json"} - Will disable the route from being indexed using the [robotsDisabledValue](https://nuxtseo.com/docs/robots/api/config#robotsdisabledvalue) config.
- `{ robots: '' }`{className="language-json shiki shiki-themes github-light github-light material-theme-palenight" lang="json"} - Will add the provided string as the robots rule
The rules are applied using the following logic:
- `X-Robots-Tag` header - SSR only
- `<meta name="robots">`{className="language-html shiki shiki-themes github-light github-light material-theme-palenight" lang="html"} tag
- `/robots.txt` disallow entry - When [disallowNonIndexableRoutes](https://nuxtseo.com/docs/robots/api/config#robotsdisabledvalue) is enabled
## Inline Route Rules
Requires enabling the experimental `inlineRouteRules`, see the [defineRouteRules](https://nuxt.com/docs/api/utils/define-route-rules){rel="nofollow"} documentation
to learn more.
```vue
<script setup lang="ts">
// requires `experimental.inlineRouteRules` to be enabled in nuxt.config
defineRouteRules({
  robots: false,
})
</script>
```
## Nuxt Config
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  routeRules: {
    // use the boolean `robots` shortcut for simple rules
    '/secret/**': { robots: false },
    // add exceptions for individual routes
    '/secret/visible': { robots: true },
    // use a string `robots` rule if you need finer control
    '/custom-robots': { robots: 'index, follow' },
  }
})
```
# Nuxt Content
## Introduction
Nuxt Robots comes with an integration for Nuxt Content that allows you to configure robots directly from your markdown files.
## Setup Nuxt Content v3
In Nuxt Content v3 we need to use the `asRobotsCollection()`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"} function to augment any collections
that should be able to use the `robots` frontmatter key.
```ts [content.config.ts]
import { defineCollection, defineContentConfig } from '@nuxt/content'
import { asRobotsCollection } from '@nuxtjs/robots/content'

export default defineContentConfig({
  collections: {
    content: defineCollection(
      // adds the robots frontmatter key to the collection
      asRobotsCollection({
        type: 'page',
        source: '**/*.md',
      }),
    ),
  },
})
```
To ensure the tag is actually rendered, you need to pass the page's `seo` data (which includes the parsed `robots` rule) to the `useSeoMeta()`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"} composable.
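A minimal sketch, assuming a `content` collection configured as above:
```vue [[...slug\\].vue]
<script setup lang="ts">
const route = useRoute()
const { data: page } = await useAsyncData(`page-${route.path}`, () => {
  return queryCollection('content').path(route.path).first()
})
// the parsed robots frontmatter is forwarded via the page's seo data
useSeoMeta(page.value?.seo ?? {})
</script>
```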
Due to current Nuxt Content v3 limitations, you must load the robots module before the content module.
```ts
export default defineNuxtConfig({
  modules: [
    '@nuxtjs/robots',
    '@nuxt/content' // <-- Must be after @nuxtjs/robots
  ]
})
```
## Setup Nuxt Content v2
In Nuxt Content v2 markdown files require either [Document Driven Mode](https://content.nuxt.com/document-driven/introduction){rel="nofollow"} or a `path` key to be set
in the frontmatter.
```md [content/foo.md]
---
path: /foo
---
```
## Usage
You can use any boolean or string value as `robots` that will be forwarded as a
[Meta Robots Tag](https://nuxtseo.com/learn/controlling-crawlers/meta-tags).
::code-group
```md [input.md]
---
robots: false
---
```
```html [output]
<meta name="robots" content="noindex, nofollow">
```
::
### Disabling Nuxt Content Integration
If you need to disable the Nuxt Content integration, you can do so by setting the `disableNuxtContentIntegration`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"} option in the module configuration.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    disableNuxtContentIntegration: true,
  }
})
```
# Nuxt I18n
Out of the box, the robots module will integrate directly with [@nuxtjs/i18n](https://i18n.nuxtjs.org/){rel="nofollow"}.
You will need to use v8+ of the i18n module.
## Auto-localised Allow / Disallow
The module will automatically localise the `allow` and `disallow` paths based on your i18n configuration.
If you provide an `allow` or `disallow` path that is not localised, it will be localised for you, provided your
i18n configuration allows it.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    disallow: ['/secret', '/admin'],
  },
  i18n: {
    locales: ['en', 'fr'],
    defaultLocale: 'en',
    strategy: 'prefix',
  }
})
```
This will generate the following output:
```robots-txt [robots.txt]
User-agent: *
Disallow: /en/secret
Disallow: /en/admin
Disallow: /fr/secret
Disallow: /fr/admin
```
## Opting-out of localisation
If you want to opt-out of localisation, there are two options:
### Opt-out for a group
You can provide the `_skipI18n` option to a group to disable localisation just for that group.
```ts
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        disallow: [
          '/docs/en/v*',
          '/docs/zh/v*',
          '/forum/admin/',
          '/forum/auth/',
        ],
        _skipI18n: true,
      },
    ],
  },
})
```
### Opt-out i18n globally
By providing the `autoI18n: false` option you will disable all i18n localisation splitting.
```ts
export default defineNuxtConfig({
  robots: {
    autoI18n: false,
  }
})
```
# Config using Nuxt Config
If you need programmatic control, you can configure the module using nuxt.config.
## Simple Configuration
The simplest configuration is to provide an array of paths to disallow for the `*` user-agent; if needed, you can provide `allow` paths as well. You can simply add the path or a path pattern.
- `disallow` - An array of paths to disallow for the `*` user-agent.
- `allow` - An array of paths to allow for the `*` user-agent.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    // provide simple disallow rules for all robots `user-agent: *`
    disallow: ['/secret', '/admin'],
    allow: '/admin/login'
  }
})
```
This will generate the following output:
```robots-txt [robots.txt]
User-agent: *
Disallow: /secret
Disallow: /admin
Allow: /admin/login
```
## Group Configuration
When targeting specific robots, you can use the `groups` option to provide granular control.
- `groups` - A stack of objects to provide granular control (see below).
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    // add more granular rules
    groups: [
      // block specific robots from specific pages
      {
        userAgent: ['AdsBot-Google-Mobile', 'AdsBot-Google-Mobile-Apps'],
        disallow: ['/admin'],
        allow: ['/admin/login'],
        comments: 'Allow Google AdsBot to index the login page but no-admin pages'
      },
    ]
  }
})
```
This will generate the following output:
```robots-txt [robots.txt]
# Allow Google AdsBot to index the login page but no-admin pages
User-agent: AdsBot-Google-Mobile
User-agent: AdsBot-Google-Mobile-Apps
Disallow: /admin
Allow: /admin/login
```
# useRobotsRule()
## Introduction
**Type:** `function useRobotsRule(rule?: MaybeRef<boolean | string>): Ref<string>`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
View and control the robots rule using a simple reactivity API.
It's recommended to use this composable when you need to dynamically change the robots rule at runtime. For example when a user changes their profile from private to public.
Note: This does not modify the `/robots.txt` file, only the `X-Robots-Tag` header and the `robots` meta tag.
### Server Side Behavior
In a server-side context, this can be used to change the rule used for `X-Robots-Tag` header and the `robots` meta tag.
Providing a `boolean` will either enable or disable indexing for the current path using the default rules.
```ts
import { useRobotsRule } from '#imports'
const rule = useRobotsRule(true) // modifies the rules
```
### Client Side Behavior
In a client-side context you can only read the value of the rule, modifying it will have no effect. This is due to robots only respecting the initial SSR response.
```ts
import { useRobotsRule } from '#imports'
const rule = useRobotsRule(true) // does not do anything, just returns the value
```
## Usage
**Accessing the rule:**
```ts
import { useRobotsRule } from '#imports'
const rule = useRobotsRule()
// Ref<'noindex, nofollow'>
```
**Setting the rule - argument:**
```ts
import { useRobotsRule } from '#imports'
useRobotsRule('index, nofollow')
// Ref<'index, nofollow'>
useRobotsRule(false)
// Ref<'noindex, nofollow'>
```
**Setting the rule - reactive:**
```ts
import { useRobotsRule } from '#imports'
const rule = useRobotsRule()
rule.value = 'index, nofollow'
// Ref<'index, nofollow'>
```
# Nuxt Config
## `enabled: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `true`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Conditionally toggle the module.
## `allow: string[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Allow paths to be indexed for the `*` user-agent (all robots).
## `disallow: string[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Disallow paths from being indexed for the `*` user-agent (all robots).
## `metaTag: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `true`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Whether to add a `<meta name="robots">` tag to the `<head>` of each page.
## `groups: RobotsGroupInput[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Define more granular rules for the robots.txt. Each group is a set of rules for specific user agent(s).
```ts twoslash
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        userAgent: ['AdsBot-Google-Mobile', 'AdsBot-Google-Mobile-Apps'],
        disallow: ['/admin'],
        allow: ['/admin/login'],
        comments: 'Allow Google AdsBot to index the login page but no-admin pages'
      },
    ]
  }
})
```
## `sitemap: MaybeArray<string>`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
The sitemap URL(s) for the site. If you have multiple sitemaps, you can provide an array of URLs.
You must either define the runtime config `siteUrl` or provide the sitemap as absolute URLs.
```ts
export default defineNuxtConfig({
  robots: {
    sitemap: [
      '/sitemap-one.xml',
      '/sitemap-two.xml',
    ],
  },
})
```
## `robotsEnabledValue: string`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `'index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1'`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
The value to use when the page is indexable.
## `robotsDisabledValue: string`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Type: `string`
- Default: `'noindex, nofollow'`
The value to use when the page is not indexable.
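As a sketch, you can override both values, for example to keep links followed even on non-indexable pages:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    robotsEnabledValue: 'index, follow',
    // still follow links on pages that shouldn't be indexed
    robotsDisabledValue: 'noindex, follow',
  },
})
```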
## `mergeWithRobotsTxtPath: boolean | string`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `true`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Specify a robots.txt path to merge the config from, relative to the root directory.
When set to `true`, the default path of `/robots.txt` will be used.
When set to `false`, no merging will occur.
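For example, a sketch using a hypothetical custom path:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    // hypothetical file, relative to the root directory
    mergeWithRobotsTxtPath: 'config/robots.txt',
  },
})
```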
## `blockNonSeoBots: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `false`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Blocks some non-SEO bots from crawling your site. This is not a replacement for a full-blown bot management solution, but it can help to reduce the load on your server.
See [const.ts](https://github.com/nuxt-modules/robots/blob/main/src/const.ts#L6){rel="nofollow"} for the list of bots that are blocked.
```ts twoslash
export default defineNuxtConfig({
robots: {
blockNonSeoBots: true
}
})
```
## `robotsTxt: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `true`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Whether to generate a `robots.txt` file. Useful for disabling when using a base URL.
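For example, to disable the file entirely:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    // skip generating /robots.txt
    robotsTxt: false,
  },
})
```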
## `cacheControl: string | false`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `'max-age=14400, must-revalidate'`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Configure the Cache-Control header for the robots.txt file. By default it's cached for
4 hours and must be revalidated.
Providing false will set the header to `'no-store'`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}.
```ts [nuxt.config.ts] twoslash
export default defineNuxtConfig({
robots: {
cacheControl: 'max-age=14400, must-revalidate'
}
})
```
## `disableNuxtContentIntegration: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `undefined`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Whether to disable the [Nuxt Content Integration](https://nuxtseo.com/docs/robots/guides/content).
## `debug: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Type: `boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `false`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Enables debug logs and a debug endpoint.
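For example, to enable it:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    debug: true,
  },
})
```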
## `credits: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `true`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Control the module credit comment in the generated robots.txt file.
```robots-txt [robots.txt]
# START nuxt-robots (indexable) <- credits
# ...
# END nuxt-robots <- credits
```
```ts [nuxt.config.ts] twoslash
export default defineNuxtConfig({
robots: {
credits: false
}
})
```
## `disallowNonIndexableRoutes: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
**⚠️ Deprecated**: Explicitly disallow routes in the `/robots.txt` file if you don't want them to be accessible.
- Default: `false`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Whether route rules which disallow indexing should be added to the `/robots.txt` file.
# Nuxt Hooks
## `'robots:config'`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
**Type:** `(config: ResolvedModuleOptions) => void | Promise<void>`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
This hook allows you to modify the robots config before it is used to generate the robots.txt and meta tags.
```ts
export default defineNuxtConfig({
hooks: {
'robots:config': (config) => {
// modify the config
config.sitemap = '/sitemap.xml'
},
},
})
```
# getPathRobotConfig()
## Introduction
The `getPathRobotConfig()`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"} Nitro composable gives you access to the page robots config, allowing you
to determine if the page can or can't be indexed and why.
This can be useful for disabling certain SEO features when the page does not allow for indexing. For example, Nuxt SEO uses this internally to disable OG Images
when the page is not indexable.
## API
```ts
function getPathRobotConfig(e: H3Event, options?: GetPathRobotConfigOptions): GetPathRobotResult
interface GetPathRobotConfigOptions {
userAgent?: string
skipSiteIndexable?: boolean
path?: string
}
interface GetPathRobotResult {
rule: string
indexable: boolean
debug?: { source: string, line: string }
}
```
### Arguments
- `e: H3Event`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: The request event object, used to determine the current path.
- `options`: Optional configuration:
- `userAgent: string`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: The user agent to use for the check. Some pages may have different rules for different user agents.
- `skipSiteIndexable: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: Skip the site indexable check. Allows you to check the page indexable while ignoring the site-wide config.
- `path: string`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: An override for which path to check. By default, it will use the current path of the `H3Event`.
### Returns
- `rule: string`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: The rule for the page.
- `indexable: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: Whether the page is indexable.
- `debug?: { source: string, line: string }`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: Debug information about the source of the rule and the line number in the source file. This is only available in development mode.
## Example
```ts [server/plugins/strip-og-tags-maybe.ts] twoslash
import { defineNitroPlugin, getPathRobotConfig } from '#imports'
export default defineNitroPlugin((nitroApp) => {
// strip og meta tags if the site is not indexable
nitroApp.hooks.hook('render:html', async (ctx, { event }) => {
const { indexable } = getPathRobotConfig(event)
if (!indexable) {
ctx.html = ctx.html.replace(/<meta property="og:[^>]*>/g, '')
}
})
})
```
# getSiteRobotConfig()
## Introduction
The `getSiteRobotConfig()`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"} Nitro composable gives you access to the site-wide robots config, allowing you
to determine if the site can or can't be indexed and why.
This can be useful for disabling certain SEO features when the environment does not allow for indexing.
## API
```ts
function getSiteRobotConfig(e: H3Event): { indexable: boolean, hints: string[] }
```
### Arguments
- `e: H3Event`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: The event object.
### Returns
- `indexable: boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: Whether the site is indexable.
- `hints: string[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}: A list of hints as to why the site is or isn't indexable.
## Example
```ts [server/routes/og.png.ts]
import { getSiteRobotConfig } from '#imports'
export default defineEventHandler((e) => {
const { indexable } = getSiteRobotConfig(e)
// avoid serving og images if the site is not indexable
if (!indexable) {
// ...
}
})
```
# Nitro Hooks
## `'robots:config'`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
**Type:** `(ctx: HookContext) => void | Promise<void>`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
```ts
interface HookContext {
groups: RobotsGroupResolved[]
sitemaps: string[]
context: 'robots.txt' | 'init'
event?: H3Event // undefined on `init`
}
```
Modify the robots config before it's used to generate the indexing rules.
This is called when Nitro starts (context: `init`) as well as when the robots.txt is generated (context: `robots.txt`).
```ts [server/plugins/robots-ignore-routes.ts]
export default defineNitroPlugin((nitroApp) => {
nitroApp.hooks.hook('robots:config', async (ctx) => {
// extend the robot.txt rules at runtime
if (ctx.context === 'init') {
// probably want to cache this
const ignoredRoutes = await $fetch('/api/ignored-routes')
ctx.groups[0].disallow.push(...ignoredRoutes)
}
})
})
```
## `'robots:robots-txt'`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
**Type:** `(ctx: HookRobotsTxtContext) => void | Promise<void>`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
```ts
export interface HookRobotsTxtContext {
robotsTxt: string
e: H3Event
}
```
This hook allows you to modify the robots.txt content before it is sent to the client.
```ts [server/plugins/robots-remove-comments.ts]
export default defineNitroPlugin((nitroApp) => {
if (!process.dev) {
nitroApp.hooks.hook('robots:robots-txt', async (ctx) => {
// remove comments from robotsTxt in production
ctx.robotsTxt = ctx.robotsTxt.replace(/^#.*$/gm, '').trim()
})
}
})
```
# v5.0.0
## Introduction
The v5 major of Nuxt Robots is a simple release to remove deprecations and add support for the [Nuxt SEO v2 stable](https://nuxtseo.com/announcement){rel="nofollow"}.
## :icon{name="i-noto-warning"} Breaking Features
### Site Config v3
Nuxt Site Config is a module used internally by Nuxt Robots.
Its major update to v3.0.0 shouldn't have any direct effect on your site; however, you may want to double-check
the [breaking changes](https://github.com/harlan-zw/nuxt-site-config/releases/tag/v3.0.0){rel="nofollow"}.
### Removed `rules` config
v4 of Nuxt Robots provided a backwards-compatible `rules` config. As it was deprecated, it is no longer supported. If you're using `rules`, you should migrate to the `groups` config or use a robots.txt file.
```diff
export default defineNuxtConfig({
robots: {
- rules: {},
+ groups: []
}
})
```
### Removed `defineRobotMeta` composable
This composable didn't do anything in v4 as the robots meta tag is enabled by default. If you'd like to control the robot meta tag rule, use the [`useRobotsRule()`](https://nuxtseo.com/docs/robots/api/use-robots-rule){rel="nofollow"} composable.
```diff
- defineRobotMeta(true)
+ useRobotsRule(true)
```
### Removed `RobotMeta` component
This component was a simple wrapper for `defineRobotMeta`; you should use [`useRobotsRule()`](https://nuxtseo.com/docs/robots/api/use-robots-rule){rel="nofollow"} if you wish to control the robots rule.
### Removed `index`, `indexable` config
When configuring robots using route rules or [Nuxt Content](https://nuxtseo.com/docs/robots/guides/content){rel="nofollow"} you could control the robot's behavior by providing `index` or `indexable` rules.
These are no longer supported and you should use the `robots` key instead.
```diff
export default defineNuxtConfig({
routeRules: {
// use the `index` shortcut for simple rules
- '/secret/**': { index: false },
+ '/secret/**': { robots: false },
}
})
```
## :icon{name="i-noto-rocket"} Features
### Config `blockAiBots`
AI crawlers can be beneficial as they can help users find your site, but for some sites, such as educational ones or those not
interested in being indexed by AI crawlers, you can block them using the `blockAiBots` option.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
robots: {
blockAiBots: true
}
})
```
This will block the following AI crawlers: `GPTBot`, `ChatGPT-User`, `Claude-Web`, `anthropic-ai`, `Applebot-Extended`, `Bytespider`, `CCBot`, `cohere-ai`, `Diffbot`, `FacebookBot`, `Google-Extended`, `ImagesiftBot`, `PerplexityBot`, `OmigiliBot`, `Omigili`
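As a rough sketch, the generated robots.txt will then contain a group along these lines (abridged):
```robots-txt [robots.txt]
User-agent: GPTBot
User-agent: ChatGPT-User
# ...the remaining AI crawlers
Disallow: /
```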
# v4.0.0
## Nuxt Simple Robots is now Nuxt Robots
In a [discussion](https://github.com/nuxt-modules/robots/issues/116){rel="nofollow"} with the team and the community, we have decided to migrate `nuxt-simple-robots` into the `@nuxtjs/robots` module.
This will allow me to better maintain the module and provide a more consistent experience across the Nuxt ecosystem.
To upgrade simply replace the dependency in `package.json` and update your nuxt.config.
```diff
- 'nuxt-simple-robots'
+ '@nuxtjs/robots'
```
If you're coming from `nuxt-simple-robots` then no other changes are needed. If you're coming from `@nuxtjs/robots` v3, then
the following breaking changes exist.
### `@nuxtjs/robots` v3 breaking changes
- The `configPath` config is no longer supported. For custom runtime config you should use [Nitro Hooks](https://nuxtseo.com/docs/robots/nitro-api/nitro-hooks).
- The `rules` config is deprecated but will continue to work. Any `BlankLine` or `Comment` rules will no longer work.
- Using `CleanParam`, `CrawlDelay` and `Disavow` requires targeting the [Yandex](https://nuxtseo.com/docs/robots/guides/yandex) user agent.
## :icon{name="i-noto-rocket"} Features
### useRobotsRule()
A new Nuxt composable [useRobotsRule()](https://nuxtseo.com/docs/robots/api/use-robots-rule) has been introduced to allow you to access and modify the current robots rule for the current route.
```ts
import { useRobotsRule } from '#imports'
const rule = useRobotsRule()
// Ref<'noindex, nofollow'>
```
### Robots.txt validation :icon{name="i-noto-check-mark-button"}
When loading in a `robots.txt` file, the module will now validate the file to ensure each of the `disallow` and `allow` paths are valid.
This will help you avoid errors from Google Search Console and Google Lighthouse.
### Default Meta Tag :icon{name="i-noto-relieved-face"}
The module now adds the meta tag to your site by default. The composable and component helpers used to
define this previously have been deprecated.
```html
<meta name="robots" content="index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1">
```
Adding the meta tag is important for pages that are prerendered as the `X-Robots-Tag` header is not available.
You can opt out with `metaTag: false`.
### I18n Integration :icon{name="i-noto-globe-with-meridians"}
The module now integrates with [nuxt-i18n](https://i18n.nuxtjs.org/){rel="nofollow"}.
This will automatically re-configure your `allow` and `disallow` rules to include the locale prefix if you have
omitted it.
```ts
export default defineNuxtConfig({
robots: {
allow: ['/about'],
disallow: ['/admin'],
},
i18n: {
strategy: 'prefix_except_default',
locales: ['en', 'fr'],
defaultLocale: 'en',
},
})
```
```txt
# robots.txt
User-agent: *
Allow: /about
Allow: /fr/about
Disallow: /admin
Disallow: /fr/admin
```
Learn more on the [I18n Integration](https://nuxtseo.com/docs/robots/guides/i18n) docs.
### Nuxt Content Integration :icon{name="i-noto-books"}
The module now integrates with [@nuxt/content](https://content.nuxt.com/){rel="nofollow"}, allowing you to use the `robots` frontmatter key within your markdown files.
```md
---
robots: false
---
```
Learn more on the [Nuxt Content](https://nuxtseo.com/docs/robots/guides/content) docs.
### Nuxt DevTools Integration :icon{name="i-noto-hammer"}
The module now integrates with [Nuxt DevTools](https://devtools.nuxt.com/){rel="nofollow"}.
You can visit the Robots tab and see if the current route is indexable, and if not, why.
{height="409" loading="lazy"}
### New Nitro Hook and Util Exports :icon{name="i-noto-hook"}
This version introduces the new Nitro hook `robots:config`, which lets you
override the robots.txt data as a JavaScript object, instead of a string.
Likewise, you can now re-use any of the internal functions to parse, validate and generate
robots.txt data using the `@nuxtjs/robots/util` export.
```ts
import { parseRobotsTxt } from '@nuxtjs/robots/util'
import { defu } from 'defu'

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('robots:config', async (ctx) => {
    if (ctx.context === 'robots.txt') {
      const customRobotsTxt = await $fetch<string>('https://example.com/robots.txt')
      const parsed = parseRobotsTxt(customRobotsTxt)
      // merge the fetched groups with the existing config
      ctx.groups = defu(ctx.groups, parsed.groups)
    }
  })
})
```
## Breaking Changes
### Site Config
The deprecated Nuxt Config site config keys have been removed: `host`, `siteUrl`, `indexable`.
You will need to configure these using [Site Config](https://nuxtseo.com/docs/site-config/getting-started/background).
```diff
export default defineNuxtConfig({
robots: {
- indexable: false,
},
site: {
+ indexable: false,
}
})
```
## :icon{name="i-noto-warning"} Deprecations
### `defineRobotMeta()` and `<RobotMeta>`
Because the module now uses a default meta tag, the `defineRobotMeta()` function and `<RobotMeta>` component are deprecated.
You should remove this from your code.
### `index` Route Rule
The `index` route rule has been deprecated in favour of the `robots` rule. This provides
less ambiguity and more control over the rule.
```diff
export default defineNuxtConfig({
routeRules: {
'/admin': {
- index: false,
+ robots: false,
}
}
})
```
# v3.0.0
## Features 🚀
### Robots.txt Config
The [robots.txt standard](https://developers.google.com/search/docs/crawling-indexing/robots/create-robots-txt){rel="nofollow"} is important for search engines
to understand which pages to crawl and index.
To match the standard more closely, Nuxt Robots now allows you to configure the module by using a `robots.txt` file.
```bash [Example File Structure]
public/_robots.txt
```
This file will be parsed and used to configure the module.
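For example, a minimal file might look like:
```robots-txt [public/_robots.txt]
User-agent: *
Disallow: /admin
Allow: /admin/login
```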
If you need programmatic control, you can still configure the module using [nuxt.config.ts](https://nuxtseo.com/docs/robots/guides/nuxt-config),
[Route Rules](https://nuxtseo.com/docs/robots/guides/route-rules) and [Nitro hooks](https://nuxtseo.com/docs/robots/nitro-api/nitro-hooks).
Read more at [Robots.txt Config](https://nuxtseo.com/docs/robots/guides/robots-txt).
### New Config: `groups`
- Type: `{ userAgent: string[]; allow: string[]; disallow: string[]; comments: string[] }[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `[]`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Define more granular rules for the robots.txt. Each group is a set of rules for specific user agent(s).
```ts
export default defineNuxtConfig({
robots: {
groups: [
{
userAgent: ['AdsBot-Google-Mobile', 'AdsBot-Google-Mobile-Apps'],
disallow: ['/admin'],
allow: ['/admin/login'],
comments: 'Allow Google AdsBot to index the login page but not the admin pages'
},
]
}
})
```
### New Config: `blockNonSeoBots`
- Type: `boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `false`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Blocks some non-SEO bots from crawling your site. This is not a replacement for a full-blown bot management solution, but it can help to reduce the load on your server.
See [const.ts](https://github.com/nuxt-modules/robots/blob/main/src/const.ts#L6){rel="nofollow"} for the list of bots that are blocked.
```ts
export default defineNuxtConfig({
robots: {
blockNonSeoBots: true
}
})
```
### Improved header / meta tag integration
Previously, only routes added to the `routeRules` would be used to display the `X-Robots-Tag` header and the `<meta name="robots">` tag.
This has been changed to include all `disallow` paths for the `*` user-agent by default.
### New Config: `credits`
- Type: `boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `true`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Control the module credit comment in the generated robots.txt file.
```txt
# START nuxt-robots (indexable) <- credits
...
# END nuxt-robots <- credits
```
```ts
export default defineNuxtConfig({
robots: {
credits: false
}
})
```
### New Config: `debug`
- Type: `boolean`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
- Default: `false`{className="language-ts shiki shiki-themes github-light github-light material-theme-palenight" lang="ts"}
Enables debug logs.
```ts
export default defineNuxtConfig({
robots: {
debug: true
}
})
```
## Deprecations
### Nuxt Site Config Integration
The module now integrates with the [nuxt-site-config](https://github.com/harlan-zw/nuxt-site-config){rel="nofollow"} module.
The `siteUrl` and `indexable` configs are now deprecated, but will still work.
For most sites, you won't need to provide any further configuration; everything will just work.
If you need to modify the default config, the easiest way is to do so through the `site` config.
```ts
export default defineNuxtConfig({
site: {
url: 'https://example.com',
indexable: true
}
})
```
# Nuxt Sitemap
## Why use Nuxt Sitemap?
Nuxt Sitemap is a module for generating XML Sitemaps with minimal configuration and best practice defaults.
The core output of this module is a [sitemap.xml](https://developers.google.com/search/docs/crawling-indexing/sitemaps/overview){rel="nofollow"} file, which is used by search engines to understand the structure of your site and index it more effectively.
While it's not required to have a sitemap, it can be a powerful tool in getting your content indexed more frequently and more accurately,
especially for larger sites or sites with complex structures.
While it's simple to create your own sitemap.xml file, it can be time-consuming to keep it up-to-date with your site's content
and easy to miss best practices.
Nuxt Sitemap automatically generates the sitemap for you based on your site's content, including lastmod, image discovery and more.
Ready to get started? Check out the [installation guide](https://nuxtseo.com/docs/sitemap/getting-started/installation) or learn more on the [Controlling Web Crawlers](https://nuxtseo.com/learn/controlling-crawlers){rel="nofollow"} guide.
## Features
- 🌴 Single /sitemap.xml or multiple /posts-sitemap.xml, /pages-sitemap.xml
- 📊 Fetch your sitemap URLs from anywhere
- 😌 Automatic lastmod, image discovery and best practice sitemaps
- 🔄 SWR caching, route rules support
- 🎨 Debug using the Nuxt DevTools integration or the XML Stylesheet
- 🤝 Integrates seamlessly with Nuxt I18n and Nuxt Content
# Install Nuxt Sitemap
## Setup Module
Want to know why you might need this module? Check out the [introduction](https://nuxtseo.com/docs/sitemap/getting-started/introduction).
To get started with Nuxt Sitemap, you need to install the dependency and add it to your Nuxt config.
::module-install{name="@nuxtjs/sitemap"}
::
## Verifying Installation
After you've set up the module with the minimal config, you should be able to visit [`/sitemap.xml`](http://localhost:3000/sitemap.xml){rel="nofollow"} to see the generated sitemap.
You may notice that the URLs point to your `localhost` domain; this is to make navigating your local site easier, and it will be updated when you deploy your site.
All pages present are discovered from your [Application Sources](https://nuxtseo.com/docs/sitemap/getting-started/data-sources); for dynamic URLs, see [Dynamic URLs](https://nuxtseo.com/docs/sitemap/guides/dynamic-urls).
You can debug this further in Nuxt DevTools under the Sitemap tab.
## Configuration
At a minimum, the module requires a Site URL to be set; this ensures only your canonical domain is used for
the sitemap. A site name can also be provided to customize the sitemap [stylesheet](https://nuxtseo.com/docs/sitemap/guides/customising-ui).
::site-config-quick-setup
::
To ensure search engines find your sitemap, you will need to add it to your robots.txt. It's recommended to use the [Nuxt Robots](https://nuxtseo.com/docs/robots/getting-started/installation) module for this.
::module-card{className="w-1/2" slug="robots"}
::
Every site is different and will require its own unique configuration. To give you a head start:
- [Dynamic URL Endpoint](https://nuxtseo.com/docs/sitemap/guides/dynamic-urls) - If you have dynamic URLs you need to add to the sitemap, you can use a runtime API endpoint. For example, if you're
generating your site from a CMS.
- [Multi Sitemaps](https://nuxtseo.com/docs/sitemap/guides/multi-sitemaps) - If you have 10k+ pages, you may want to split your sitemap into multiple files
so that search engines can process them more efficiently.
You do not need to worry about any further configuration in most cases; check the [best practices](https://nuxtseo.com/docs/sitemap/guides/best-practices) guide for more information.
## Next Steps
You've successfully installed Nuxt Sitemap and configured it for your project.
Documentation is provided for module integrations; check them out if you're using them.
- [Nuxt I18n](https://nuxtseo.com/docs/sitemap/guides/i18n) - Automatic locale sitemaps.
- [Nuxt Content](https://nuxtseo.com/docs/sitemap/guides/content) - Configure your sitemap entry from your markdown.
Once you're ready to go live, check out the [Submitting Your Sitemap](https://nuxtseo.com/docs/sitemap/guides/submitting-sitemap) guide.
# Troubleshooting Nuxt Sitemap
## Debugging
### Nuxt DevTools
The best tool for debugging is the Nuxt DevTools integration with Nuxt Sitemap.
This will show you all of your sitemaps and the sources used to generate them.
### Debug Endpoint
If you prefer looking at the raw data, you can use the debug endpoint. This is only enabled in
development unless you enable the `debug` option.
Visit `/__sitemap__/debug.json` in your browser; this is the same data used by Nuxt DevTools.
### Debugging Prerendering
If you're trying to debug the prerendered sitemap, you should enable the `debug` option and check your output
for the file `.output/public/__sitemap__/debug.json`.
## Submitting an Issue
When submitting an issue, it's important to provide as much information as possible.
The easiest way to do this is to create a minimal reproduction using the Stackblitz playgrounds:
- [Dynamic URLs](https://stackblitz.com/edit/nuxt-starter-dyraxc?file=server%2Fapi%2F_sitemap-urls.ts){rel="nofollow"}
- [i18n](https://stackblitz.com/edit/nuxt-starter-jwuie4?file=app.vue){rel="nofollow"}
- [Manual Chunking](https://stackblitz.com/edit/nuxt-starter-umyso3?file=nuxt.config.ts){rel="nofollow"}
- [Nuxt Content Document Driven](https://stackblitz.com/edit/nuxt-starter-a5qk3s?file=nuxt.config.ts){rel="nofollow"}
## Troubleshooting FAQ
### Why is my browser not rendering the XML properly?
When the [XSL](https://nuxtseo.com/docs/sitemap/guides/customising-ui#disabling-the-xls) (XML Stylesheet) is disabled, the XML will
be rendered directly by the browser.
If you have an i18n integration, it's likely your sitemap will look like raw text instead of XML.
This is a [browser bug](https://bugs.chromium.org/p/chromium/issues/detail?id=580033){rel="nofollow"} in parsing the `xhtml` namespace, which is required to add localised URLs to your sitemap.
There is no workaround besides re-enabling the XSL.
### Google Search Console shows Error when submitting my Sitemap?
Seeing "Error" when submitting a new sitemap is common. This is because Google previously
crawled your site for a sitemap and found nothing.
If your sitemap is [validating](https://www.xml-sitemaps.com/validate-xml-sitemap.html){rel="nofollow"} correctly, then you're all set.
It's best to wait a few days and check back. In nearly all cases, the error will resolve itself.
# Data Sources
Every URL within your sitemap will belong to a source.
A source will either be a User source or an Application source.
## Application Sources
Application sources are sources generated automatically from your app. These are in place to make using the module more
convenient but may get in the way.
- `nuxt:pages` - Statically analysed pages of your application
- `nuxt:prerender` - URLs that were prerendered
- `nuxt:route-rules` - URLs from your route rules
- `@nuxtjs/i18n:pages` - When using the `pages` config with Nuxt I18n. See [Nuxt I18n](https://nuxtseo.com/docs/sitemap/integrations/i18n) for more details.
- `@nuxt/content:document-driven` - When using Document Driven mode. See [Nuxt Content](https://nuxtseo.com/docs/sitemap/integrations/content) for more details.
### Disabling application sources
You can opt out of application sources individually or all of them by using the `excludeAppSources` config.
::code-group
```ts [Disable all app sources]
export default defineNuxtConfig({
sitemap: {
// exclude all app sources
excludeAppSources: true,
}
})
```
```ts [Disable pages app source]
export default defineNuxtConfig({
sitemap: {
// exclude static pages
excludeAppSources: ['nuxt:pages'],
}
})
```
::
## User Sources
When working with a site that has dynamic routes and isn't using [prerendering discovery](https://nuxtseo.com/docs/sitemap/guides/prerendering), you will need to provide your own sources.
For this, you have a few options:
### 1. Build time: provide a `urls` function
If you only need your sitemap data to be current at build time, then providing a `urls` function is the simplest way to provide your own sources.
This function will only be run when the sitemap is generated.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
sitemap: {
urls: async () => {
// fetch your URLs from a database or other source
const urls = await fetch('https://example.com/api/urls').then(res => res.json())
return urls
}
}
})
```
### 2. Runtime: provide a `sources` array
If you need your sitemap data to always be up-to-date at runtime, you will need to provide your own sources explicitly.
A source is a URL that will be fetched and is expected to return either JSON with an array of Sitemap URL entries or
an XML sitemap.
::code-group
```ts [Single Sitemap]
export default defineNuxtConfig({
sitemap: {
sources: [
// create our own API endpoints
'/api/__sitemap__/urls',
// use a static remote file
'https://cdn.example.com/my-urls.json',
// hit a remote API with credentials
['https://api.example.com/pages/urls', { headers: { Authorization: 'Bearer ' } }]
]
}
})
```
```ts [Multiple Sitemaps]
export default defineNuxtConfig({
sitemap: {
sitemaps: {
foo: {
sources: [
'/api/__sitemap__/urls/foo',
]
},
bar: {
sources: [
'/api/__sitemap__/urls/bar',
]
}
}
}
})
```
::
You can provide any number of sources; however, you should consider your own caching strategy.
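For example, a sketch of a source endpoint that caches its upstream fetch, assuming a hypothetical `https://api.example.com/pages` API:
```ts [server/api/__sitemap__/urls.ts]
import { defineSitemapEventHandler } from '#imports'
import type { SitemapUrlInput } from '#sitemap/types'

// cache the upstream fetch for an hour so each sitemap render doesn't hit the API
const getPages = defineCachedFunction(
  async () => await $fetch<{ path: string }[]>('https://api.example.com/pages'),
  { maxAge: 60 * 60, name: 'sitemap-pages', getKey: () => 'all' },
)

export default defineSitemapEventHandler(async () => {
  const pages = await getPages()
  return pages.map(p => ({ loc: p.path })) satisfies SitemapUrlInput[]
})
```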
You can learn more about data sources on the [Dynamic URLs](https://nuxtseo.com/docs/sitemap/guides/dynamic-urls) guide.
# Multi Sitemaps
## Introduction
The module will generate a single `/sitemap.xml` file by default; for most websites this is perfect.
However, for larger sites with thousands of URLs, introducing multiple sitemaps can make
debugging your sitemap easier and also help search engines crawl your site more efficiently.
## Enabling Multiple Sitemaps
If you want to generate multiple sitemaps, you can use the `sitemaps` option, which has two options:
- `object` - Enables manual chunking. Recommended when you have clear content types (pages, posts, etc) or less than 1000 URLs
- `true` - Enables automatic chunking. Recommended when you have more than 1000 URLs and don't have clear content types.
::code-group
```ts [Manual Chunking]
export default defineNuxtConfig({
sitemap: {
// manually chunk into multiple sitemaps
sitemaps: {
posts: {
include: [
'/blog/**',
],
// example: give blog posts slightly higher priority (this is optional)
defaults: { priority: 0.7 },
},
pages: {
exclude: [
'/blog/**',
]
},
},
},
})
```
```ts [Automatic Chunking]
export default defineNuxtConfig({
sitemap: {
sitemaps: true,
// modify the chunk size if you need
defaultSitemapsChunkSize: 2000 // default 1000
},
})
```
::
### Sitemap Prefix
You'll notice that all multi-sitemaps appear under the `/__sitemap__/` prefix by default. If you want to change this, you can use the `sitemapsPathPrefix` option
combined with changing the sitemap key to what you'd like the name to be.
```ts
export default defineNuxtConfig({
sitemap: {
sitemapsPathPrefix: '/', // or false
sitemaps: {
// will be available at /sitemap-foo.xml
['sitemap-foo']: {
// ...
}
}
}
})
```
## Manual Chunking
When manually chunking your sitemaps, there are multiple ways of handling it depending on what you need.
In either case, if you'd like to provide defaults for URLs within the sitemap you can use the `defaults` option.
- `defaults` - Sitemap default values such as `lastmod`, `changefreq`, or `priority`
```ts
export default defineNuxtConfig({
sitemap: {
sitemaps: {
posts: {
// give posts a slightly higher priority
defaults: { priority: 0.7 },
},
},
},
})
```
### Extending App Sources
When your single sitemap contains all the correct URLs and you just want to split them up into separate sitemaps,
you can extend the [app sources](https://nuxtseo.com/docs/sitemap/getting-started/data-sources) and [filter the URLs](https://nuxtseo.com/docs/sitemap/guides/filtering-urls).
- `includeAppSources` - Uses [app sources](https://nuxtseo.com/docs/sitemap/getting-started/data-sources)
- `includeGlobalSources` - Uses [global sources](https://nuxtseo.com/docs/sitemap/getting-started/data-sources)
- `include` - Array of glob patterns to include in the sitemap
- `exclude` - Array of glob patterns to exclude from the sitemap
```ts [nuxt.config.ts]
export default defineNuxtConfig({
sitemap: {
sitemaps: {
pages: {
// extend the nuxt:pages app source
includeAppSources: true,
// filter the URLs to only include pages
exclude: ['/blog/**'],
},
posts: {
// extend the nuxt:pages app source
includeAppSources: true,
// filter the URLs to only include pages
include: ['/blog/**'],
},
},
},
})
```
If you're using a global `sitemap.sources` and need to filter URLs further, then you can use the `_sitemap` key.
- `_sitemap` - The name of the sitemap that the URL should be included in
::code-group
```ts [nuxt.config.ts]
export default defineNuxtConfig({
sitemap: {
sources: [
'/api/sitemap-urls'
],
sitemaps: {
pages: {
includeGlobalSources: true,
includeAppSources: true,
exclude: ['/**']
// ...
},
},
},
})
```
```ts [server/api/sitemap-urls.ts]
export default defineSitemapEventHandler(() => {
return [
{
loc: '/about-us',
// will end up in the pages sitemap
_sitemap: 'pages',
}
]
})
```
::
### Managing Sources
If you need to fetch the URLs from an endpoint for a sitemap, then you will need to use either the `urls` or `sources` option.
- `urls` - Array of static URLs to include in the sitemap. You should avoid using this option if you have a lot of URLs
- `sources` - Custom endpoint to fetch [dynamic URLs](https://nuxtseo.com/docs/sitemap/guides/dynamic-urls) from as JSON or XML.
```ts
export default defineNuxtConfig({
sitemap: {
sitemaps: {
posts: {
urls() {
// resolved when the sitemap is shown
return ['/foo', '/bar']
},
sources: [
'/api/sitemap-urls'
]
},
},
},
})
```
### Linking External Sitemaps
This mode also provides a special key called `index` which allows you to easily extend the index sitemap. This can be useful
for adding an external sitemap.
```ts
export default defineNuxtConfig({
  sitemap: {
    sitemaps: {
      // generated sitemaps
      posts: {
        // ...
      },
      pages: {
        // ...
      },
      // extending the index sitemap with an external sitemap
      index: [
        { sitemap: 'https://www.google.com/sitemap-pages.xml' }
      ]
    }
  }
})
```
## Automatic Chunking
This will automatically chunk your sitemap into multiple sitemaps, using the `0-sitemap.xml`, `1-sitemap.xml` naming convention.
It will be chunked based on the `defaultSitemapsChunkSize` option, which defaults to 1000 URLs per sitemap.
You should avoid using this if you have less than 1000 URLs.
```ts
export default defineNuxtConfig({
sitemap: {
// automatically chunk into multiple sitemaps
sitemaps: true,
},
})
```
# I18n
## Introduction
Out of the box, the module integrates with [@nuxtjs/i18n](https://i18n.nuxtjs.org/){rel="nofollow"} and [nuxt-i18n-micro](https://github.com/s00d/nuxt-i18n-micro){rel="nofollow"}
without any extra configuration.
However, I18n is tricky; you may need to tinker with a few options to get the best results.
## I18n Modes
### Automatic I18n Multi Sitemap
When certain conditions are met, the sitemap module will automatically generate a sitemap for each locale:
- If you're not using the `no_prefix` strategy
- Or if you're using [Different Domains](https://i18n.nuxtjs.org/docs/v7/different-domains){rel="nofollow"},
- And you haven't configured the `sitemaps` option
This looks like:
```shell
> ./sitemap_index.xml
> ./en-sitemap.xml
> ./fr-sitemap.xml
# ...etc
```
These sitemaps will include [app sources](https://nuxtseo.com/docs/sitemap/getting-started/data-sources). The `nuxt:pages` source
will automatically determine the correct `alternatives` for your pages.
If you need to opt-out of app sources, use `excludeAppSources: true`.
### I18n Pages
If you have enabled `i18n.pages` in your i18n configuration, then the sitemap module will automatically generate a single sitemap
using the configuration.
This sitemap will not include [app sources](https://nuxtseo.com/docs/sitemap/getting-started/data-sources).
You can add additional URLs using `sources`.
## Dynamic URLs with i18n
To simplify the sitemap output, any dynamic URLs you provided will not have i18n data and will exist
only within the default locale sitemap.
To help you with this, the module provides two options: `_i18nTransform` and `_sitemap`.
### `_i18nTransform`
If you want the module to convert a single URL into all of its i18n variants, you can provide the `_i18nTransform: true` option.
```ts [server/api/__sitemap__/urls.ts]
export default defineSitemapEventHandler(() => {
return [
{
loc: '/about-us',
// will be transformed into /en/about-us and /fr/about-us
_i18nTransform: true,
}
]
})
```
### `_sitemap`
Alternatively, you can specify which locale sitemap you want to include the URL in using `_sitemap`.
```ts [server/api/__sitemap__/urls.ts]
export default defineSitemapEventHandler(() => {
return [
{
loc: '/about-us',
_sitemap: 'en',
}
]
})
```
## Debugging Hreflang
By default, the XML stylesheet doesn't show you the hreflang tags. You will need to view the page source to see them.
Don't worry, these are still visible to search engines.
If you'd like to visually see the hreflang tag counts, you can [Customise the UI](https://nuxtseo.com/docs/sitemap/guides/customising-ui).
```ts
export default defineNuxtConfig({
sitemap: {
xslColumns: [
{ label: 'URL', width: '50%' },
{ label: 'Last Modified', select: 'sitemap:lastmod', width: '25%' },
{ label: 'Hreflangs', select: 'count(xhtml:link)', width: '25%' },
],
}
})
```
# Dynamic URL Endpoint
## Introduction
In some instances, like using a CMS, you may need to implement an endpoint to make
all of your site URLs visible to the module.
To do this, you can provide [user sources](https://nuxtseo.com/docs/sitemap/getting-started/data-sources) to the module. These can either be
a JSON response or an XML sitemap.
## XML Sitemap
If you're providing an XML sitemap, you can use the `sources` option to provide the URL to the sitemap.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
sitemap: {
sources: [
'https://example.com/sitemap.xml',
]
}
})
```
## Dynamic URLs from an external API
When you have a source that is a third-party API returning dynamic URLs, then you have a couple of options.
1. Add the endpoint directly to the `sources` config - Good for endpoints that return the data already in the correct format
2. Make an API endpoint that returns the URLs - Required when you have to transform the data or implement your own caching
### 1. Using sources config
If the URL you're fetching from requires any extra headers to work, you can provide a source as an array, where the second
option is the fetch options.
```ts [nuxt.config.ts]
export default defineNuxtConfig({
sitemap: {
sources: [
// fetch from an unauthenticated endpoint
'https://api.example.com/pages/urls',
// fetch from an authenticated endpoint
[
'https://authenticated-api.example.com/pages/urls',
{ headers: { Authorization: 'Bearer ' } } // fetch options
]
]
}
})
```
### 2. Create your own endpoint
1. Create a new API endpoint
In this code snippet we're using the `defineSitemapEventHandler` helper to create a new API endpoint.
This is a simple wrapper for `defineEventHandler` that forces the TypeScript types.
::code-group
```ts [Simple]
import { defineSitemapEventHandler } from '#imports'
import type { SitemapUrlInput } from '#sitemap/types'
// server/api/__sitemap__/urls.ts
export default defineSitemapEventHandler(() => {
return [
{
loc: '/about-us',
// will end up in the pages sitemap
_sitemap: 'pages',
},
] satisfies SitemapUrlInput[]
})
```
```ts [Multiple Sitemaps]
import { defineSitemapEventHandler } from '#imports'
import type { SitemapUrl } from '#sitemap/types'
export default defineSitemapEventHandler(async () => {
const [
posts,
pages,
] = await Promise.all([
$fetch<{ path: string }[]>('https://api.example.com/posts')
.then(posts => posts.map(p => ({
loc: p.path,
// make sure the post ends up in the posts sitemap
_sitemap: 'posts',
} satisfies SitemapUrl))),
$fetch<{ path: string }[]>('https://api.example.com/pages')
.then(pages => pages.map(p => ({
loc: p.path,
// make sure the page ends up in the pages sitemap
_sitemap: 'pages',
} satisfies SitemapUrl))),
])
return [...posts, ...pages]
})
```
::
Having issues with the `defineSitemapEventHandler` types? Make sure you have a `server/tsconfig.json`!
2. Add the endpoint to your `nuxt.config.ts`
::code-group
```ts [Single Sitemap Sources]
export default defineNuxtConfig({
sitemap: {
sources: [
'/api/__sitemap__/urls',
]
}
})
```
```ts [Multi Sitemap Sources]
export default defineNuxtConfig({
sitemap: {
sitemaps: {
posts: {
sources: [
'/api/__sitemap__/urls/posts',
]
}
}
}
})
```
::
# Images, Videos, News
## Introduction
The `image`, `video` and `news` namespaces are added to your sitemap by default, allowing you to configure
images, videos and news for your sitemap entries.
When prerendering your app, it's possible for the generated sitemap to automatically infer images and videos from your pages.
## Sitemap Images
To add images to your sitemap, you can use the `images` property on the sitemap entry.
You can learn more about images in sitemaps on the [Google documentation](https://developers.google.com/search/docs/advanced/sitemaps/image-sitemaps){rel="nofollow"}.
```ts
export interface ImageEntry {
  loc: string
  caption?: string
  geoLocation?: string
  title?: string
  license?: string
}
```
You can implement this as follows:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
sitemap: {
urls: [
{
loc: '/blog/my-post',
images: [
{
loc: 'https://example.com/image.jpg',
caption: 'My image caption',
geoLocation: 'My image geo location',
title: 'My image title',
license: 'My image license',
}
]
}
]
}
})
```
### Automatic Image Discovery
The module can discover images in your page and add them to your sitemap automatically.
For this to work:
- The page *must* be prerendered. These images will not be shown in development or if the page is not prerendered.
- You must wrap your page content with a `<main>` tag; avoid wrapping shared layouts that include duplicate images.
## Videos
To add videos to your sitemap, you can use the `videos` property on the sitemap entry.
The TypeScript interface for videos is as follows:
```ts
export interface VideoEntry {
title: string
thumbnail_loc: string | URL
description: string
content_loc?: string | URL
player_loc?: string | URL
duration?: number
expiration_date?: Date | string
rating?: number
view_count?: number
publication_date?: Date | string
family_friendly?: 'yes' | 'no' | boolean
restriction?: Restriction
platform?: Platform
price?: ({
price?: number | string
currency?: string
type?: 'rent' | 'purchase' | 'package' | 'subscription'
})[]
requires_subscription?: 'yes' | 'no' | boolean
uploader?: {
uploader: string
info?: string | URL
}
live?: 'yes' | 'no' | boolean
tag?: string | string[]
}
```
You can learn more about videos in sitemaps on the [Google documentation](https://developers.google.com/search/docs/advanced/sitemaps/video-sitemaps){rel="nofollow"}.
You can implement this as follows:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
sitemap: {
urls: [
{
loc: '/blog/my-post',
videos: [
{
title: 'My video title',
thumbnail_loc: 'https://example.com/video.jpg',
description: 'My video description',
content_loc: 'https://example.com/video.mp4',
player_loc: 'https://example.com/video.mp4',
duration: 600,
expiration_date: '2021-01-01',
rating: 4.2,
view_count: 1000,
publication_date: '2021-01-01',
family_friendly: true,
restriction: {
relationship: 'allow',
country: 'US',
},
platform: {
relationship: 'allow',
platform: 'web',
date: '2021-01-01',
},
price: [
{
price: 1.99,
currency: 'USD',
type: 'rent',
}
],
requires_subscription: true,
uploader: {
uploader: 'My video uploader',
info: 'https://example.com/uploader',
},
live: true,
tag: ['tag1', 'tag2'],
}
]
}
]
}
})
```
### Automatic Video Discovery
Like automatic image discovery, you can opt in to automatic video discovery by including `<video>` markup within your `<main>` tag.
You are also required to provide a title and description for your video; this can be done using the `data-title` and `data-description` attributes.
::code-block
```html [Simple]
<!-- a minimal sketch: the src and text values are placeholders -->
<main>
  <video controls src="https://example.com/video.mp4" data-title="My video title" data-description="My video description"></video>
</main>
```
```html [Full]