This guide will help you migrate from the Nuxt SEO v2 RC to the v2 stable release.
Please see the announcement post for details on the release.
If you get stuck during the migration or run into post-migration bugs, please get in touch!
## Nuxt Site Config

Nuxt Site Config is a module used internally by Nuxt Robots.

The major update to v3.0.0 shouldn't have any direct effect on your site; however, you may want to double-check the breaking changes.
## Nuxt SEO Experiments is now Nuxt SEO Utils

In moving to the stable release, the `nuxt-seo-experiments` module has been renamed to `nuxt-seo-utils`.

The original name, `nuxt-seo-experiments`, hinted that the features weren't stable and would eventually land in the Nuxt core. This is no longer the case, and the module has been renamed to reflect this.

With this rename, the module's scope has expanded to include functionality that the Nuxt SEO module previously provided:
- `useBreadcrumbItems()` composable
- `redirectToCanonicalSiteUrl`
- `fallbackTitle`
- `automaticDefaults`

As Nuxt SEO Utils shares the same config key as the Nuxt SEO module, no changes to your config are required. However, it's worth testing your site to ensure that everything is working as expected.
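As a sketch of what this looks like in practice, assuming the shared `seo` config key and the option names above carry over unchanged:

```ts
export default defineNuxtConfig({
  modules: ['nuxt-seo-utils'],
  seo: {
    // redirect requests that don't match the canonical site URL
    redirectToCanonicalSiteUrl: true,
    // fall back to a generated title when a page doesn't set one
    fallbackTitle: true,
    // apply sensible meta tag defaults automatically
    automaticDefaults: true
  }
})
```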
## Nuxt Sitemap

### `inferStaticPagesAsRoutes` config

If you previously set this value to `false`, you will need to update your config as shown below:
```diff
export default defineNuxtConfig({
  sitemap: {
-   inferStaticPagesAsRoutes: false,
+   excludeAppSources: ['pages', 'route-rules', 'prerender']
  }
})
```
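If you'd rather exclude all application sources instead of listing them individually, `excludeAppSources` also accepts `true` (a sketch; verify against the sitemap module's current options):

```ts
export default defineNuxtConfig({
  sitemap: {
    // exclude every automatic app source; only explicit sources remain
    excludeAppSources: true
  }
})
```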
### `dynamicUrlsApiEndpoint` config

The `sources` config supports multiple API endpoints and allows you to provide custom fetch options; use it instead.
```diff
export default defineNuxtConfig({
  sitemap: {
-   dynamicUrlsApiEndpoint: '/__sitemap/urls',
+   sources: ['/__sitemap/urls']
  }
})
```
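As a sketch of what `sources` supports, endpoints can be plain paths or tuples pairing a path with custom fetch options (the second endpoint and its auth header here are placeholders):

```ts
export default defineNuxtConfig({
  sitemap: {
    sources: [
      // a simple endpoint returning sitemap URLs
      '/__sitemap/urls',
      // an endpoint with custom fetch options, e.g. an auth header
      ['/api/sitemap-urls', { headers: { Authorization: 'Bearer <token>' } }]
    ]
  }
})
```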
### `cacheTtl` config

Please use `cacheMaxAgeSeconds` instead, as it's a clearer config key.
```diff
export default defineNuxtConfig({
  sitemap: {
-   cacheTtl: 10000,
+   cacheMaxAgeSeconds: 10000
  }
})
```
### `index` route rule / Nuxt Content support

If you were using `index: false` in either route rules or your Nuxt Content markdown files, you will need to update this to use the `robots` key.
```diff
export default defineNuxtConfig({
  routeRules: {
    // use the `robots` shortcut for simple rules
-   '/secret/**': { index: false },
+   '/secret/**': { robots: false },
  }
})
```
## Nuxt Robots

### `rules` config

Nuxt Robots v4 provided a backwards-compatible `rules` config. As it was deprecated, it is no longer supported. If you're using `rules`, you should migrate to the `groups` config or use a `robots.txt` file.
```diff
export default defineNuxtConfig({
  robots: {
-   rules: {},
+   groups: []
  }
})
```
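For reference, a minimal sketch of a populated `groups` config (the user agent and paths are placeholders):

```ts
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        userAgent: ['*'],
        // block a private section while allowing its login page
        disallow: ['/admin'],
        allow: ['/admin/login']
      }
    ]
  }
})
```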
### `defineRobotMeta` composable

This composable didn't do anything in v4, as the robots meta tag is enabled by default. If you'd like to control the robots meta tag rule, use the `useRobotsRule()` composable.
```diff
- defineRobotMeta(true)
+ useRobotsRule(true)
```
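`useRobotsRule()` can both read and set the rule for the current route; a minimal sketch of typical usage:

```ts
// in a page or component setup
const rule = useRobotsRule() // read the current robots rule

useRobotsRule('noindex, nofollow') // or set an explicit rule
```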
### `RobotMeta` component

This component was a simple wrapper around `defineRobotMeta`. You should use `useRobotsRule()` if you wish to control the robots rule.
### `index`, `indexable` config

When configuring robots using route rules or Nuxt Content, you could control the crawler behavior by providing `index` or `indexable` rules. These are no longer supported; you should use the `robots` key instead.
```diff
export default defineNuxtConfig({
  routeRules: {
    // use the `robots` shortcut for simple rules
-   '/secret/**': { index: false },
+   '/secret/**': { robots: false },
  }
})
```
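In Nuxt Content markdown files, the same `robots` key applies in your frontmatter (a sketch, assuming frontmatter support is enabled for your content):

```md
---
robots: false
---
```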
### `blockAiBots`

AI crawlers can be beneficial, as they can help users find your site, but some sites, such as educational platforms or those not interested in being indexed by AI crawlers, may want to block them. You can do so using the `blockAiBots` option.
```ts
export default defineNuxtConfig({
  robots: {
    blockAiBots: true
  }
})
```
This will block the following AI crawlers: `GPTBot`, `ChatGPT-User`, `Claude-Web`, `anthropic-ai`, `Applebot-Extended`, `Bytespider`, `CCBot`, `cohere-ai`, `Diffbot`, `FacebookBot`, `Google-Extended`, `ImagesiftBot`, `PerplexityBot`, `OmigiliBot`, `Omigili`.