# Disabling Page Indexing

Learn how to disable indexing for specific pages in your app.
When disabling indexing for specific pages, you have a few options.
## useRobotsRule

The useRobotsRule composable provides a reactive way to access and set the robots rule at runtime.

```ts
import { useRobotsRule } from '#imports'

// Reactive robots rule for the current request
const rule = useRobotsRule()
rule.value = 'noindex, nofollow'
```
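Because the rule is reactive, you can also set it conditionally at runtime. A minimal sketch, assuming a hypothetical /preview route prefix you want kept out of search results:

```ts
import { useRobotsRule, useRoute } from '#imports'

const rule = useRobotsRule()
const route = useRoute()

// Hypothetical: mark any /preview/* page as non-indexable
if (route.path.startsWith('/preview')) {
  rule.value = 'noindex, nofollow'
}
```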
## Route Rules

If you have a static page that you want to disable indexing for, you can use defineRouteRules (requires enabling the experimental inlineRouteRules option).

This is a build-time configuration that generates the appropriate rules in the /robots.txt file and is integrated with the Sitemap module.
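For reference, the experimental flag is enabled through your Nuxt config; a minimal sketch:

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  experimental: {
    // Required for defineRouteRules() to work inside page components
    inlineRouteRules: true,
  },
})
```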
pages/about.vue

```vue
<script lang="ts" setup>
defineRouteRules({
  robots: false,
})
</script>
```
For more complex scenarios, see the Route Rules guide.
## Robots.txt

To configure blocked URLs using a robots.txt file, use the Disallow directive.

```txt
# public/_robots.txt
User-agent: *
Disallow: /my-page
Disallow: /secret/*
```
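Standard robots.txt semantics apply, so you can pair Disallow with Allow to re-open a more specific path; most crawlers, including Googlebot, apply the most specific matching rule. A sketch with a hypothetical subpath:

```txt
# public/_robots.txt
User-agent: *
Disallow: /secret/*
# Hypothetical: keep one subpath crawlable despite the wildcard above
Allow: /secret/shared
```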
## Nuxt Config

If you need finer programmatic control, you can configure the module using your nuxt.config.

nuxt.config.ts

```ts
export default defineNuxtConfig({
  robots: {
    // Paths to disallow for all user agents
    disallow: ['/secret', '/admin'],
  },
})
```
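If you need different rules per crawler, the module also supports per-user-agent groups (assuming the groups option of the current Robots module API); a sketch:

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        // Hypothetical: block a single AI crawler from the entire site
        userAgent: ['GPTBot'],
        disallow: ['/'],
      },
    ],
  },
})
```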
See the Nuxt Config guide for more details.