Disable Page Indexing
Introduction
As not all sites are the same, it's important for you to have a flexible way to disable indexing for specific pages.
The best options to choose from are:
- Robots.txt - Great for blocking robots from accessing specific pages that haven't been indexed yet.
- useRobotsRule - Controls the <meta name="robots" content="..."> meta tag and X-Robots-Tag HTTP header. Useful for dynamic pages where you may not know at build time whether they should be indexed, or when you need to remove pages from search results. For example, a user profile page that should only be indexed if the user has made their profile public.
If you're still unsure about which option to choose, make sure you read the Conquering Web Crawlers guide.
Route Rules and Nuxt Config are also available for more complex scenarios.
Robots.txt
Please follow the Config using Robots.txt guide to configure your robots.txt file.
You'll be able to use Disallow rules to block crawlers from specific pages:
User-agent: *
Disallow: /my-page
Disallow: /secret/*
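If you'd rather keep these rules alongside the rest of your configuration, the same paths can be expressed through the module options covered in the Nuxt Config section below. A minimal sketch, assuming the disallow option accepts the same path patterns as robots.txt:
export default defineNuxtConfig({
  robots: {
    disallow: ['/my-page', '/secret/*'],
  }
})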
useRobotsRule
The useRobotsRule composable provides a reactive way to access and set the robots rule at runtime.
import { useRobotsRule } from '#imports'
// access the robots rule for the current route and update it at runtime
const rule = useRobotsRule()
rule.value = 'noindex, nofollow'
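For the user profile example mentioned above, the rule can be set conditionally. A minimal sketch, where /api/profile and its isPublic flag are hypothetical stand-ins for your own data layer:
import { useRobotsRule } from '#imports'
// hypothetical endpoint returning the current user's profile
const { data: profile } = await useFetch('/api/profile')
const rule = useRobotsRule()
// only allow indexing when the profile has been made public
rule.value = profile.value?.isPublic ? 'index, follow' : 'noindex, nofollow'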
Route Rules
If you have a static page that you want to disable indexing for, you can use defineRouteRules (requires enabling the experimental inlineRouteRules option in your nuxt.config).
This is a build-time configuration that will generate the appropriate robots rules for the page.
<script lang="ts" setup>
defineRouteRules({
  robots: false,
})
</script>
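A minimal sketch of enabling the experimental flag, assuming a recent Nuxt 3 setup:
export default defineNuxtConfig({
  experimental: {
    // required for defineRouteRules to be picked up
    inlineRouteRules: true,
  }
})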
For more complex scenarios see the Route Rules guide.
Nuxt Config
If you need finer programmatic control, you can configure the module using nuxt.config.
export default defineNuxtConfig({
  robots: {
    disallow: ['/secret', '/admin'],
  }
})
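Route rules can also be defined centrally here instead of inline in a page. A sketch using Nuxt's routeRules with the same robots: false rule shown earlier; the /secret/** pattern is illustrative:
export default defineNuxtConfig({
  routeRules: {
    // apply noindex to everything under /secret
    '/secret/**': { robots: false },
  }
})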
See the Nuxt Config guide for more details.