Nuxt Robots is a module for configuring the robots crawling your site with minimal config and best practice defaults.
The core feature of the module is controlling how your pages are indexed, using:

- the `<meta name="robots" content="index">` meta tag
- the `X-Robots-Tag` HTTP header

New to robots or SEO? Check out the Controlling Web Crawlers guide to learn more about why you might need these features.
While it's simple to create your own robots.txt file, the module also makes sure your non-production environments are not indexed. This is important to avoid duplicate content issues and to stop search engines from serving your development or staging content to users.
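For example, in a non-production environment (development, staging, previews) the served robots.txt will typically block all crawling, along these lines (illustrative output):

```txt
# Non-production environment: block everything (illustrative)
User-agent: *
Disallow: /
```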
The module also acts as an integration point for other modules. For example, Nuxt Sitemap uses the robots rules to keep non-indexable pages out of your sitemap.
Ready to get started? Check out the installation guide.
Nuxt Robots manages how robots crawl your site, with minimal config and best-practice defaults.
Configuring the rules is as simple as adding a production robots.txt file to your project.
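For instance, a plain robots.txt with your production rules is all the module needs. The `public/_robots.txt` location below is an assumption for illustration; check the module docs for where the file is picked up:

```txt
# public/_robots.txt (location assumed for illustration)
User-agent: *
Disallow: /admin

Sitemap: https://example.com/sitemap.xml
```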
Ensures pages that should not be indexed are not indexed, using:

- the `X-Robots-Tag` HTTP header
- the `<meta name="robots" ...>` meta tag

Both are enabled by default.
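On a page marked as non-indexable, the rendered output looks roughly like this (illustrative; the equivalent `X-Robots-Tag: noindex, nofollow` response header is sent as well):

```html
<meta name="robots" content="noindex, nofollow">
```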
Detect and classify bots with server-side header analysis and optional client-side browser fingerprinting.
Identify search engines, social media crawlers, AI bots, automation tools, and security scanners to optimize your application for both human users and automated agents.
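A minimal sketch of consuming the result inside a component's `<script setup lang="ts">` block, assuming a composable named `useBotDetection()` whose return value includes a reactive `isBot`. Both the name and the shape are assumptions, so confirm them against the module's bot detection docs:

```ts
// Hypothetical usage sketch: the composable name and return shape are
// assumptions, not a confirmed API. Check the module docs for the real signature.
const { isBot } = useBotDetection()

// e.g. hide human-only UI (a live-chat widget) from automated agents
const showChatWidget = computed(() => !isBot.value)
```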
The module uses Nuxt Site Config to determine whether the site is in production mode.
It disables indexing for non-production environments, avoiding duplicate content issues.
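If you need to override that behaviour, for example on a staging site you do want indexed, the environment can be forced through your site config. A sketch, assuming the `indexable` key from Nuxt Site Config:

```ts
// nuxt.config.ts - force indexing on despite a non-production environment.
// The `site.indexable` key comes from Nuxt Site Config; treat the exact name
// as an assumption and confirm it against its docs.
export default defineNuxtConfig({
  modules: ['@nuxtjs/robots'],
  site: {
    indexable: true,
  },
})
```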
Use route rules to easily target subsets of your site. When you need even more control, use the runtime Nitro hooks to dynamically configure your robots rules.
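A sketch of the route-rules approach, marking whole sections of the site as non-indexable (the exact `robots` rule values accepted may differ; check the module docs):

```ts
// nuxt.config.ts - block crawling/indexing for subsets of the site via route rules.
export default defineNuxtConfig({
  modules: ['@nuxtjs/robots'],
  routeRules: {
    // Disallow the admin area entirely.
    '/admin/**': { robots: false },
    // Or pass an explicit robots rule string for finer control.
    '/internal/**': { robots: 'noindex, nofollow' },
  },
})
```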
The module will automatically fix any non-localised paths within your allow and disallow rules, expanding them to cover each of your locales.
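As a sketch, assuming an `@nuxtjs/i18n` setup with `en` and `fr` prefixes (the exact expansion depends on your locale strategy):

```ts
// nuxt.config.ts - a single non-localised rule (illustrative).
export default defineNuxtConfig({
  modules: ['@nuxtjs/i18n', '@nuxtjs/robots'],
  robots: {
    disallow: ['/account'],
  },
})
// Expected robots.txt output (sketch):
// Disallow: /account
// Disallow: /en/account
// Disallow: /fr/account
```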