Config using Robots.txt
Introduction
The robots.txt standard is important for search engines to understand which pages to crawl and index on your site.
New to robots.txt? Check out the Robots.txt Guide to learn more.
To align more closely with the robots standard, Nuxt Robots recommends configuring the module using a robots.txt file.
If you need programmatic control, you can configure the module using nuxt.config.ts, Route Rules and Nitro hooks.
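As a rough sketch, programmatic configuration in nuxt.config.ts might look like the following; the `disallow` option and the `robots` route rule are shown as illustrative examples, so check the module's configuration reference for the exact keys your version supports:

```ts
export default defineNuxtConfig({
  robots: {
    // Block these paths for all user agents (illustrative values)
    disallow: ['/admin', '/internal']
  },
  routeRules: {
    // Route Rules can also control indexing on a per-route basis
    '/private/**': { robots: 'noindex, nofollow' }
  }
})
```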
Creating a robots.txt file
You can place your file in a number of locations; the easiest is to use public/_robots.txt.
Additionally, the following paths are supported by default:
```
# root directory
robots.txt
# asset folders
assets/
├── robots.txt
# pages folder
pages/
├── robots.txt
├── _dir/
│   └── robots.txt
# public folder
public/
├── _robots.txt
├── _dir/
│   └── robots.txt
```
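For example, a minimal robots.txt placed at public/_robots.txt (one of the supported paths above) could look like this; the path is a placeholder value:

```
User-agent: *
Disallow: /secret
```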
Custom paths
If you find this too restrictive, you can use the `mergeWithRobotsTxtPath` option to load your robots.txt file from a custom path:
```ts
export default defineNuxtConfig({
  robots: {
    mergeWithRobotsTxtPath: 'assets/custom/robots.txt'
  }
})
```
Parsed robots.txt
The following rules are parsed from your robots.txt file:
- User-agent - The user-agent to apply the rules to.
- Disallow - An array of paths to disallow for the user-agent.
- Allow - An array of paths to allow for the user-agent.
- Sitemap - An array of sitemap URLs to include in the generated sitemap.
This parsed data will be shown for environments that are indexable.
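As an illustration of how these directives map to the parsed rules, consider a robots.txt like the following; the paths and sitemap URL are placeholder values:

```
# Parsed as one group with User-agent, Disallow and Allow rules
User-agent: *
Disallow: /admin
Allow: /admin/login

# Collected into the Sitemap array
Sitemap: https://example.com/sitemap.xml
```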
Conflicting public/robots.txt
To ensure other modules can integrate with your generated robots file, you must not have a robots.txt file in your public folder.
If you do, it will be moved to public/_robots.txt, one of the supported paths above, so it can still be merged into the generated file.