---
title: "Nuxt Config"
description: "Learn how to configure Nuxt Robots using nuxt.config."
canonical_url: "https://nuxtseo.com/docs/robots/api/config"
last_updated: "2026-05-13T18:55:19.564Z"
---

## `enabled: boolean`

- Default: `true`

Conditionally toggle the module.
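
For example, you might only enable the module for production builds (the condition below is illustrative):

```ts
export default defineNuxtConfig({
  robots: {
    // only enable the module in production builds
    enabled: process.env.NODE_ENV === 'production'
  }
})
```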

## `allow: string[]`

- Default: `[]`

Allow paths to be indexed for the `*` user-agent (all robots).

## `disallow: string[]`

- Default: `[]`

Disallow paths from being indexed for the `*` user-agent (all robots).
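
For example, you might block a section of the site for all robots while still allowing a specific path within it (paths are illustrative):

```ts
export default defineNuxtConfig({
  robots: {
    // applies to the `*` user-agent (all robots)
    disallow: ['/secret', '/admin'],
    allow: ['/admin/login']
  }
})
```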

## `header: boolean`

- Default: `true`

Whether an `X-Robots-Tag` header should be added to the response.

## `metaTag: boolean`

- Default: `true`

Whether to add a `<meta name="robots" ...>` tag to the `<head>` of each page.
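
For example, if you prefer to manage robots rules yourself, you could turn off both the header and the meta tag:

```ts
export default defineNuxtConfig({
  robots: {
    header: false,
    metaTag: false
  }
})
```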

## `groups: RobotsGroupInput[]`

- Default: `[]`

Define more granular rules for the robots.txt file. Each group is a set of rules applied to specific user agent(s).

```ts twoslash
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        userAgent: ['AdsBot-Google-Mobile', 'AdsBot-Google-Mobile-Apps'],
        disallow: ['/admin'],
        allow: ['/admin/login'],
        contentUsage: { 'bots': 'y', 'train-ai': 'n' },
        contentSignal: { 'ai-train': 'no', 'search': 'yes' },
        comment: 'Allow Google AdsBot to access the login page but not other admin pages'
      },
    ]
  }
})
```

### Group Configuration Options

Each group object supports the following properties:

- `userAgent?: string | string[]` - The user agent(s) to apply rules to. Defaults to `['*']`
- `disallow?: string | string[]` - Paths to disallow for the user agent(s)
- `allow?: string | string[]` - Paths to allow for the user agent(s)
- `contentUsage?: string | string[] | Partial<ContentUsagePreferences>` - IETF Content-Usage directives for AI preferences. Valid categories: `bots`, `train-ai`, `ai-output`, `search`. Values: `y`/`n`. Use object format for type safety (see [AI Directives guide](/docs/robots/guides/ai-directives))
- `contentSignal?: string | string[] | Partial<ContentSignalPreferences>` - Cloudflare Content-Signal directives for AI preferences. Valid categories: `search`, `ai-input`, `ai-train`. Values: `yes`/`no`. Use object format for type safety (see [AI Directives guide](/docs/robots/guides/ai-directives))
- `comment?: string | string[]` - Comments to include in the robots.txt file

## `autoI18n: false | AutoI18nConfig`

- Default: `undefined`

Override the auto i18n configuration.
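
For example, if you want to opt out of the automatic i18n handling entirely, it can be set to `false`:

```ts
export default defineNuxtConfig({
  robots: {
    autoI18n: false
  }
})
```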

## `sitemap: MaybeArray<string>`

- Default: `[]`

The sitemap URL(s) for the site. If you have multiple sitemaps, you can provide an array of URLs.

You must either define the runtime config `siteUrl` or provide the sitemap as absolute URLs.

```ts
export default defineNuxtConfig({
  robots: {
    sitemap: [
      '/sitemap-one.xml',
      '/sitemap-two.xml',
    ],
  },
})
```
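
If no site URL is configured, you can provide absolute sitemap URLs instead (the URL below is illustrative):

```ts
export default defineNuxtConfig({
  robots: {
    sitemap: 'https://example.com/sitemap.xml',
  },
})
```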

## `robotsEnabledValue: string`

- Default: `'index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1'`

The value to use when the page is indexable.

## `robotsDisabledValue: string`

- Default: `'noindex, nofollow'`

The value to use when the page is not indexable.
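
For example, you can override both values; the directives below are illustrative:

```ts
export default defineNuxtConfig({
  robots: {
    // used for indexable pages
    robotsEnabledValue: 'index, follow',
    // used for non-indexable pages
    robotsDisabledValue: 'noindex, nofollow, noarchive'
  }
})
```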

## `mergeWithRobotsTxtPath: boolean | string`

- Default: `true`

Specify a robots.txt path to merge the config from, relative to the root directory.

When set to `true`, the default path of `<publicDir>/robots.txt` will be used.

When set to `false`, no merging will occur.
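
For example, to merge rules from a custom file (the file path below is hypothetical):

```ts
export default defineNuxtConfig({
  robots: {
    // relative to the project root directory
    mergeWithRobotsTxtPath: 'config/robots-base.txt'
  }
})
```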

## `blockNonSeoBots: boolean`

- Default: `false`

Blocks some non-SEO bots from crawling your site. This is not a replacement for a full-blown bot management solution, but it can help to reduce the load on your server.

See [const.ts](https://github.com/nuxt-modules/robots/blob/main/src/const.ts) for the list of bots that are blocked.

```ts twoslash
export default defineNuxtConfig({
  robots: {
    blockNonSeoBots: true
  }
})
```

## `blockAiBots: boolean`

- Default: `false`

Blocks AI crawlers from crawling your site. This adds a rule group disallowing `/` for known AI bots defined in the `AiBots` constant.

```ts twoslash
export default defineNuxtConfig({
  robots: {
    blockAiBots: true
  }
})
```

## `robotsTxt: boolean`

- Default: `true`

Whether to generate a `robots.txt` file. Useful to disable when your app is deployed under a base URL, since robots.txt must be served from the site root.
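
For example, to skip generating the file entirely:

```ts
export default defineNuxtConfig({
  robots: {
    robotsTxt: false
  }
})
```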

## `cacheControl: string | false`

- Default: `'max-age=14400, must-revalidate'`

Configure the `Cache-Control` header for the robots.txt file. By default it is cached for 4 hours and must be revalidated.

Providing `false` will set the header to `no-store`.

```ts twoslash [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    cacheControl: 'max-age=14400, must-revalidate'
  }
})
```

## `disableNuxtContentIntegration: boolean`

- Default: `undefined`

Whether to disable the [Nuxt Content Integration](/docs/robots/advanced/content).
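
For example, if you use Nuxt Content but don't want this module to hook into it, the integration can be turned off explicitly:

```ts
export default defineNuxtConfig({
  robots: {
    disableNuxtContentIntegration: true
  }
})
```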

## `debug: boolean`

- Default: `false`

Enables debug logs and a debug endpoint.
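
For example, you might enable it only while developing locally (the condition is illustrative):

```ts
export default defineNuxtConfig({
  robots: {
    // avoid enabling debug output in production
    debug: process.env.NODE_ENV !== 'production'
  }
})
```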

## `credits: boolean`

- Default: `true`

Control the module credit comment in the generated robots.txt file.

```robots-txt [robots.txt]
# START nuxt-robots (indexable) <- credits
# ...
# END nuxt-robots <- credits
```

```ts twoslash [nuxt.config.ts]
export default defineNuxtConfig({
  robots: {
    credits: false
  }
})
```

## `disallowNonIndexableRoutes: boolean`

**⚠️ Deprecated**: If you don't want certain routes to be accessible to crawlers, explicitly disallow them in your robots.txt rules instead.

- Default: `false`

Whether route rules that disallow indexing should also be added to the `/robots.txt` file.

## `botDetection: boolean`

- Default: `true`

Enables the bot detection plugin. When disabled, no bot detection is performed.
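
For example, to opt out of bot detection entirely:

```ts
export default defineNuxtConfig({
  robots: {
    botDetection: false
  }
})
```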
