# How to Use Meta Robots Tags in Vue

Control page-level indexing with meta robots tags. Block search results pages, manage pagination, and prevent duplicate content in Vue apps.

[Harlan Wilton](https://x.com/harlan-zw) · 9 min read · Published Nov 3, 2024 · Updated Jan 29, 2026

What you'll learn

- `noindex` prevents page from appearing in search results
- `data-nosnippet` blocks specific text from AI Overviews
- Use X-Robots-Tag HTTP header for non-HTML files (PDFs, images)

Meta robots tags give page-level control over search engine indexing. While robots.txt sets crawling rules for your whole site, the robots meta tag lets you handle individual pages differently.

Use them to block filtered pages from indexing, prevent duplicate content issues on pagination, or control snippet appearance. For non-HTML files (PDFs, images) use [X-Robots-Tag HTTP headers](https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag#xrobotstag) instead. For site-wide rules, stick with robots.txt.
## [Setup](#setup)

Add the robots meta tag using [Unhead](https://unhead.unjs.io/) in your Vue components. The tag must be server-side rendered or crawlers won't see it.

Block Indexing

```
<script setup lang="ts">
import { useSeoMeta } from '@unhead/vue'

useSeoMeta({
  robots: 'noindex, follow'
})
</script>
```

Control Snippets

```
<script setup lang="ts">
import { useSeoMeta } from '@unhead/vue'

useSeoMeta({
  robots: 'index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1'
})
</script>
```

Vue apps need [manual Unhead installation](https://unhead.unjs.io/guide/getting-started/installation). Nuxt includes it by default.

## [How Meta Robots Tags Work](#how-meta-robots-tags-work)

The robots meta tag goes in your page's `<head>` and tells crawlers what to do with that specific page ([Google's documentation](https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag)):

```
<meta name="robots" content="index, follow">
```

Without a robots meta tag, crawlers assume `index, follow` by default ([MDN reference](https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/meta/name/robots)).

### [Common Directives](#common-directives)

- `noindex`: Exclude from search results
- `nofollow`: Don't follow links on this page
- `noarchive`: Prevent cached copies
- `nosnippet`: No description snippet in results (also opts out of AI Overviews)
- `max-snippet:[number]`: Limit snippet to N characters
- `max-image-preview:[setting]`: Control image preview size (none, standard, large)
- `max-video-preview:[number]`: Limit video preview to N seconds

Combine multiple directives with commas. When directives conflict, the [more restrictive one applies](https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag).
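The "more restrictive wins" rule can be sketched in plain TypeScript. This `mergeRobots` helper is hypothetical (not part of Unhead or any library); it combines directive strings and keeps the stricter indexing and following values when they conflict:

```typescript
// Hypothetical helper: merge robots directive strings, keeping the more
// restrictive value when index/follow directives conflict.
function mergeRobots(...values: string[]): string {
  const directives = values.flatMap(v => v.split(',').map(d => d.trim()))
  const result: string[] = [
    directives.includes('noindex') ? 'noindex' : 'index',
    directives.includes('nofollow') ? 'nofollow' : 'follow',
  ]
  // pass through non-conflicting extras (noarchive, max-snippet, ...)
  for (const d of directives) {
    if (!['index', 'noindex', 'follow', 'nofollow'].includes(d) && !result.includes(d))
      result.push(d)
  }
  return result.join(', ')
}

// mergeRobots('index, follow', 'noindex') → 'noindex, follow'
```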

### [Critical Requirements](#critical-requirements)

The tag must be server-side rendered. Google can execute JavaScript but does so in a [second rendering wave](https://medium.com/@emironic/server-side-rendering-ssr-vs-client-side-rendering-csr-why-it-matters-more-than-ever-for-ai-4dbf65142abc), which can delay or prevent proper indexing. SSR puts the tag in the HTML response immediately.
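A quick way to confirm the tag is in the server response is to parse the raw HTML, for example the body of a `fetch()` of your page before any JavaScript runs. This is a minimal sketch using a naive regex (fine for spot checks, not robust HTML parsing):

```typescript
// Minimal sketch: pull the robots meta content out of raw server-rendered HTML.
// Regex-based, so it only handles the common single-tag case.
function extractRobots(html: string): string | null {
  const match = html.match(/<meta[^>]*name=["']robots["'][^>]*content=["']([^"']*)["']/i)
  return match ? match[1] : null
}

// extractRobots('<meta name="robots" content="noindex, follow">') → 'noindex, follow'
```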

If you block the page in robots.txt, crawlers [never see the meta tag](https://developers.google.com/search/docs/crawling-indexing/block-indexing). Don't combine robots.txt blocking with noindex; the noindex won't work.

Target specific crawlers by replacing `robots` with a user agent token like `googlebot`. More specific tags override general ones.
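For example, rendered HTML that applies a stricter rule to Google's crawler only might look like this (illustrative):

```
<!-- all crawlers: index normally -->
<meta name="robots" content="index, follow">
<!-- Googlebot only: don't index this page -->
<meta name="googlebot" content="noindex">
```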

## [Controlling AI Overviews](#controlling-ai-overviews)

As of 2026, there is no single meta tag to opt out of AI Overviews (SGE) without also affecting standard search snippets. However, you can control how your content appears in AI-generated answers.

### [`max-snippet`](#max-snippet)

Use `max-snippet` to limit the length of text Google can display. This indirectly limits how much text AI can "quote" or summarize.

```
<script setup lang="ts">
import { useSeoMeta } from '@unhead/vue'

// Limit snippets to 160 chars (reduces likelihood of long AI summaries)
useSeoMeta({
  robots: 'max-snippet:160'
})
</script>
```

### [`data-nosnippet`](#data-nosnippet)

For granular control, use the `data-nosnippet` HTML attribute to exclude specific text from snippets or AI answers. This is perfect for pricing tables, proprietary data, or internal codes.

```
<template>
  <div>
    <h1>Product Pricing</h1>
    <p>Our standard features are...</p>

    <!-- This table won't appear in Google snippets or AI summaries -->
    <div data-nosnippet>
      <table>...</table>
    </div>
  </div>
</template>
```

### [`nosnippet`](#nosnippet)

The nuclear option. `nosnippet` prevents **all** textual snippets, both in standard search results and AI Overviews. Use carefully as it can hurt CTR.
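Following the same `useSeoMeta` pattern as the snippets above, opting a page out entirely would look like:

```
<script setup lang="ts">
import { useSeoMeta } from '@unhead/vue'

// Blocks all text snippets, in both classic results and AI Overviews
useSeoMeta({
  robots: 'index, follow, nosnippet'
})
</script>
```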

## [Noindex Follow vs Noindex Nofollow](#noindex-follow-vs-noindex-nofollow)

Use `noindex, follow` when you want to block indexing but preserve link equity flow. The page won't rank, but links on it still count ([Google's documentation](https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag)).

Use `noindex, nofollow` for pages with no valuable links: login forms, checkout steps, truly private content.

Noindex Follow - Preserve Links

```
<script setup lang="ts">
// Filter pages, thank you pages, test pages
import { useSeoMeta } from '@unhead/vue'

useSeoMeta({
  robots: 'noindex, follow'
})
</script>
```

Noindex Nofollow - Block Everything

```
<script setup lang="ts">
// Login pages, admin sections
import { useSeoMeta } from '@unhead/vue'

useSeoMeta({
  robots: 'noindex, nofollow'
})
</script>
```

Note that `follow` doesn't force crawling; it only allows it. Pages may still be crawled less over time if they remain noindexed.

## [Common Use Cases](#common-use-cases)

### [Search Results and Filtered Pages](#search-results-and-filtered-pages)

Internal search results and filter combinations create endless low-value, near-duplicate URLs. [Google recommends](https://developers.google.com/search/docs/specialty/ecommerce/pagination-and-incremental-page-loading) blocking these with noindex or robots.txt:

pages/search.vue

```
<script setup lang="ts">
import { useSeoMeta } from '@unhead/vue'

useSeoMeta({
  robots: 'noindex, follow'
})
</script>
```

pages/products/[category].vue

```
<script setup lang="ts">
import { useSeoMeta } from '@unhead/vue'
import { useRoute } from 'vue-router'

const { query } = useRoute()

useSeoMeta({
  robots: Object.keys(query).length > 0 ? 'noindex, follow' : 'index, follow'
})
</script>
```

Combine with canonical URLs pointing to the main category page, but don't use both noindex and canonical on the same page (sends [conflicting signals](https://www.oncrawl.com/technical-seo/use-robots-txt-meta-robots-canonical-tags-correctly/)).
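If some query parameters should stay indexable (say, pagination) while filter parameters should not, the ternary in the `[category].vue` example can grow into a small helper. This `robotsForQuery` function is a hypothetical sketch with an assumed allowlist; you'd pass it `useRoute().query`:

```typescript
// Hypothetical helper: noindex any URL carrying query params outside an allowlist.
// `allowed` defaults to pagination only; adjust to your own URL scheme.
function robotsForQuery(
  query: Record<string, unknown>,
  allowed: string[] = ['page']
): string {
  const hasFilterParams = Object.keys(query).some(key => !allowed.includes(key))
  return hasFilterParams ? 'noindex, follow' : 'index, follow'
}

// robotsForQuery({ page: '2' }) → 'index, follow'
// robotsForQuery({ color: 'red' }) → 'noindex, follow'
```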

### [Pagination](#pagination)

Google no longer uses `rel="next"` and `rel="prev"` tags ([Google documentation](https://developers.google.com/search/docs/specialty/ecommerce/pagination-and-incremental-page-loading)). Instead, link to next/previous pages with regular `<a>` tags and let Google discover the sequence naturally.

You don't need to noindex pagination pages. Google treats them as part of the sequence. Only noindex if the paginated content is duplicate or low-value.

### [User-Generated Content](#user-generated-content)

Limit snippet length and prevent caching for dynamic user profiles:

pages/user/[id]/profile.vue

```
<script setup lang="ts">
import { useSeoMeta } from '@unhead/vue'

useSeoMeta({
  robots: 'index, follow, noarchive, max-snippet:50'
})
</script>
```

## [X-Robots-Tag for Non-HTML Files](#x-robots-tag-for-non-html-files)

The meta robots tag only works in HTML. For PDFs, images, videos, or other files, use the [X-Robots-Tag HTTP header](https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag#xrobotstag) instead ([MDN reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Robots-Tag)):

```
X-Robots-Tag: noindex, nofollow
```

X-Robots-Tag supports the same directives as the meta tag. It's also useful for bulk operations: you can apply it to entire directories or file patterns via server configuration.
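As one illustration, an Nginx rule applying the header to every PDF might look like this (assumes Nginx; other servers have equivalent mechanisms):

```
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```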

Don't use both meta robots and X-Robots-Tag on the same resource; it's easy to create [conflicting instructions](https://www.semrush.com/blog/robots-meta/).

## [Verification](#verification)

Use [Google Search Console's URL Inspection tool](https://support.google.com/webmasters/answer/9012289) to verify your robots meta tag:

1. Enter the URL
2. Check "Indexing allowed?" status
3. View the rendered HTML to confirm tag appears

If a noindexed page still appears in search results, it hasn't been recrawled yet. Depending on the page's importance, [it can take months](https://developers.google.com/search/docs/crawling-indexing/block-indexing) for Google to revisit and apply the directive. Request a recrawl via the URL Inspection tool.

Monitor "Excluded by 'noindex' tag" reports in Google Search Console. Sudden spikes indicate accidental noindexing of important pages.

## [Using Nuxt?](#using-nuxt)

If you're using Nuxt, check out Nuxt SEO, which handles much of this automatically.

---

Copyright © 2023-2026 Harlan Wilton - [MIT License](https://github.com/harlan-zw/nuxt-seo/blob/main/license) · [mdream](https://mdream.dev)