Meta Robots Tags in Nuxt

Control page-level indexing with meta robots tags. Block search results pages, manage pagination, and prevent duplicate content in Nuxt apps.
Harlan Wilton · 9 min read
What you'll learn
  • noindex prevents indexing; nofollow stops link equity flowing through the page's links
  • Use X-Robots-Tag HTTP header for non-HTML files (PDFs, images)
  • Must be server-rendered—client-only meta tags may be missed by crawlers

Meta robots tags give page-level control over search engine indexing. While robots.txt provides site-wide rules, the robots meta tag lets you handle individual pages differently.

Use them to block filtered pages from indexing, prevent duplicate content issues on pagination, or control snippet appearance. For non-HTML files (PDFs, images) use X-Robots-Tag HTTP headers instead. For site-wide rules stick with robots.txt.

Setup

Add the robots meta tag using useSeoMeta() in your Nuxt pages. Nuxt server-renders these tags automatically—crawlers see them in the initial HTML response.

Block Indexing
useSeoMeta({
  robots: 'noindex, follow'
})
Control Snippets
useSeoMeta({
  robots: 'index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1'
})

How Meta Robots Tags Work

The robots meta tag goes in your page's <head> and tells crawlers what to do with that specific page (Google's documentation):

<meta name="robots" content="index, follow">

Without a robots meta tag, crawlers assume index, follow by default (MDN reference).

Common Directives

  • noindex — Exclude from search results
  • nofollow — Don't follow links on this page
  • noarchive — Prevent cached copies
  • nosnippet — No description snippet in results
  • max-snippet:[number] — Limit snippet to N characters
  • max-image-preview:[setting] — Control image preview size (none, standard, large)
  • max-video-preview:[number] — Limit video preview to N seconds

Combine multiple directives with commas. When directives conflict, the more restrictive one applies.
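For example, if two robots tags end up on the same page (say, one set in a layout and one in a page component), the stricter directive wins:

<!-- hypothetical conflict: noindex is more restrictive, so the page stays out of the index -->
<meta name="robots" content="index">
<meta name="robots" content="noindex">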

Critical Requirements

Must be server-side rendered. Google can execute JavaScript but does so in a second rendering wave, which can delay or prevent proper indexing. Nuxt's SSR puts the tag in the HTML response immediately.

If you block the page in robots.txt, crawlers never see the meta tag. Don't combine robots.txt blocking with noindex—the noindex won't work.

Target specific crawlers by replacing robots with a user agent token like googlebot. More specific tags override general ones.
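A minimal sketch of crawler-specific targeting, using useHead() since it accepts arbitrary meta names; the noindex value here is just an example:

useHead({
  meta: [
    // applies only to Googlebot; other crawlers still read the general robots tag
    { name: 'googlebot', content: 'noindex' }
  ]
})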

Noindex Follow vs Noindex Nofollow

Use noindex, follow when you want to block indexing but preserve link equity flow. The page won't rank, but links on it still count (Wolf of SEO guide).

Use noindex, nofollow for pages with no valuable links—login forms, checkout steps, truly private content.

Noindex Follow - Preserve Links
// Filter pages, thank you pages, test pages
useSeoMeta({
  robots: 'noindex, follow'
})
Noindex Nofollow - Block Everything
// Login pages, admin sections
useSeoMeta({
  robots: 'noindex, nofollow'
})

Note that follow doesn't force crawling—it just allows it. Pages may still be crawled less over time if they remain noindexed.

Common Use Cases

Search Results and Filtered Pages

Internal search results and filter combinations create duplicate content. Google recommends blocking these with noindex or robots.txt:

pages/search.vue
useSeoMeta({
  robots: 'noindex, follow'
})
pages/products/[category].vue
const route = useRoute()

useSeoMeta({
  // getter keeps the value in sync on client-side navigation
  robots: () => Object.keys(route.query).length > 0 ? 'noindex, follow' : 'index, follow'
})

Combine with canonical tags pointing to the main category page—but don't use both noindex and canonical on the same page (sends conflicting signals).
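A sketch of that split for the same category route (example.com is a placeholder domain): filtered URLs get noindex, and only the clean URL declares a canonical, so the two signals never share a page:

const route = useRoute()
const isFiltered = Object.keys(route.query).length > 0

useSeoMeta({
  robots: isFiltered ? 'noindex, follow' : 'index, follow'
})

// canonical only on the unfiltered category URL, never alongside noindex
if (!isFiltered) {
  useHead({
    link: [{ rel: 'canonical', href: `https://example.com/products/${route.params.category}` }]
  })
}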

Pagination

Google no longer uses rel="next" and rel="prev" tags (Google documentation). Instead, link to next/previous pages with regular <a> tags and let Google discover the sequence naturally.

You don't need to noindex pagination pages—Google treats them as part of the sequence. Only noindex if the paginated content is duplicate or low-value.
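A minimal sketch, assuming a hypothetical pages/blog/[page].vue route; NuxtLink renders plain <a> tags in the server response, which is all Google needs to walk the sequence:

pages/blog/[page].vue
<script setup lang="ts">
const route = useRoute()
const page = Number(route.params.page) || 1
</script>

<template>
  <nav>
    <!-- regular crawlable links; no rel="next"/"prev" required -->
    <NuxtLink v-if="page > 1" :to="`/blog/${page - 1}`">Previous</NuxtLink>
    <NuxtLink :to="`/blog/${page + 1}`">Next</NuxtLink>
  </nav>
</template>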

User-Generated Content

Limit snippet length and prevent caching for dynamic user profiles:

pages/user/[id]/profile.vue
useSeoMeta({
  robots: 'index, follow, noarchive, max-snippet:50'
})

X-Robots-Tag for Non-HTML Files

The meta robots tag only works in HTML. For PDFs, images, videos, or other files, use the X-Robots-Tag HTTP header instead (MDN reference):

X-Robots-Tag: noindex, nofollow

X-Robots-Tag supports the same directives as the meta tag. It's also useful for bulk operations—you can apply it to entire directories or file patterns via server configuration.
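In Nuxt, one way to apply the header in bulk is Nitro route rules; the /downloads/** pattern below is just an example path:

nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    // every response served under /downloads gets the header
    '/downloads/**': {
      headers: { 'X-Robots-Tag': 'noindex, nofollow' }
    }
  }
})

Note this only covers responses served through Nitro; files served directly by a CDN need the header configured there.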

Don't use both meta robots and X-Robots-Tag on the same resource; it's easy to create conflicting instructions.

Verification

Use Google Search Console's URL Inspection tool to verify your robots meta tag:

  1. Enter the URL
  2. Check "Indexing allowed?" status
  3. View the rendered HTML to confirm the tag appears (or check from the command line, as shown below)
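A quick way to sanity-check the raw server response yourself; the URLs are placeholders:

# the meta tag should be present in the initial HTML, not injected client-side
curl -s https://example.com/search | grep -i 'name="robots"'

# for non-HTML files, check the response header instead
curl -sI https://example.com/files/report.pdf | grep -i 'x-robots-tag'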

If a noindexed page still appears in search results, it hasn't been recrawled yet. Depending on the page's importance, it can take months for Google to revisit and apply the directive. Request a recrawl via the URL Inspection tool.

Monitor "Excluded by 'noindex' tag" reports in Search Console. Sudden spikes indicate accidental noindexing of important pages.