Introduction

Meta robots tags control how search engines handle individual pages. While robots.txt sets site-wide crawling rules, meta robots tags give you precise page-level control over indexing and link-following behavior.

✅ Good for:

  • Page-specific indexing control (e.g., search results pages)
  • Dynamic content handling (e.g., filtered products)
  • Setting snippet lengths and preview sizes
  • Scheduling content removal from search
  • Protecting sensitive sections when combined with authentication

❌ Don't use for:

  • Site-wide crawl rules (use robots.txt instead)
  • Blocking crawlers from fetching a page (the page must still be crawled for the tag to be seen)
  • Securing private content (the tags are advisory; use authentication)

Quick Setup

Add meta robots tags to your Vue pages using Unhead composables:

```ts
useSeoMeta({
  robots: 'noindex, follow'
})
```

If you're using Nuxt, these composables are auto-imported by default. In a plain Vue application, you'll need to install Unhead manually, as sketched below.
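
If you're not on Nuxt, a minimal sketch of the manual setup (assuming the @unhead/vue package; depending on your Unhead version, createHead may be exported from @unhead/vue/client instead):

```ts
// main.ts
import { createHead } from '@unhead/vue'
import { createApp } from 'vue'
import App from './App.vue'

const app = createApp(App)

// Register the head manager so useSeoMeta() and useHead() work in components
app.use(createHead())

app.mount('#app')
```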

Understanding Meta Robots

Meta robots tags consist of directives that tell search engines how to handle your page. They're implemented as a meta tag in your page's head:

```html
<meta name="robots" content="index, follow">
```

Directives Explained

  • index/noindex: Allow or prevent the page from appearing in search results
  • follow/nofollow: Allow or prevent crawlers from following links on the page
  • noarchive: Prevent search engines from serving a cached copy
  • nosnippet: Prevent any text snippet in search results
  • max-snippet: Limit snippet length to a number of characters
  • max-image-preview: Set the maximum image preview size (none, standard, large)
  • max-video-preview: Limit video preview length to a number of seconds
  • unavailable_after: Remove the page from search results after a given date

For a complete list of directives and their behaviors, see Google's meta robots documentation.
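
Several directives can be combined into a single comma-separated value. A minimal sketch (the specific values are illustrative):

```ts
useSeoMeta({
  // Renders <meta name="robots" content="index, follow, max-snippet:120, max-image-preview:large, noarchive">
  robots: 'index, follow, max-snippet:120, max-image-preview:large, noarchive'
})
```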

Important Notes

  • Must be server-side rendered for crawler effectiveness
  • Must be in the page's <head>
  • Specific crawlers can be targeted (e.g., googlebot instead of robots; see the sketch after this list)
  • Multiple directives can be combined with commas
  • More specific tags override general ones
  • Consider combining with canonical URLs for duplicate content
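
For example, to target only Google's crawler you can emit a googlebot-specific tag. A minimal sketch using useHead, Unhead's generic escape hatch for arbitrary meta tags:

```ts
// Applies noindex only to Googlebot; other crawlers keep the default behavior
useHead({
  meta: [
    { name: 'googlebot', content: 'noindex' }
  ]
})
```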

Common Patterns

Block Search Results Pages

```ts
// pages/search.vue
useSeoMeta({
  robots: 'noindex, follow'
})

// canonical is a <link> tag rather than a meta tag, so set it via useHead
// Learn more about canonical URLs at /learn/controlling-crawlers/canonical-urls
useHead({
  link: [
    { rel: 'canonical', href: 'https://mysite.com/search' } // point to main search page
  ]
})
```

Filter and Pagination Pages

```ts
// pages/products/[category].vue
const route = useRoute()

useSeoMeta({
  // Block indexing if filters are applied
  robots: Object.keys(route.query).length > 0 ? 'noindex, follow' : 'index, follow'
})

// Point crawlers to the main category page as the canonical URL
useHead({
  link: [
    { rel: 'canonical', href: `https://mysite.com/products/${route.params.category}` }
  ]
})
```

See handling pagination in Vue for more comprehensive pagination strategies.

Temporary Content

```ts
// pages/sales/[campaign].vue
const endDate = new Date('2024-12-31')

useSeoMeta({
  // Google accepts RFC 822, RFC 850 and ISO 8601 dates for unavailable_after
  robots: `index, follow, unavailable_after: ${endDate.toISOString()}`
})
```

For permanent content changes, consider using HTTP redirects instead.
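
If you're deploying with Nuxt, a redirect can be declared through route rules. A minimal sketch (the paths are illustrative):

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    // Permanently redirect a retired campaign page to the sales index
    '/sales/old-campaign': { redirect: { to: '/sales', statusCode: 301 } }
  }
})
```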

User-Generated Content

```ts
// pages/user/[id]/profile.vue
useSeoMeta({
  // Prevent caching and limit snippets
  robots: 'index, follow, noarchive, max-snippet:50'
})
```

For sensitive user content, review our security guide.

Testing

Using Google Search Console

  1. Use URL Inspection tool
  2. Check "Indexing allowed?" status
  3. Verify crawling allowed
  4. Review any indexing issues

See Google's guide on robots.txt testing for detailed steps.

Important Checks

  • Confirm SSR implementation (why this matters); see the check script after this list
  • Verify placement in <head>
  • Check directives syntax
  • Test across different page types
  • Monitor indexing status changes
  • Verify interaction with other crawler controls
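
A quick way to confirm the tag is server-rendered is to fetch the raw HTML and search for it, before any client-side JavaScript runs. A minimal sketch (assumes Node 18+ for the global fetch; the URL is a placeholder):

```ts
// check-robots.ts
const res = await fetch('https://mysite.com/search')
const html = await res.text()

// The tag must be present in the raw server response; if it only appears
// after hydration, crawlers may never see it
const match = html.match(/<meta[^>]*name="robots"[^>]*>/i)

console.log(match ? `Found: ${match[0]}` : 'No meta robots tag in server HTML')
```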
