Meta Robots Tag in Vue & Nuxt
Introduction
Meta robots tags control how search engines handle individual pages. While robots.txt sets site-wide crawling rules, meta robots tags give you precise page-level control over how a page is indexed and presented in search results.
✅ Good for:
- Page-specific indexing control (e.g., search results pages)
- Dynamic content handling (e.g., filtered products)
- Setting snippet lengths and preview sizes
- Scheduling content removal from search
- Protecting sensitive sections when combined with authentication
❌ Don't use for:
- Non-HTML resources (use the X-Robots-Tag header instead; see the sketch after this list)
- Site-wide rules (use robots.txt instead)
- Blocking specific crawlers (use robots.txt instead)
- URL management (use canonical URLs or redirects instead)
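For non-HTML resources such as PDFs, the same directives can be delivered as an HTTP header. A minimal sketch using Nuxt's Nitro route rules, assuming Nuxt 3 and a hypothetical /downloads/** path:

// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    // Send an X-Robots-Tag header for file downloads (hypothetical path)
    '/downloads/**': {
      headers: { 'X-Robots-Tag': 'noindex' }
    }
  }
})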
Quick Setup
Add meta robots tags to your Vue pages using Unhead composables:
// Prevent indexing but keep following links
useSeoMeta({
  robots: 'noindex, follow'
})

// Allow indexing with generous snippet and preview sizes
useSeoMeta({
  robots: 'index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1'
})

// Remove the page from search results after a given date
useSeoMeta({
  robots: `index, follow, unavailable_after: ${new Date('2024-12-31').toISOString()}`
})
If you're using Nuxt, these composables are available by default. For Vue applications, you'll need to install Unhead manually.
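For reference, a minimal sketch of that manual setup in a client-only Vue 3 app (the createHead import path follows recent @unhead/vue releases; older versions export it from the package root):

// main.ts
import { createApp } from 'vue'
import { createHead } from '@unhead/vue/client'
import App from './App.vue'

const app = createApp(App)

// Register the head manager so useSeoMeta() and useHead() work in components
app.use(createHead())

app.mount('#app')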
Understanding Meta Robots
Meta robots tags consist of directives that tell search engines how to handle your page. They're implemented as a meta tag in your page's head:
<meta name="robots" content="index, follow">
Directives Explained
- index / noindex: Allow/prevent the page in search results
- follow / nofollow: Allow/prevent following links on the page
- noarchive: Prevent cached copies
- nosnippet: Prevent search result snippets
- max-snippet: Control snippet length
- max-image-preview: Control image preview size
- max-video-preview: Control video preview length
- unavailable_after: Schedule a search removal date
For a complete list of directives and their behaviors, see Google's meta robots documentation.
Important Notes
- Must be server-side rendered for crawler effectiveness
- Must be in the page's <head>
- Specific crawlers can be targeted (e.g., googlebot instead of robots); see the sketch after this list
- Multiple directives can be combined with commas
- More specific tags override general ones
- Consider combining with canonical URLs for duplicate content
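For example, a crawler-specific tag can sit alongside the generic one. A minimal sketch using useHead, which accepts arbitrary meta names:

useHead({
  meta: [
    // Generic rule for all crawlers
    { name: 'robots', content: 'index, follow' },
    // More specific rule that only Googlebot will honor
    { name: 'googlebot', content: 'noindex' }
  ]
})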
Common Patterns
Block Search Results Pages
// pages/search.vue
useSeoMeta({
  robots: 'noindex, follow'
})

// Canonical is a link tag, so it is set with useHead rather than useSeoMeta
useHead({
  link: [{ rel: 'canonical', href: 'https://mysite.com/search' }] // point to the main search page
})
Filter and Pagination Pages
// pages/products/[category].vue
const route = useRoute()
const category = route.params.category

useSeoMeta({
  // Block indexing if filters are applied
  robots: Object.keys(route.query).length > 0 ? 'noindex, follow' : 'index, follow'
})

// Point the canonical link at the unfiltered category page
useHead({
  link: [{ rel: 'canonical', href: `https://mysite.com/products/${category}` }]
})
See handling pagination in Vue for more comprehensive pagination strategies.
Temporary Content
// pages/sales/[campaign].vue
const endDate = new Date('2024-12-31')
useSeoMeta({
robots: `index, follow, unavailable_after: ${endDate.toISOString()}`
})
For permanent content changes, consider using HTTP redirects instead.
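In Nuxt, a permanent move can be declared with a Nitro route rule. A minimal sketch with hypothetical paths:

// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    // Permanently redirect a retired campaign page to the sales index
    '/sales/summer-2023': { redirect: { to: '/sales', statusCode: 301 } }
  }
})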
User-Generated Content
// pages/user/[id]/profile.vue
useSeoMeta({
// Prevent caching and limit snippets
robots: 'index, follow, noarchive, max-snippet:50'
})
For sensitive user content, review our security guide.
Testing
Using Google Search Console
- Use the URL Inspection tool
- Check "Indexing allowed?" status
- Verify that crawling is allowed
- Review any indexing issues
See Google's guide on robots.txt testing for detailed steps.
Important Checks
- Confirm SSR implementation (why this matters); see the sketch after this list
- Verify placement in <head>
- Check directive syntax
- Test across different page types
- Monitor indexing status changes
- Verify interaction with other crawler controls
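To confirm SSR, fetch the raw HTML and check for the tag before any JavaScript runs. A minimal sketch assuming Node 18+ (global fetch) and a hypothetical URL:

// check-robots.mjs
const html = await (await fetch('https://mysite.com/search')).text()

// Logs the server-rendered tag, or undefined if it is only added client-side
console.log(html.match(/<meta[^>]*name="robots"[^>]*>/i)?.[0])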
Related
Core Concepts
- Understanding Web Crawlers - Complete guide to crawler control
- Securing Your Site From Crawlers - Protect sensitive content
Implementation Methods
- Robots.txt Guide - Site-wide crawler rules
- Canonical URLs - Managing duplicate content
- HTTP Redirects - Page relocation best practices
Additional Resources
- Sitemap Implementation - Help crawlers discover your content
- Google's Robots Meta Tag Documentation - Official guidelines