noindex prevents a page from appearing in search results; nofollow stops link equity from flowing through that page's links.
Meta robots tags give page-level control over search engine indexing. While robots.txt provides site-wide rules, the robots meta tag lets you handle individual pages differently.
Use them to block filtered pages from indexing, prevent duplicate content issues on pagination, or control snippet appearance. For non-HTML files (PDFs, images), use X-Robots-Tag HTTP headers instead. For site-wide rules, stick with robots.txt.
Add the robots meta tag using Unhead in your Vue components. The tag must be server-side rendered, or crawlers won't see it.
// Block indexing but keep following links
useSeoMeta({
  robots: 'noindex, follow'
})

// Allow indexing with relaxed snippet and preview limits
useSeoMeta({
  robots: 'index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1'
})
Vue apps need manual Unhead installation. Nuxt includes it by default.
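For a standalone Vue 3 app, here is a minimal setup sketch, assuming @unhead/vue is installed (exact import paths vary between Unhead major versions):

// main.ts: register Unhead so useSeoMeta/useHead work in components
import { createApp } from 'vue'
import { createHead } from '@unhead/vue'
import App from './App.vue'

const app = createApp(App)
app.use(createHead())
app.mount('#app')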
The robots meta tag goes in your page's <head> and tells crawlers what to do with that specific page (Google's documentation):
<meta name="robots" content="index, follow">
Without a robots meta tag, crawlers assume index, follow by default (MDN reference).
noindex — Exclude from search results
nofollow — Don't follow links on this page
noarchive — Prevent cached copies
nosnippet — No description snippet in results
max-snippet:[number] — Limit snippet to N characters
max-image-preview:[setting] — Control image preview size (none, standard, large)
max-video-preview:[number] — Limit video preview to N seconds

Combine multiple directives with commas. When directives conflict, the more restrictive one applies.
The tag must be server-side rendered. Google can execute JavaScript, but does so in a second rendering wave, which can delay or prevent proper indexing. SSR puts the tag in the HTML response immediately.
If you block the page in robots.txt, crawlers never see the meta tag. Don't combine robots.txt blocking with noindex—the noindex won't work.
Target specific crawlers by replacing robots with a user agent token like googlebot. More specific tags override general ones.
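For example, a sketch using useHead (auto-imported in Nuxt, importable from @unhead/vue otherwise), which accepts arbitrary named meta tags:

// General rule for all crawlers, tightened for Google specifically
useHead({
  meta: [
    { name: 'robots', content: 'index, follow' },
    // The more specific googlebot token overrides the general robots tag for Google
    { name: 'googlebot', content: 'noindex' }
  ]
})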
Use noindex, follow when you want to block indexing but preserve link equity flow. The page won't rank, but links on it still count (Wolf of SEO guide).
Use noindex, nofollow for pages with no valuable links—login forms, checkout steps, truly private content.
// Filter pages, thank you pages, test pages
useSeoMeta({
  robots: 'noindex, follow'
})

// Login pages, admin sections
useSeoMeta({
  robots: 'noindex, nofollow'
})
Note that follow doesn't force crawling—it just allows it. Pages may still be crawled less over time if they remain noindexed.
Internal search results and filter combinations create duplicate content. Google recommends blocking these with noindex or robots.txt:
// Statically noindex an internal search results page
useSeoMeta({
  robots: 'noindex, follow'
})

// Or decide dynamically: noindex any URL carrying filter query params
const { query } = useRoute()

useSeoMeta({
  robots: Object.keys(query).length > 0 ? 'noindex, follow' : 'index, follow'
})
Combine with canonical tags pointing to the main category page—but don't use both noindex and canonical on the same page (sends conflicting signals).
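A sketch of the canonical side of that setup for a filtered page that stays indexable (the URL is a hypothetical example); note there is no noindex alongside it:

// Point filtered variations at the main category page
useHead({
  link: [
    { rel: 'canonical', href: 'https://example.com/category/shoes' }
  ]
})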
Google no longer uses rel="next" and rel="prev" tags (Google documentation). Instead, link to next/previous pages with regular <a> tags and let Google discover the sequence naturally.
You don't need to noindex pagination pages—Google treats them as part of the sequence. Only noindex if the paginated content is duplicate or low-value.
Limit snippet length and prevent caching for dynamic user profiles:
useSeoMeta({
  robots: 'index, follow, noarchive, max-snippet:50'
})
The meta robots tag only works in HTML. For PDFs, images, videos, or other files, use the X-Robots-Tag HTTP header instead (MDN reference):
X-Robots-Tag: noindex, nofollow
X-Robots-Tag supports the same directives as the meta tag. It's also useful for bulk operations—you can apply it to entire directories or file patterns via server configuration.
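In Nuxt, one way to do this is route rules (a sketch assuming Nuxt 3; the /downloads path is a hypothetical example):

// nuxt.config.ts: send X-Robots-Tag on every response under /downloads
export default defineNuxtConfig({
  routeRules: {
    '/downloads/**': {
      headers: { 'X-Robots-Tag': 'noindex, nofollow' }
    }
  }
})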
Don't use both meta robots and X-Robots-Tag on the same resource—easy to create conflicting instructions.
Use Google Search Console's URL Inspection tool to verify your robots meta tag.
If a noindexed page still appears in search results, it hasn't been recrawled yet. Depending on the page's importance, it can take months for Google to revisit and apply the directive. Request a recrawl via the URL Inspection tool.
Monitor "Excluded by 'noindex' tag" reports in Search Console. Sudden spikes indicate accidental noindexing of important pages.
If you're using Nuxt, check out Nuxt SEO which handles much of this automatically.
Learn more about meta robots tags in Nuxt →
What's the difference between robots.txt Disallow and meta noindex?
Both prevent indexing - Disallow only prevents crawling, not indexing; Google can still index uncrawled URLs from external links
Disallow blocks crawling, noindex blocks indexing - Correct! Disallow stops the crawler from accessing the page; noindex tells the crawler not to add it to search results
Noindex is site-wide, Disallow is per-page - Opposite; robots.txt is site-wide, meta robots is per-page