Google crawled your page but won't index it. It's one of the most common problems site owners run into, and the Page Indexing report in Google Search Console shows exactly which pages are affected and why.
Google Search Console's Page Indexing report shows seven main statuses:
Indexed: Page appears in Google's search index. This is what you want.
Discovered - currently not indexed: Google found your page but hasn't crawled it yet. The page sits in Google's queue. This is normal for new sites and low-priority pages.
Crawled - currently not indexed: Google crawled your page but chose not to index it. This means Google decided your content isn't worth showing in search results.
Blocked by robots.txt: Your robots.txt file blocks Google from crawling the page. Check your robots.txt configuration.
Excluded by 'noindex' tag: The page has a noindex meta tag or HTTP header. Remove it if you want the page indexed.
Duplicate without user-selected canonical: Google found multiple identical or near-identical pages and none declares a canonical. Set canonical URLs.
Soft 404: Page returns a 200 status code but looks like a 404 error to Google. Fix by returning proper 404 status codes for missing content.
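In Nuxt, soft 404s usually come from rendering an empty page with a 200 status when data is missing. A minimal sketch of the fix, assuming a blog post page that fetches from a hypothetical /api/posts endpoint:
<script setup>
const route = useRoute()
const { data: post } = await useFetch(`/api/posts/${route.params.slug}`)

// If the content doesn't exist, return a real 404 instead of an empty 200 page
if (!post.value) {
  throw createError({ statusCode: 404, statusMessage: 'Post not found', fatal: true })
}
</script>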
The "Crawled - currently not indexed" status means Google crawled your page but decided it wasn't worth indexing. Google doesn't explicitly state why, but five main causes cover most cases.
Google skips pages with little unique value. Product pages with only titles and prices, blog posts under 300 words, and auto-generated content typically get excluded.
Fix: Add substantial content. Write detailed product descriptions, expand short blog posts to 800+ words, include images and videos, answer user questions comprehensively.
Multiple pages with identical or near-identical content waste Google's crawl budget. Common culprits: paginated URLs without proper canonicals, URL parameters creating duplicate versions, tag/category archives with the same posts.
Fix: Implement canonical tags pointing to the primary version. Consolidate similar pages. Set the canonical link with useHead() in your Nuxt pages:
<script setup>
useHead({
  link: [
    { rel: 'canonical', href: 'https://yoursite.com/primary-page' }
  ]
})
</script>
Or set site-wide canonical logic in nuxt.config.ts:
export default defineNuxtConfig({
site: {
url: 'https://yoursite.com'
},
modules: ['@nuxtjs/seo']
})
The Nuxt SEO module automatically generates canonical URLs based on your site URL and current route.
Sites with thousands of nearly-identical pages (e.g., faceted search, filter combinations) trigger quality filters. Google picks representative pages and excludes the rest.
Fix: Use meta robots noindex on filter pages, parameter-based URLs, and search result pages. Consolidate similar content into fewer comprehensive pages.
<script setup>
const route = useRoute()

// Noindex pages with filter parameters
const shouldNoIndex = computed(() =>
  Boolean(route.query.color || route.query.size || route.query.sort)
)

useSeoMeta({
  // Pass a getter so the value stays reactive on client-side navigation
  robots: () => (shouldNoIndex.value ? 'noindex,follow' : 'index,follow')
})
</script>
New sites with few backlinks face stricter indexing thresholds. Google prioritizes crawling trusted sites.
Fix: Build backlinks through guest posting, create linkable assets (tools, research, guides), get listed in industry directories, promote content on social media. This takes months—be patient.
Pages without internal links from other pages on your site signal low importance to Google. Orphan pages lack link equity and often don't get indexed.
Fix: Add internal links from relevant pages. Include new pages in your main navigation or sidebar. Link from high-authority pages on your site.
<!-- Link to important pages from your main layout -->
<template>
<nav>
<NuxtLink to="/important-page">
Important Page
</NuxtLink>
</nav>
</template>
The "Discovered - currently not indexed" status means Google knows your page exists but hasn't crawled it yet. Four main causes exist.
Google takes weeks to crawl new sites. For brand-new domains, expect 2-4 weeks before regular crawling starts.
Fix: Wait. Submit your sitemap. Request indexing for critical pages via URL Inspection tool. Keep publishing content regularly.
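If your site doesn't have a sitemap yet, a minimal sketch using the Nuxt Sitemap module (assumes @nuxtjs/sitemap is installed; it serves the generated file at /sitemap.xml by default, which you can then submit in Search Console):
// nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/sitemap'],
  site: {
    // Used as the base URL for every sitemap entry
    url: 'https://yoursite.com'
  }
})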
Large sites (10,000+ pages) run into Google's crawl budget limits. Google won't crawl everything if your site has slow server responses, too many low-quality pages, or complex URL structures.
Fix: Optimize server response times (target under 200ms). Remove or noindex low-value pages. Fix redirect chains. Reduce duplicate content. Block unnecessary URLs in robots.txt:
// nuxt.config.ts
export default defineNuxtConfig({
modules: ['@nuxtjs/robots'],
robots: {
disallow: [
'/admin/',
'/search?*',
'/*?filter=*',
'/print-version/'
]
}
})
If your server takes over 500ms to respond, Google may crawl fewer pages.
Fix: Enable caching, use a CDN, optimize database queries, upgrade hosting. Monitor server response times in Search Console's Crawl Stats report.
Nuxt's SSR caching helps with this:
// nuxt.config.ts
export default defineNuxtConfig({
routeRules: {
// Cache static pages for 1 hour
'/blog/**': { swr: 3600 },
// Cache API responses
'/api/**': { cache: { maxAge: 60 } }
}
})
If pages exist only in your sitemap without internal links, Google considers them low priority.
Fix: Add internal links. Don't rely solely on sitemaps for discovery. Internal linking signals importance.
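Beyond navigation links, contextual links inside content work well. A minimal sketch of a related-links block (the data shape and URLs here are hypothetical; pull them from your CMS or content layer):
<script setup>
// Hypothetical related pages; in practice these come from your content source
const relatedPosts = [
  { title: 'Fixing Soft 404s in Nuxt', path: '/blog/fixing-soft-404s' },
  { title: 'Canonical URLs in Nuxt', path: '/blog/canonical-urls' }
]
</script>

<template>
  <aside>
    <h2>Related guides</h2>
    <ul>
      <li v-for="post in relatedPosts" :key="post.path">
        <NuxtLink :to="post.path">{{ post.title }}</NuxtLink>
      </li>
    </ul>
  </aside>
</template>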
Nuxt renders pages on the server by default, which helps indexing. Verify your SSR is working correctly:
View your page source to confirm content is in the initial HTML:
# Check if content is in server-rendered HTML
curl -s https://yoursite.com/page | grep "expected content"
All your content should be visible in the raw HTML response. If it's not, check that:
- You're not setting ssr: false in definePageMeta()
- Data is fetched with useAsyncData() or useFetch() (not client-only methods)
- nuxt.config.ts doesn't set ssr: false

Ensure data loads during SSR, not just on the client:
<script setup>
// CORRECT: Data available during SSR
const { data: products } = await useFetch('/api/products')
// WRONG: Only loads on client
// onMounted(async () => {
// products.value = await $fetch('/api/products')
// })
</script>
<template>
<div v-for="product in products" :key="product.id">
{{ product.name }}
</div>
</template>
For content that must load client-side, provide fallback text that Google can index:
<template>
<div>
<h1>Product Catalog</h1>
<ClientOnly>
<LazyProductList />
<template #fallback>
<p>Loading 500+ products from our catalog...</p>
</template>
</ClientOnly>
</div>
</template>
Nuxt supports hybrid rendering. Verify your route rules are configured correctly:
// nuxt.config.ts
export default defineNuxtConfig({
routeRules: {
// Static pages (prerendered)
'/': { prerender: true },
'/about': { prerender: true },
// Dynamic pages (SSR)
'/blog/**': { swr: 3600 },
// Client-only pages (if needed)
'/dashboard/**': { ssr: false }
}
})
Use Search Console's URL Inspection tool to verify Google sees your server-rendered content: inspect the URL, run a live test, and open "View tested page" to check the rendered HTML and screenshot. Compare the screenshot to your actual page; with proper SSR, they should be identical.
After fixing issues, request re-indexing: inspect the URL in Search Console and click "Request indexing." Google prioritizes these requests but doesn't guarantee indexing; it still evaluates content quality.
For many URLs, request indexing programmatically through the Google Indexing API (Google officially supports it only for pages with job posting or livestream structured data):
// server/api/request-indexing.post.ts
import { google } from 'googleapis'

export default defineEventHandler(async (event) => {
  const { url } = await readBody(event)

  // Authenticate with a service account (e.g. via GOOGLE_APPLICATION_CREDENTIALS)
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/indexing']
  })
  const indexing = google.indexing({ version: 'v3', auth })

  // Tell Google the URL was added or updated
  await indexing.urlNotifications.publish({
    requestBody: {
      url,
      type: 'URL_UPDATED'
    }
  })

  return { success: true }
})
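A quick way to exercise that endpoint, for example from a deploy script or another server route (the route path and payload match the sketch above):
// Notify Google that a page was added or updated
await $fetch('/api/request-indexing', {
  method: 'POST',
  body: { url: 'https://yoursite.com/updated-page' }
})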
Track indexing status changes over time in the Page Indexing report. Expect changes to take 1-4 weeks; Google doesn't index on demand, it re-evaluates pages on its own schedule.
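To monitor status without opening Search Console, the Search Console API's URL Inspection endpoint returns the coverage state for a URL. A sketch using the same googleapis setup as above (the route path is hypothetical; siteUrl must be a property you own in Search Console):
// server/api/check-indexing.post.ts
import { google } from 'googleapis'

export default defineEventHandler(async (event) => {
  const { url, siteUrl } = await readBody(event)

  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/webmasters.readonly']
  })
  const searchconsole = google.searchconsole({ version: 'v1', auth })

  const { data } = await searchconsole.urlInspection.index.inspect({
    requestBody: { inspectionUrl: url, siteUrl }
  })

  // e.g. "Submitted and indexed" or "Crawled - currently not indexed"
  return { coverageState: data.inspectionResult?.indexStatusResult?.coverageState }
})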