---
title: "Debug Indexing Issues in Google Search Console"
description: "Fix \"crawled currently not indexed\" and other GSC coverage errors affecting your Nuxt site and AI visibility in 2026."
canonical_url: "https://nuxtseo.com/learn-seo/nuxt/launch-and-listen/indexing-issues"
last_updated: "2026-01-29"
---

<key-takeaways>

- "Crawled - currently not indexed" means Google saw your page but chose not to index it. This usually prevents it from appearing in AI Overviews as well
- Verify SSR works by checking View Source for your content, not just DevTools
- Use `useFetch()` for data. `onMounted()` fetches won't be in the initial HTML Google crawls

</key-takeaways>

Google crawled your page but won't index it. This happens to [millions of pages daily](https://www.onely.com/blog/how-to-fix-crawled-currently-not-indexed-in-google-search-console/). In 2026, indexing issues are often tied to content quality filters that also govern AI citation eligibility.

## Understanding Page Indexing Status

Google Search Console's Page Indexing report shows five main statuses:

![Page Indexing Status Flowchart](/images/learn-seo/vue/indexing-status-flowchart.svg)

### Good Statuses

**Indexed**: Page appears in Google's search index. This is a prerequisite for most AI citations in Google AI Overviews.

### Warning Statuses

**Discovered - currently not indexed**: Google found your page (often via sitemap) but hasn't crawled it yet. The page sits in Google's queue.

**Crawled - currently not indexed**: Google crawled your page but chose not to index it. This means Google decided your content isn't worth showing in search results or using as a source for AI answers.

### Excluded Statuses

**Excluded by robots.txt**: Your robots.txt file blocks Google from accessing the page. Note: AI Overviews is built on Google's standard search index, so blocking `Googlebot` removes your content from AI answers as well.

**Blocked by noindex tag**: Page has a `noindex` meta tag. Google will not index the page or use it for AI summaries.
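
Both exclusions are quick to verify from the command line; the URLs below are placeholders for your own pages:

```bash
# Look for robots.txt rules that block the affected path
curl -s https://yoursite.com/robots.txt

# Look for a stray noindex in the rendered HTML
curl -s https://yoursite.com/page | grep -i noindex
```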

## Fixing "Crawled - Currently Not Indexed"

This status is often the result of "Quality Filters" introduced in the June 2025 core update.

### Thin or Low-Quality Content

Google skips pages with little unique value. If a page isn't good enough for a traditional result, it's not good enough to be an AI citation source.

Fix: Add substantial content. Use [Schema.org](/learn-seo/nuxt/mastering-meta/schema-org) to define the page's purpose to both crawlers and LLMs.
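
A minimal sketch, assuming the Schema.org module from Nuxt SEO (which auto-imports `useSchemaOrg()` and `defineArticle()`); the values are illustrative:

```vue
<script setup lang="ts">
// Declare the page's purpose as JSON-LD for crawlers and LLMs
useSchemaOrg([
  defineArticle({
    headline: 'How We Benchmarked 50 Headless CMSs',
    datePublished: '2026-01-10',
    dateModified: '2026-01-29'
  })
])
</script>
```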

### Duplicate Content

Multiple pages with identical content waste Google's crawl budget. See the [Duplicate Content guide](/learn-seo/nuxt/controlling-crawlers/duplicate-content) for a full detection and resolution workflow.

Fix: Implement [canonical tags](/learn-seo/nuxt/controlling-crawlers/canonical-urls) pointing to the primary version. Canonical is a `<link>` tag, not a meta tag, so set it with `useHead()` in your Nuxt pages:

```vue
<script setup lang="ts">
useHead({
  link: [
    { rel: 'canonical', href: 'https://yoursite.com/primary-page' }
  ]
})
</script>
```

### Poor Internal Linking

Pages with few or no internal links pointing to them signal low importance to Google. Orphan pages (reachable only via [sitemap](/learn-seo/nuxt/controlling-crawlers/sitemaps), not through any `<a href>` on your site) rarely get indexed.

Fix: Add internal links from relevant pages. Include new pages in your navigation or sidebar. Link from high-authority pages to new content. See [Internal Linking Strategy](/learn-seo/nuxt/routes-and-rendering/internal-linking) for architecture patterns.
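
As a sketch, a related-articles block gives orphan-prone pages crawlable `<a href>` links; the routes and titles here are hypothetical:

```vue
<script setup lang="ts">
// In practice, source these from your CMS or @nuxt/content queries
const related = [
  { to: '/blog/nuxt-seo-checklist', title: 'Nuxt SEO Checklist' },
  { to: '/blog/fixing-soft-404s', title: 'Fixing Soft 404s' }
]
</script>

<template>
  <nav aria-label="Related articles">
    <ul>
      <li v-for="link in related" :key="link.to">
        <!-- NuxtLink renders a real <a href> during SSR -->
        <NuxtLink :to="link.to">{{ link.title }}</NuxtLink>
      </li>
    </ul>
  </nav>
</template>
```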

### Low Site Authority (E-E-A-T)

New sites with few backlinks face stricter indexing thresholds. [Google prioritizes crawling trusted sites](https://www.onely.com/blog/how-to-fix-crawled-currently-not-indexed-in-google-search-console/).

Fix: Build backlinks and brand mentions. AI engines are more likely to cite sites that are frequently mentioned across the web. See [Backlinks & Authority](/learn-seo/backlinks) for developer-friendly link building strategies.

## Fixing "Discovered - Currently Not Indexed"

### Site Too New

Google takes weeks to crawl new sites. For brand-new domains, expect 2-4 weeks before regular crawling starts.

### Crawl Budget Issues

Large sites (10,000+ pages) run into Google's crawl budget limits. Google won't crawl everything if your site has [slow server responses, too many low-quality pages, or complex URL structures](https://support.google.com/webmasters/community-guide/278777978). Review your [URL structure](/learn-seo/nuxt/routes-and-rendering/url-structure) to reduce unnecessary parameter variations.

Fix: Optimize server response times (target under 200ms). Remove or noindex low-value pages. Fix redirect chains. Reduce duplicate content. Block unnecessary URLs in robots.txt.

**Manual robots.txt approach:**

Create `public/robots.txt`:

```txt [public/robots.txt]
User-agent: *
Disallow: /admin/
Disallow: /search?*
Disallow: /*?filter=*
Disallow: /print-version/
```

Or serve it from a server route for dynamic control. Nitro sends string responses as `text/html` by default, so set the content type explicitly:

```ts [server/routes/robots.txt.ts]
export default defineEventHandler((event) => {
  // Serve as plain text, not the default text/html
  setHeader(event, 'content-type', 'text/plain')
  return `User-agent: *
Disallow: /admin/
Disallow: /search
Disallow: /print-version/`
})
```

**Using @nuxtjs/robots module:**

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/robots'],
  robots: {
    disallow: [
      '/admin/',
      '/search?*',
      '/*?filter=*',
      '/print-version/'
    ]
  }
})
```
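
For redirect chains, `routeRules` can collapse multiple hops into a single 301; the paths here are hypothetical:

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    // Collapse /old-page -> /interim -> /new-page into one hop each
    '/old-page': { redirect: { to: '/new-page', statusCode: 301 } },
    '/interim': { redirect: { to: '/new-page', statusCode: 301 } }
  }
})
```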

### Slow Server Response

If your server takes over 500ms to respond, [Google may crawl fewer pages](https://developers.google.com/search/docs/crawling-indexing/large-site-managing-crawl-budget).

Fix: Enable caching, use a CDN, optimize database queries, upgrade hosting. Monitor server response times in Search Console's Crawl Stats report.

Nuxt's SSR caching helps with this:

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    // Cache static pages for 1 hour
    '/blog/**': { swr: 3600 },
    // Cache API responses
    '/api/**': { cache: { maxAge: 60 } }
  }
})
```

### Pages Only in Sitemap

If pages exist only in your sitemap without internal links, Google considers them low priority.

Fix: Add internal links. Don't rely solely on sitemaps for discovery. [Internal linking signals importance](https://seotesting.com/google-search-console/discovered-currently-not-indexed/).

## Verifying SSR in Nuxt

Nuxt renders pages on the server by default, which helps indexing. Verify your SSR is working correctly:

### Check Server Response

View your page source to confirm content is in the initial HTML:

```bash
# Check if content is in server-rendered HTML
curl -s https://yoursite.com/page | grep "expected content"
```

All your content should be visible in the raw HTML response. If it's not, check that:

1. The route isn't rendered client-only via `routeRules` (e.g. `'/page': { ssr: false }`)
2. Data fetching uses `useAsyncData()` or `useFetch()` (not client-only methods)
3. Your `nuxt.config.ts` doesn't set `ssr: false` globally

### Verify Data Fetching

Ensure data loads during SSR, not just on client:

```vue
<script setup lang="ts">
// CORRECT: Data available during SSR
const { data: products } = await useFetch('/api/products')

// WRONG: Only loads on the client, so it's missing from the
// HTML Google crawls
// const products = ref([])
// onMounted(async () => {
//   products.value = await $fetch('/api/products')
// })
</script>

<template>
  <div v-for="product in products" :key="product.id">
    {{ product.name }}
  </div>
</template>
```

### Handle Client-Only Content

For content that must load client-side, provide fallback text that Google can index:

```vue
<template>
  <div>
    <h1>Product Catalog</h1>
    <ClientOnly>
      <LazyProductList />
      <template #fallback>
        <p>Loading 500+ products from our catalog...</p>
      </template>
    </ClientOnly>
  </div>
</template>
```

### Test Rendering Modes

Nuxt supports hybrid rendering. Verify your route rules configuration:

```ts
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    // Static pages (prerendered)
    '/': { prerender: true },
    '/about': { prerender: true },

    // Dynamic pages (SSR)
    '/blog/**': { swr: 3600 },

    // Client-only pages (if needed)
    '/dashboard/**': { ssr: false }
  }
})
```

Use Search Console's URL Inspection tool to verify Google sees server-rendered content:

1. Open Search Console → URL Inspection
2. Enter your page URL
3. Click "Test Live URL"
4. Click "View Tested Page" → "Screenshot"

Compare the screenshot to your actual page. With proper SSR, they should be identical.
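
You can also diff what Googlebot's user agent receives against a normal request; the URL and grep pattern are placeholders:

```bash
# Fetch with Googlebot's documented user agent to catch UA-dependent rendering
curl -s -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  https://yoursite.com/page | grep "expected content"
```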

## AI Exclusion & "AI Mode"

In 2026, you might see pages indexed for traditional search but excluded from AI Mode.

**Causes:**

- **Lack of structured data**: LLMs prefer [JSON-LD](/learn-seo/nuxt/mastering-meta/schema-org).
- **Complexity**: Content that is too hard to parse (poor HTML structure).
- **nosnippet**: Using `nosnippet` tags can prevent AI Overviews from using your content.

## Requesting Re-Indexing

After fixing issues, request re-indexing via URL Inspection:

1. Search Console → URL Inspection
2. Enter fixed URL
3. Click "Request Indexing"

Google prioritizes these requests but doesn't guarantee indexing. [It still evaluates content quality](https://support.google.com/webmasters/answer/7440203).

For bulk updates, request indexing programmatically with the [Google Indexing API](https://developers.google.com/search/apis/indexing-api/v3/quickstart). Google officially supports the API only for job posting and livestream content; for general pages, [RequestIndexing](https://requestindexing.com/) is an alternative:

```ts
// server/api/request-indexing.post.ts
import { google } from 'googleapis'

export default defineEventHandler(async (event) => {
  const { url } = await readBody(event)

  // Uses Application Default Credentials; the service account
  // must be added as an owner of your Search Console property
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/indexing']
  })

  const indexing = google.indexing({ version: 'v3', auth })

  // Tell Google the URL was added or updated
  await indexing.urlNotifications.publish({
    requestBody: {
      url,
      type: 'URL_UPDATED'
    }
  })

  return { success: true }
})
```

## Monitoring Progress

Track indexing status changes over time:

1. Search Console → Page Indexing
2. Check "Not indexed" count weekly
3. Look for status changes from "Crawled - not indexed" to "Indexed"

Expect changes to take 1-4 weeks. [Google doesn't index on demand](https://www.onely.com/blog/how-to-fix-crawled-currently-not-indexed-in-google-search-console/). It re-evaluates pages on its schedule.
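
To track status programmatically, Search Console's URL Inspection API exposes the same coverage data. A sketch using `googleapis`, assuming a service account with read access to the property; `siteUrl` must exactly match your verified property:

```ts
// server/api/inspect-url.post.ts
import { google } from 'googleapis'

export default defineEventHandler(async (event) => {
  const { url } = await readBody(event)

  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/webmasters.readonly']
  })

  const searchconsole = google.searchconsole({ version: 'v1', auth })

  const { data } = await searchconsole.urlInspection.index.inspect({
    requestBody: {
      inspectionUrl: url,
      siteUrl: 'https://yoursite.com/' // assumption: your verified property
    }
  })

  // e.g. "Submitted and indexed" or "Crawled - currently not indexed"
  return {
    coverageState: data.inspectionResult?.indexStatusResult?.coverageState
  }
})
```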
