67.6% of websites have duplicate content issues. Same content at different URLs splits ranking signals and wastes crawl budget. Google picks which version to show—often not the one you want.
Google doesn't penalize duplicate content unless you're deliberately scraping other sites. But it hurts SEO by diluting link equity across multiple URLs and confusing search engines about which page to rank.
www vs non-www
www.mysite.com and mysite.com are treated as separate sites. Choose one, redirect the other.
HTTP vs HTTPS
http://mysite.com and https://mysite.com create duplicates. Always redirect HTTP to HTTPS.
Trailing slashes
/products and /products/ are different URLs. Pick one format site-wide.
Nuxt can set the HSTS header via route rules in nuxt.config.ts; the HTTP-to-HTTPS redirect itself belongs at the hosting layer:
export default defineNuxtConfig({
  nitro: {
    prerender: {
      // Emit /page.html rather than /page/index.html (no trailing slash)
      autoSubfolderIndex: false
    }
  },
  routeRules: {
    '/**': {
      headers: {
        // Tell browsers to always use HTTPS on future visits
        'Strict-Transport-Security': 'max-age=31536000'
      }
    }
  }
})
For www vs non-www, configure at the server level (Netlify, Vercel, Cloudflare).
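On Netlify, for example, a `_redirects` file can force both host and protocol. This sketch assumes the apex domain is your canonical host; flip the rules if you standardize on www:

```
# _redirects — force apex + HTTPS with permanent redirects
https://www.mysite.com/*  https://mysite.com/:splat  301!
http://www.mysite.com/*   https://mysite.com/:splat  301!
http://mysite.com/*       https://mysite.com/:splat  301!
```

Vercel and Cloudflare offer equivalent domain-level redirect settings in their dashboards.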
URL parameters create exponential duplicates. Three filters generate 8 combinations. Add sorting and pagination—hundreds of URLs.
/products
/products?color=red
/products?color=red&size=large
/products?color=red&size=large&sort=price
/products?color=red&size=large&sort=price&page=2
Fix: Canonical tags
<script setup lang="ts">
useHead({
  link: [{
    rel: 'canonical',
    // Always point to the base URL, ignoring parameters
    href: 'https://mysite.com/products'
  }]
})
</script>
Or block filtered pages from indexing:
<script setup lang="ts">
const route = useRoute()
useSeoMeta({
  robots: route.query.filter ? 'noindex, follow' : 'index, follow'
})
</script>
?sort=price&filter=red and ?filter=red&sort=price are identical content, different URLs.
Fix: Enforce consistent parameter order
export function useCanonicalParams(params: Record<string, string>) {
  const siteUrl = useSiteConfig().url
  const route = useRoute()
  // Define a fixed parameter order
  const paramOrder = ['category', 'sort', 'filter', 'page']
  const orderedParams = Object.fromEntries(
    Object.entries(params)
      .sort(([a], [b]) => {
        const indexA = paramOrder.indexOf(a)
        const indexB = paramOrder.indexOf(b)
        if (indexA === -1)
          return 1
        if (indexB === -1)
          return -1
        return indexA - indexB
      })
  )
  const queryString = new URLSearchParams(orderedParams).toString()
  return {
    link: [{
      rel: 'canonical',
      href: queryString
        ? `${siteUrl}${route.path}?${queryString}`
        : `${siteUrl}${route.path}`
    }]
  }
}
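The ordering step can be exercised outside Nuxt. This standalone sketch extracts the sort logic (the `orderParams` helper is illustrative, not part of any Nuxt API) and shows that both parameter orders normalize to the same query string:

```typescript
// Same ordering logic as the composable above, extracted for demonstration
const paramOrder = ['category', 'sort', 'filter', 'page']

function orderParams(params: Record<string, string>): string {
  const ordered = Object.entries(params).sort(([a], [b]) => {
    const ia = paramOrder.indexOf(a)
    const ib = paramOrder.indexOf(b)
    if (ia === -1) return 1
    if (ib === -1) return -1
    return ia - ib
  })
  return new URLSearchParams(Object.fromEntries(ordered)).toString()
}

// Both orderings produce the same canonical query string
console.log(orderParams({ sort: 'price', filter: 'red' }))  // sort=price&filter=red
console.log(orderParams({ filter: 'red', sort: 'price' }))  // sort=price&filter=red
```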
Analytics params (utm_source, fbclid, gclid) don't change content but create duplicate URLs.
Fix: Strip from canonical
export function useCanonicalUrl(path: string) {
  const siteUrl = useSiteConfig().url
  const route = useRoute()
  const trackingParams = [
    'utm_source',
    'utm_medium',
    'utm_campaign',
    'utm_term',
    'utm_content',
    'fbclid',
    'gclid',
    'msclkid',
    'mc_cid',
    'mc_eid',
    '_ga',
    'ref'
  ]
  const cleanParams = Object.fromEntries(
    Object.entries(route.query).filter(([key]) =>
      !trackingParams.includes(key)
    )
  )
  const queryString = new URLSearchParams(cleanParams).toString()
  return {
    link: [{
      rel: 'canonical',
      href: queryString
        ? `${siteUrl}${path}?${queryString}`
        : `${siteUrl}${path}`
    }]
  }
}
Better: Redirect tracking params at the server level for proper 301 status codes.
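The URL-cleaning half of that redirect can be a pure, framework-free function. A minimal sketch (the `stripTrackingParams` name and the exact parameter list are assumptions, not an existing API):

```typescript
// Remove known tracking parameters from a URL, preserving everything else
const TRACKING_PARAMS = new Set([
  'utm_source', 'utm_medium', 'utm_campaign', 'utm_term',
  'utm_content', 'fbclid', 'gclid', 'msclkid'
])

function stripTrackingParams(url: string): string {
  const u = new URL(url)
  for (const key of [...u.searchParams.keys()]) {
    if (TRACKING_PARAMS.has(key)) u.searchParams.delete(key)
  }
  return u.toString()
}

console.log(stripTrackingParams('https://mysite.com/products?utm_source=twitter&sort=price'))
// https://mysite.com/products?sort=price
```

In a server middleware, issue a 301 whenever the cleaned URL differs from the requested one.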
Each paginated page has unique content. Use self-referencing canonicals—don't point page 2 to page 1.
<script setup lang="ts">
const route = useRoute()
const siteUrl = useSiteConfig().url
const page = route.query.page || '1'
useHead({
  link: [{
    rel: 'canonical',
    // Each page references itself
    href: page === '1'
      ? `${siteUrl}/blog`
      : `${siteUrl}/blog?page=${page}`
  }]
})
</script>
Printer-friendly URLs (/article?print=true) and mobile subdomains (m.mysite.com) create duplicates.
Fix: Canonical to desktop version
<script setup lang="ts">
const siteUrl = useSiteConfig().url
useHead({
  link: [{
    rel: 'canonical',
    // Always point to the main URL
    href: `${siteUrl}/article`
  }]
})
</script>
For print, use CSS @media print instead of separate URLs.
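A print stylesheet replaces the separate print URL entirely. For instance (the class names here are placeholders for your own layout):

```css
/* Hide page chrome when printing; no /article?print=true needed */
@media print {
  nav, footer, aside, .comments { display: none; }
  article { width: 100%; }
}
```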
Session IDs in URLs create infinite variations.
/products?sessionid=abc123
/products?sessionid=xyz789
/products?sessionid=def456
Fix: Don't put session IDs in URLs. Use cookies. If unavoidable, block with robots.txt:
User-agent: *
Disallow: /*?sessionid=
Disallow: /*&sessionid=
Disallow: /*?sid=
Disallow: /*&sid=
Or use the Robots module:
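A hedged sketch with the Nuxt Robots module (assuming `@nuxtjs/robots` and its `disallow` option; check the module docs for your version):

```ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/robots'],
  robots: {
    // Mirror the robots.txt rules above
    disallow: ['/*?sessionid=', '/*&sessionid=', '/*?sid=', '/*&sid=']
  }
})
```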
Use the Page Indexing report to identify duplicates:
Click each category to see affected URLs. If Google chose a different canonical than you specified, conflicting signals exist.
Using URL Inspection:
Screaming Frog detects exact and near-duplicate content:
Exact duplicates — Pages with identical HTML (MD5 hash match)
Near duplicates — Pages with 90%+ similarity (minhash algorithm)
Setup:
Config > Content > Duplicates
Check these columns:
Closest Similarity Match — Percentage match to the most similar page
No. Near Duplicates — Count of similar pages
Hash — MD5 hash for exact duplicate detection
Screaming Frog auto-excludes nav and footer elements to focus on main content. Adjust the threshold if needed (default 90%).
Use Google site search to find duplicates manually:
site:mysite.com "exact title text"
If multiple URLs appear with the same title, you have duplicates.
Siteliner — Free tool that crawls up to 250 pages, shows duplicate content percentage
Copyscape — Detects external duplicate content (other sites copying you)
Both useful for content audits but don't replace Search Console or Screaming Frog for technical SEO.
| When to Use | Canonical Tag | 301 Redirect |
|---|---|---|
| Need both URLs live | ✅ Yes | ❌ No |
| User should see one URL | ❌ No | ✅ Yes |
| Products in multiple categories | ✅ Yes | ❌ No |
| Old page no longer needed | ❌ No | ✅ Yes |
| UTM tracking parameters | ✅ Yes | ❌ No |
| www vs non-www | ❌ No | ✅ Yes |
| HTTP vs HTTPS | ❌ No | ✅ Yes |
| Moved/renamed pages | ❌ No | ✅ Yes |
Canonical tags are hints, not directives. Google may ignore them. Both versions remain accessible. Use for duplicates you need (tracking params, multiple category paths).
301 redirects are permanent. Users see the redirect target. Pass the same link equity as canonicals but remove the duplicate from the index. Use for outdated or unnecessary URLs.
Don't combine: Using both canonical tag and 301 redirect on the same page sends conflicting signals. Pick one.
Examples:
http://mysite.com → https://mysite.com — 301 redirect
www.mysite.com → mysite.com — 301 redirect
/products?utm_source=twitter → /products — Canonical tag
/products/shoes and /sale/shoes (same product) — Canonical tag (one canonical, one alternate)
/products?filter=red — Noindex + canonical to base URL
/old-page → /new-page — 301 redirect
Mistake 1: Canonicalizing all paginated pages to page 1
<!-- ❌ Wrong - hides pages 2+ from search -->
<script setup>
useHead({
  link: [{ rel: 'canonical', href: 'https://mysite.com/blog' }]
})
</script>
Each paginated page should reference itself.
Mistake 2: Using relative canonical URLs
<!-- ❌ Wrong - must be absolute -->
<link rel="canonical" href="/products/phone">
<!-- ✅ Correct -->
<link rel="canonical" href="https://mysite.com/products/phone">
Google recommends absolute URLs; relative canonicals are easily resolved against the wrong host or protocol.
Mistake 3: Combining canonical with noindex
<!-- ❌ Conflicting signals -->
<script setup>
useHead({
link: [{ rel: 'canonical', href: 'https://mysite.com/page' }]
})
useSeoMeta({
robots: 'noindex, follow'
})
</script>
Canonical says "this is a duplicate of X." Noindex says "don't index this." Pick one.
Mistake 4: Canonical chains
Page A → canonical → Page B → canonical → Page C
Google may ignore chained canonicals. Canonical directly to the final target.
Mistake 5: Client-side canonicals in SPAs
Googlebot renders JavaScript in a deferred second wave, so canonicals injected only on the client can be missed. Nuxt renders canonical tags on the server by default—this isn't an issue.
1. View page source (not DevTools)
curl "https://mysite.com/products?sort=price" | grep canonical
Should return:
<link rel="canonical" href="https://mysite.com/products">
2. Google Search Console URL Inspection
3. Check for canonicalization conflicts
Multiple rel="canonical" tags on the same page
Conflicting canonicals in <head> vs the HTTP header
4. Test redirect chains
curl -I https://mysite.com/old-url
Should show one 301 redirect, not a chain.
Nuxt's prerenderer controls trailing-slash output in nuxt.config.ts:
export default defineNuxtConfig({
  nitro: {
    prerender: {
      // Prerender pages as /page/index.html so they serve at /page/
      autoSubfolderIndex: true
    }
  }
})
Route rules are static, so a dynamic trailing-slash redirect needs a server middleware instead:
// server/middleware/trailing-slash.ts
export default defineEventHandler((event) => {
  const { pathname, search } = getRequestURL(event)
  // Redirect /page to /page/, skipping the root and file-like paths
  if (pathname !== '/' && !pathname.endsWith('/') && !pathname.includes('.')) {
    return sendRedirect(event, `${pathname}/${search}`, 301)
  }
})
Prevent infinite URL variations by whitelisting allowed parameter values:
const allowedSortValues = ['price', 'name', 'date', 'rating']
const route = useRoute()
const sort = route.query.sort
if (sort && !allowedSortValues.includes(sort as string)) {
  // Redirect to base URL or default sort
  await navigateTo({ query: { ...route.query, sort: undefined } })
}
Use robots.txt to block search results, filtered pages, and admin sections:
User-agent: *
# Block search results
Disallow: /search?
Disallow: /*?q=
Disallow: /*?query=
# Block filters
Disallow: /*?filter=
Disallow: /*&filter=
# Block tracking params
Disallow: /*?utm_source=
Disallow: /*&utm_source=
Disallow: /*?fbclid=
Disallow: /*&fbclid=
Disallow: /*?gclid=
Disallow: /*&gclid=
# Block session IDs
Disallow: /*?sessionid=
Disallow: /*&sessionid=
Disallow: /*?sid=
Disallow: /*&sid=
Or configure via the Robots module:
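A hedged sketch of the same rules via the Robots module (again assuming `@nuxtjs/robots`; option names may differ across versions):

```ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/robots'],
  robots: {
    disallow: ['/search', '/*?filter=', '/*?utm_source=', '/*?sessionid=']
  }
})
```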
Nuxt SEO Utils handles canonical URLs, trailing slashes, and parameter normalization automatically through site config:
Configure once in nuxt.config.ts:
export default defineNuxtConfig({
  site: {
    url: 'https://mysite.com',
    trailingSlash: false
  }
})
The module automatically: