Embeddings are numerical representations (vectors) of text that capture semantic meaning. They enable semantic search - finding content by meaning rather than just keyword matching.
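The idea can be sketched with cosine similarity, the measure typically used to compare embedding vectors. The 3-number vectors below are toy values standing in for real model output (which has hundreds of dimensions):

```typescript
// Minimal sketch: semantic similarity as the cosine of the angle between vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Texts with similar meaning get vectors pointing in similar directions,
// so their cosine similarity is higher than for unrelated texts.
const catVec = [0.9, 0.1, 0.2]    // "a cat sat on the mat"
const kittenVec = [0.8, 0.2, 0.3] // "a kitten rested on the rug"
const stockVec = [0.1, 0.9, 0.1]  // "stock prices fell today"

console.log(cosineSimilarity(catVec, kittenVec) > cosineSimilarity(catVec, stockVec)) // true
```

Semantic search embeds the query the same way and returns the documents whose vectors score highest against it.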
Nuxt Ask AI supports separate providers for build-time and runtime environments.
Why separate providers? Your content is embedded once at build time (during prerender), while search queries are embedded on every request at runtime. The two environments have different constraints: a free local model can handle the build, while a hosted API may be a better fit for serverless runtimes.
Configuration:
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'bge-small-en-v1.5', // Base model name (provider-agnostic)
buildProvider: {
preset: 'transformers.js' // For prerender/build
},
runtimeProvider: {
preset: 'openai', // For search queries at runtime
apiKey: process.env.OPENAI_API_KEY
}
}
}
})
Model Resolution:
The base model name is resolved to a provider-specific identifier via resolveModelForPreset(baseModel, preset). For example, bge-small-en-v1.5 resolves to Xenova/bge-small-en-v1.5 for the transformers.js preset, while the openai preset uses a native model such as text-embedding-3-small. All providers are unified through the AI SDK (the ai package from Vercel).
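A sketch of what this resolution might look like. The mapping table below is illustrative, assembled from the examples on this page; it is not the module's actual source:

```typescript
// Hypothetical mapping from base model names to provider-specific identifiers.
const MODEL_MAP: Record<string, Record<string, string>> = {
  'bge-small-en-v1.5': {
    'transformers.js': 'Xenova/bge-small-en-v1.5',
    'workers-ai': '@cf/baai/bge-small-en-v1.5',
    'openai': 'text-embedding-3-small'
  }
}

function resolveModelForPreset(baseModel: string, preset: string): string {
  // Fall back to the base name when no preset-specific alias is known.
  return MODEL_MAP[baseModel]?.[preset] ?? baseModel
}

console.log(resolveModelForPreset('bge-small-en-v1.5', 'transformers.js'))
// Xenova/bge-small-en-v1.5
```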
Local ONNX models via @built-in-ai/transformers-js.
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'bge-small-en-v1.5',
buildProvider: {
preset: 'transformers.js'
},
runtimeProvider: {
preset: 'transformers.js'
}
}
}
})
Pros:
- Free: no API keys or per-request costs
- Fully local: content never leaves your machine, and builds work offline once the model is cached
Cons:
- Slower than hosted APIs, especially on large sites
- Model weights are downloaded and cached on first run
OpenAI embeddings via @ai-sdk/openai.
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'text-embedding-3-small',
buildProvider: {
preset: 'openai',
apiKey: process.env.OPENAI_API_KEY
},
runtimeProvider: {
preset: 'openai',
apiKey: process.env.OPENAI_API_KEY
}
}
}
})
Pros:
- Fast, consistent embedding quality with no local compute
- Works in serverless runtimes
Cons:
- Requires an API key and incurs per-token costs
- Content is sent to a third-party service
Local Ollama server via ollama-ai-provider-v2.
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'nomic-embed-text',
buildProvider: {
preset: 'ollama',
baseURL: 'http://localhost:11434'
},
runtimeProvider: {
preset: 'ollama',
baseURL: 'http://localhost:11434'
}
}
}
})
Pros:
- Free and local, with models managed by the Ollama server
- The same provider can serve both build and runtime
Cons:
- Requires a running Ollama server, which rules out most serverless deployments
Google embeddings via @ai-sdk/google.
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'text-embedding-004',
buildProvider: {
preset: 'google',
apiKey: process.env.GOOGLE_API_KEY
}
}
}
})
Mistral embeddings via @ai-sdk/mistral.
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'mistral-embed',
buildProvider: {
preset: 'mistral',
apiKey: process.env.MISTRAL_API_KEY
}
}
}
})
Cohere embeddings via @ai-sdk/cohere.
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'embed-english-v3.0',
buildProvider: {
preset: 'cohere',
apiKey: process.env.COHERE_API_KEY
}
}
}
})
Anthropic (Voyage) embeddings via @ai-sdk/anthropic.
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'voyage-2',
buildProvider: {
preset: 'anthropic',
apiKey: process.env.ANTHROPIC_API_KEY
}
}
}
})
Cloudflare Workers AI via workers-ai-provider (runtime only).
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'bge-base-en-v1.5',
buildProvider: {
preset: 'transformers.js' // Build locally
},
runtimeProvider: {
preset: 'workers-ai' // Runtime on Workers
}
}
}
})
Note: Workers AI requires Cloudflare Workers/Pages deployment.
Fully local with transformers.js:
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'bge-small-en-v1.5',
buildProvider: { preset: 'transformers.js' },
runtimeProvider: { preset: 'transformers.js' }
}
}
})
Cost-effective and fast:
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'bge-small-en-v1.5',
buildProvider: {
preset: 'transformers.js' // Free local build
},
runtimeProvider: {
preset: 'openai', // Fast cloud runtime
apiKey: process.env.OPENAI_API_KEY
}
}
}
})
Fastest builds:
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'text-embedding-3-small',
buildProvider: {
preset: 'openai',
apiKey: process.env.OPENAI_API_KEY
},
runtimeProvider: {
preset: 'openai',
apiKey: process.env.OPENAI_API_KEY
}
}
}
})
Local build + Workers AI runtime:
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'bge-base-en-v1.5',
buildProvider: {
preset: 'transformers.js'
},
runtimeProvider: {
preset: 'workers-ai'
},
vectorDatabase: {
provider: 'cloudflare-vectorize',
indexName: 'ai-search'
}
}
}
})
Provider-specific model names are resolved automatically:
| Base Model | transformers.js | workers-ai |
|---|---|---|
| bge-small-en-v1.5 | Xenova/bge-small-en-v1.5 | @cf/baai/bge-small-en-v1.5 |
| bge-base-en-v1.5 | Xenova/bge-base-en-v1.5 | @cf/baai/bge-base-en-v1.5 |
| embeddinggemma-300m | onnx-community/embeddinggemma-300m-ONNX | @cf/google/embeddinggemma-300m |
Dimensions are auto-detected via:
1. An explicit dimensions parameter in the config
2. A lookup table: getModelDimensions(baseModel)
3. A probe embedding: embed({ model, value: 'test' }), using the length of the returned vector

export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'bge-small-en-v1.5',
dimensions: 384, // Auto-detected if omitted
buildProvider: { preset: 'transformers.js' }
}
}
})
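The fallback chain can be sketched as follows. KNOWN_DIMENSIONS and detectDimensions are hypothetical names for illustration (the module's own lookup is getModelDimensions), and the dimension values come from the models documented on this page:

```typescript
// Dimensions known for common base models (illustrative subset).
const KNOWN_DIMENSIONS: Record<string, number> = {
  'bge-small-en-v1.5': 384,
  'bge-base-en-v1.5': 768,
  'text-embedding-3-small': 1536
}

async function detectDimensions(
  baseModel: string,
  explicit: number | undefined,
  probe: () => Promise<number[]> // e.g. () => embed({ model, value: 'test' })
): Promise<number> {
  if (explicit !== undefined) return explicit                           // 1. explicit config
  if (baseModel in KNOWN_DIMENSIONS) return KNOWN_DIMENSIONS[baseModel] // 2. lookup table
  const embedding = await probe()                                       // 3. probe embedding
  return embedding.length
}

detectDimensions('bge-small-en-v1.5', undefined, async () => []).then(d => console.log(d)) // 384
```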
Set the variable each preset expects, for example in .env:

OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
MISTRAL_API_KEY=...
COHERE_API_KEY=...
ANTHROPIC_API_KEY=...
OLLAMA_BASE_URL=http://localhost:11434
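A hedged sketch of a pre-build check that fails fast when the configured preset's key is missing. The preset-to-variable mapping mirrors the list above; checkPresetEnv is a hypothetical helper, not part of the module:

```typescript
// Which environment variable each cloud preset requires (illustrative).
const REQUIRED_ENV: Record<string, string> = {
  openai: 'OPENAI_API_KEY',
  google: 'GOOGLE_API_KEY',
  mistral: 'MISTRAL_API_KEY',
  cohere: 'COHERE_API_KEY',
  anthropic: 'ANTHROPIC_API_KEY'
}

// Returns an error message when a required variable is unset, null otherwise.
// Local presets (transformers.js, ollama) need no key and always pass.
function checkPresetEnv(preset: string, env: Record<string, string | undefined>): string | null {
  const key = REQUIRED_ENV[preset]
  if (key && !env[key]) return `Missing ${key} for preset '${preset}'`
  return null
}

console.log(checkPresetEnv('openai', process.env))
```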
# Clear cache and retry
rm -rf node_modules/.cache/@huggingface/transformers
pnpm generate
# Increase Node.js memory
NODE_OPTIONS=--max-old-space-size=4096 pnpm generate
# Verify environment variable
echo $OPENAI_API_KEY
# Or check .env file
cat .env | grep OPENAI_API_KEY
Error: expected 384, got 1536 dimensions
Fix: Set explicit dimensions in config:
export default defineNuxtConfig({
aiSearch: {
embeddings: {
model: 'text-embedding-3-small',
dimensions: 1536, // Must match model
buildProvider: { preset: 'openai', apiKey: '...' }
}
}
})
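The error above comes from a dimension guard: the vector index is created for one dimension count and rejects embeddings of another length. A minimal sketch of such a check (assertDimensionsMatch is a hypothetical name):

```typescript
// Reject embeddings whose length does not match the configured index dimension.
function assertDimensionsMatch(expected: number, embedding: number[]): void {
  if (embedding.length !== expected) {
    throw new Error(`expected ${expected}, got ${embedding.length} dimensions`)
  }
}

// text-embedding-3-small returns 1536-dimensional vectors, so an index
// configured for 384 dimensions rejects them:
try {
  assertDimensionsMatch(384, new Array(1536).fill(0))
} catch (e) {
  console.log((e as Error).message) // expected 384, got 1536 dimensions
}
```

Switching models (e.g. from bge-small-en-v1.5 at 384 dimensions to text-embedding-3-small at 1536) requires re-embedding the index with the new dimension count.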