Nuxt API

Config

Last updated by Harlan Wilton in feat: migrate to nuxt-ai-kit.

enabled

  • Type: boolean
  • Default: true

Whether to enable the module.

debug

  • Type: boolean
  • Default: false

Enable debug logging and write chunks to public/__embeddings/.

search

  • Type: boolean | { enabled?: boolean, route?: string }
  • Default: { enabled: true, route: '/api/search' }

Configure the Search API endpoint. Set false to disable.

chat

  • Type: boolean | { enabled?: boolean, route?: string }
  • Default: { enabled: true, route: '/api/chat' }

Configure the Chat API endpoint (RAG with streaming). Set false to disable.

bulk

  • Type: boolean | { enabled?: boolean, route?: string }
  • Default: { enabled: true, route: '/_ai-kit/bulk' }

Configure the Bulk API endpoint (static JSONL file). Set false to disable.
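Taken together, the three endpoints can be tuned individually. A sketch of the relevant module options (route names here are illustrative, not defaults):

```typescript
// Endpoint options sketch: override routes or disable endpoints
search: { enabled: true, route: '/api/docs-search' },
chat: { route: '/api/ask' }, // enabled defaults to true
bulk: false                  // disable the static JSONL endpoint entirely
```

Passing `false` for any of the three is equivalent to `{ enabled: false }`.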

embeddings

  • Type: EmbeddingsConfig
  • Required: Yes

Configure embedding generation and vector storage.

embeddings.model

  • Type: string
  • Required: Yes

Base model name. Provider-agnostic: the same model name works across any provider that supports it.

embeddings.dimensions

  • Type: number
  • Default: Auto-detected from model

Vector dimensions. Usually auto-detected via getModelDimensions().

embeddings.buildProvider

  • Type: { preset: string, apiKey?: string, baseURL?: string }
  • Required: Yes

Provider for build-time embedding generation.

Available presets: transformers.js, ollama, openai, google, mistral, cohere, anthropic.

embeddings: {
  model: 'bge-small-en-v1.5',
  buildProvider: { preset: 'transformers.js' }
}

embeddings.runtimeProvider

  • Type: { preset: string, apiKey?: string, baseURL?: string }
  • Default: Same as buildProvider

Provider for runtime query embeddings. It can differ from the build provider, which is useful for local builds paired with a cloud runtime.

Additional preset: workers-ai (Cloudflare Workers AI, runtime only).

embeddings: {
  model: 'bge-small-en-v1.5',
  buildProvider: { preset: 'transformers.js' },
  runtimeProvider: { preset: 'openai', apiKey: process.env.OPENAI_API_KEY }
}

embeddings.chunking

  • Type: { chunkSize?: number, chunkOverlap?: number }
  • Default: { chunkSize: 400, chunkOverlap: 50 }

Content chunking strategy.
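For example, long-form documentation may benefit from larger chunks with more overlap. A sketch (the numbers are illustrative, not recommendations):

```typescript
// Chunking sketch: larger windows, more overlap between adjacent chunks
embeddings: {
  model: 'bge-small-en-v1.5',
  buildProvider: { preset: 'transformers.js' },
  chunking: { chunkSize: 800, chunkOverlap: 100 }
}
```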

embeddings.vectorDatabase

  • Type: VectorDbConfig
  • Default: { provider: 'sqlite-vec' }

Configure vector storage backend.

vectorDatabase.provider

Available providers:

  • sqlite-vec: Local SQLite with vector extension (default)
  • libsql-node: Turso/libsql local
  • libsql-web: Turso/libsql browser
  • libsql-http: Turso/libsql remote
  • pgvector: PostgreSQL with pgvector
  • upstash: Upstash Vector
  • cloudflare-vectorize: Cloudflare Vectorize
  • jsonl: Static JSONL file

vectorDatabase.path

  • Type: string
  • Default: .data/ai-kit/embeddings.db

File path for sqlite-vec, libsql-node, jsonl.

vectorDatabase.url

  • Type: string

Connection URL for libsql-*, pgvector, upstash.

vectorDatabase.authToken

  • Type: string

Auth token for libsql remote connections.

vectorDatabase.token

  • Type: string

Token for Upstash Vector.

vectorDatabase.indexName

  • Type: string

Index name for Cloudflare Vectorize.

vectorDatabase.metric

  • Type: 'cosine' | 'euclidean' | 'dot-product'
  • Default: 'cosine'

Distance metric for vector similarity.
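As a sketch, a remote pgvector backend could combine the options above (the `DATABASE_URL` environment variable is illustrative):

```typescript
// Vector storage sketch: remote Postgres with pgvector, cosine distance
embeddings: {
  model: 'bge-small-en-v1.5',
  buildProvider: { preset: 'transformers.js' },
  vectorDatabase: {
    provider: 'pgvector',
    url: process.env.DATABASE_URL,
    metric: 'cosine'
  }
}
```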

llm

  • Type: false | { provider: string, model: string, apiKey?: string, baseURL?: string }
  • Default: false

Configure LLM for RAG (chat API) and query rewriting. Required for chat functionality.

llm: {
  provider: 'openai',
  model: 'gpt-4o-mini',
  apiKey: process.env.OPENAI_API_KEY
}

Providers: openai, anthropic, google, mistral, cohere, ollama, workers-ai.

queryRewriting

  • Type: { enabled?: boolean, systemPrompt?: string }
  • Default: { enabled: true } (if LLM configured)

Query rewriting expands user queries with synonyms and related terms before embedding. It is automatically disabled for models under 7B parameters.
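For example, the rewriting prompt can be customized per site (the prompt text is illustrative):

```typescript
// Query rewriting sketch: custom expansion prompt
queryRewriting: {
  enabled: true,
  systemPrompt: 'Expand the query with synonyms and terminology used in our docs.'
}
```

Setting `enabled: false` skips rewriting even when an LLM is configured.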

include

  • Type: string[]
  • Default: ['/**']

Glob patterns for routes to index. See Filtering Routes.

exclude

  • Type: string[]
  • Default: []

Glob patterns for routes to exclude from indexing.
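A sketch combining both filters (the patterns are illustrative):

```typescript
// Route filtering sketch: index docs and blog, skip private pages
include: ['/docs/**', '/blog/**'],
exclude: ['/docs/private/**']
```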
