Nuxt API

Module Options

Last updated by Harlan Wilton in chore: broken links.

ModuleOptions

Complete TypeScript interface for module configuration.

interface ModuleOptions {
  enabled?: boolean
  debug?: boolean

  // API Endpoints
  search?: boolean | SearchApiConfig
  chat?: boolean | ChatApiConfig
  bulk?: boolean | BulkApiConfig

  // Embeddings & Vector DB
  embeddings: EmbeddingsConfig

  // LLM for RAG
  llm?: false | LlmConfig

  // Query Rewriting
  queryRewriting?: QueryRewritingConfig
}
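Since `embeddings` is the only required option, a minimal configuration needs just a model and a build provider; everything else falls back to the defaults documented below. A sketch (module name in `modules` assumed to be `nuxt-ai-search`):

```typescript
// nuxt.config.ts — minimal configuration: only the required
// `embeddings` option is set; all other options use their defaults.
export default defineNuxtConfig({
  modules: ['nuxt-ai-search'],
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' }
    }
  }
})
```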

Core Options

enabled

  • Type: boolean
  • Default: true

Toggles the module on or off.

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    enabled: process.env.NODE_ENV === 'production'
  }
})

debug

  • Type: boolean
  • Default: false

Enables debug logging for troubleshooting.

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    debug: true
  }
})

API Endpoints

search

  • Type: boolean | SearchApiConfig
  • Default: { enabled: true, route: '/api/search' }

Configure the Search API endpoint.

interface SearchApiConfig {
  enabled?: boolean
  route?: string
}
nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    search: {
      enabled: true,
      route: '/api/search'
    }
  }
})

chat

  • Type: boolean | ChatApiConfig
  • Default: { enabled: true, route: '/api/chat' }

Configure the Chat API endpoint (RAG with streaming).

interface ChatApiConfig {
  enabled?: boolean
  route?: string
}
nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    chat: {
      enabled: true,
      route: '/api/chat'
    }
  }
})

bulk

  • Type: boolean | BulkApiConfig
  • Default: { enabled: true, route: '/_ai-search/bulk' }

Configure the Bulk API endpoint (static JSONL file).

interface BulkApiConfig {
  enabled?: boolean
  route?: string
}
nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    bulk: {
      enabled: true,
      route: '/_ai-search/bulk'
    }
  }
})

Embeddings Configuration

embeddings

  • Type: EmbeddingsConfig
  • Required: Yes

Configure embedding generation and vector storage.

interface EmbeddingsConfig {
  model: string                   // Base model name (provider-agnostic)
  dimensions?: number             // Auto-detected via getModelDimensions()
  buildProvider: ProviderConfig   // For build-time embeddings
  runtimeProvider?: ProviderConfig // For runtime query embeddings
  chunking?: ChunkingConfig
  vectorDatabase?: VectorDbConfig
}

interface ProviderConfig {
  preset: 'transformers.js' | 'ollama' | 'openai' | 'google' | 'mistral' | 'cohere' | 'anthropic' | 'workers-ai'
  apiKey?: string
  baseURL?: string
}

Basic Example

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: {
        preset: 'transformers.js'
      },
      runtimeProvider: {
        preset: 'transformers.js'
      }
    }
  }
})

Dual-Provider Example (Local Build, Cloud Runtime)

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: {
        preset: 'transformers.js'  // Free local build
      },
      runtimeProvider: {
        preset: 'openai',           // Fast cloud runtime
        apiKey: process.env.OPENAI_API_KEY
      }
    }
  }
})

Cloudflare Example

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-base-en-v1.5',
      buildProvider: {
        preset: 'transformers.js'
      },
      runtimeProvider: {
        preset: 'workers-ai'  // Cloudflare Workers AI
      },
      vectorDatabase: {
        provider: 'cloudflare-vectorize',
        indexName: 'ai-search'
      }
    }
  }
})

Embedding Providers

Available providers via AI SDK:

  • transformers.js: Local ONNX models (free, no API needed)
  • openai: OpenAI embeddings
  • google: Google embeddings
  • mistral: Mistral embeddings
  • cohere: Cohere embeddings
  • anthropic: Anthropic (Voyage) embeddings
  • ollama: Ollama local server
  • workers-ai: Cloudflare Workers AI (runtime only)
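For example, the ollama preset can point at a local Ollama server through baseURL. A hedged sketch: the model name and port are assumptions (11434 is Ollama's default port, and the model must already be pulled locally):

```typescript
// nuxt.config.ts — embeddings via a local Ollama server.
// Assumes `ollama pull nomic-embed-text` has been run beforehand.
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'nomic-embed-text',
      buildProvider: {
        preset: 'ollama',
        baseURL: 'http://localhost:11434'
      }
    }
  }
})
```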

embeddings.chunking

  • Type: ChunkingConfig
  • Default: { chunkSize: 400, chunkOverlap: 50 }

Configure content chunking strategy.

interface ChunkingConfig {
  chunkSize?: number
  chunkOverlap?: number
}
nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' },
      chunking: {
        chunkSize: 400,
        chunkOverlap: 50
      }
    }
  }
})

embeddings.vectorDatabase

  • Type: VectorDbConfig
  • Default: { provider: 'sqlite-vec' }

Configure vector storage backend.

interface VectorDbConfig {
  provider: 'sqlite-vec' | 'libsql-node' | 'libsql-web' | 'libsql-http' | 'pgvector' | 'upstash' | 'jsonl' | 'cloudflare-vectorize-wrangler' | 'cloudflare-vectorize'

  // For sqlite-vec, libsql-node, jsonl
  path?: string

  // For libsql (all variants), pgvector, upstash
  url?: string

  // For libsql remote
  authToken?: string

  // For upstash
  token?: string

  // For Cloudflare Vectorize
  indexName?: string
  local?: boolean

  // For all providers
  metric?: 'cosine' | 'euclidean' | 'dot-product'
  dimensions?: number  // Set automatically from embeddings
}

SQLite-vec (Default)

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' },
      vectorDatabase: {
        provider: 'sqlite-vec',
        path: '.data/ai-search/embeddings.db'
      }
    }
  }
})

Libsql (Turso)

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' },
      vectorDatabase: {
        provider: 'libsql-node',
        url: 'libsql://my-db.turso.io',
        authToken: process.env.TURSO_AUTH_TOKEN
      }
    }
  }
})

PostgreSQL + pgvector

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' },
      vectorDatabase: {
        provider: 'pgvector',
        url: process.env.POSTGRES_URL
      }
    }
  }
})

Upstash Vector

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' },
      vectorDatabase: {
        provider: 'upstash',
        url: process.env.UPSTASH_VECTOR_URL,
        token: process.env.UPSTASH_VECTOR_TOKEN
      }
    }
  }
})

Cloudflare Vectorize

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-base-en-v1.5',
      buildProvider: { preset: 'transformers.js' },
      vectorDatabase: {
        provider: 'cloudflare-vectorize',
        indexName: 'ai-search',
        metric: 'cosine'
      }
    }
  }
})
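The jsonl provider from the VectorDbConfig interface above takes a path like the SQLite backends. A hedged sketch; the file location shown is an assumption, not a documented default:

```typescript
// nuxt.config.ts — file-based vector storage as a JSONL file.
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' },
      vectorDatabase: {
        provider: 'jsonl',
        path: '.data/ai-search/embeddings.jsonl' // assumed location
      }
    }
  }
})
```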

LLM Configuration

llm

  • Type: false | LlmConfig
  • Default: false

Configure LLM for RAG (chat API) and query rewriting.

interface LlmConfig {
  provider: string
  model: string
  apiKey?: string
  baseURL?: string
}
nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    llm: {
      provider: 'openai',
      model: 'gpt-4o-mini',
      apiKey: process.env.OPENAI_API_KEY
    }
  }
})

Cloudflare Workers AI:

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    llm: {
      provider: 'workers-ai',
      model: '@cf/meta/llama-3.1-8b-instruct'
    }
  }
})

Ollama:

nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    llm: {
      provider: 'ollama',
      model: 'llama3.1',
      baseURL: 'http://localhost:11434'
    }
  }
})

Query Rewriting

queryRewriting

  • Type: QueryRewritingConfig
  • Default: { enabled: true } (if LLM supports it)

Configure query rewriting for the search API.

interface QueryRewritingConfig {
  enabled?: boolean
  systemPrompt?: string
}
nuxt.config.ts
export default defineNuxtConfig({
  aiSearch: {
    queryRewriting: {
      enabled: true,
      systemPrompt: 'Expand the query with relevant synonyms...'
    }
  }
})

Query rewriting is automatically disabled for models with fewer than 7B parameters.

Complete Example

nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@mdream/nuxt', 'nuxt-ai-search'],

  aiSearch: {
    // Core
    enabled: true,
    debug: false,

    // API Endpoints
    search: { enabled: true, route: '/api/search' },
    chat: { enabled: true, route: '/api/chat' },

    // Embeddings & Vector DB
    embeddings: {
      model: 'bge-small-en-v1.5',
      dimensions: 384,
      buildProvider: {
        preset: 'transformers.js'
      },
      runtimeProvider: {
        preset: 'openai',
        apiKey: process.env.OPENAI_API_KEY
      },
      chunking: {
        chunkSize: 400,
        chunkOverlap: 50
      },
      vectorDatabase: {
        provider: 'sqlite-vec',
        path: '.data/ai-search/embeddings.db'
      }
    },

    // LLM for RAG
    llm: {
      provider: 'openai',
      model: 'gpt-4o-mini',
      apiKey: process.env.OPENAI_API_KEY
    },

    // Query Rewriting
    queryRewriting: {
      enabled: true
    }
  }
})

Type Definitions

Import types for TypeScript:

import type { ModuleOptions } from 'nuxt-ai-search'

const config: ModuleOptions = {
  // Your config with full type safety
}
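When configuring inline, TypeScript's `satisfies` operator checks the object against the interface while preserving literal types. A sketch under the same import:

```typescript
import type { ModuleOptions } from 'nuxt-ai-search'

// `satisfies` validates the config against ModuleOptions
// without widening the inferred literal types.
export default defineNuxtConfig({
  aiSearch: {
    embeddings: {
      model: 'bge-small-en-v1.5',
      buildProvider: { preset: 'transformers.js' }
    }
  } satisfies ModuleOptions
})
```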
