
Composables API Reference

Last updated by Harlan Wilton in chore: sync.

useAISearch()

Headless Ask AI composable providing semantic search (list mode) and AI chat (generate mode), following the NLWeb specification. Ephemeral chat sessions are stored in sessionStorage.

Import

import { useAISearch } from '#ai-search/composables/useAISearch'

Signature

function useAISearch(options?: UseAISearchOptions): UseAISearchReturn

Options

mode (optional)

  • Type: 'search' | 'chat'
  • Default: 'search'
  • Description: Initial mode (search = list mode, chat = generate mode)
// List mode (semantic search)
const ai = useAISearch({ mode: 'search' })

// Generate mode (AI chat)
const ai = useAISearch({ mode: 'chat' })

sessionKey (optional)

  • Type: string
  • Default: 'ai-search-session'
  • Description: sessionStorage key for chat persistence
const ai = useAISearch({
  mode: 'chat',
  sessionKey: 'my-custom-session-key'
})

limit (optional)

  • Type: number
  • Default: 10
  • Description: Max search results in list mode
const ai = useAISearch({
  mode: 'search',
  limit: 20
})

debounce (optional)

  • Type: number (milliseconds)
  • Default: 0 (disabled)
  • Description: Debounce delay for auto-search (requires autoSearch: true)
const ai = useAISearch({
  mode: 'search',
  autoSearch: true,
  debounce: 300
})

autoSearch (optional)

  • Type: boolean
  • Default: false
  • Description: Auto-search on query change (search mode only)
const ai = useAISearch({
  mode: 'search',
  autoSearch: true,
  debounce: 300
})
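
Together, autoSearch and debounce give trailing-edge debouncing: only the last query in a rapid burst of keystrokes triggers a request. A standalone sketch of that pattern (the debounce helper below is illustrative, not the module's internal implementation):

```typescript
// Trailing-edge debounce: only the last call within `wait` ms fires.
// Illustrative helper, not the module's internals.
function debounce<T extends (...args: any[]) => void>(fn: T, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer)
    timer = setTimeout(() => fn(...args), wait)
  }
}

const searched: string[] = []
const runSearch = debounce((q: string) => searched.push(q), 50)

// Simulated rapid keystrokes: only the final query is searched.
runSearch('n')
runSearch('nu')
runSearch('nuxt')
```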

onStreamChunk (optional)

  • Type: (chunk: string) => void
  • Description: Callback for each streaming chunk (chat mode only)
const ai = useAISearch({
  mode: 'chat',
  onStreamChunk: (chunk) => {
    console.log('Received chunk:', chunk)
  }
})

onError (optional)

  • Type: (error: Error) => void
  • Description: Error callback for all operations
const ai = useAISearch({
  onError: (error) => {
    console.error('Ask AI error:', error)
  }
})

Return Value

State

mode
  • Type: Readonly<Ref<'search' | 'chat'>>
  • Description: Current mode (read-only)
const ai = useAISearch()
console.log(ai.mode.value) // 'search'
query
  • Type: Ref<string>
  • Description: Current query/input text (writable)
const ai = useAISearch()
ai.query.value = 'nuxt modules'
isLoading
  • Type: Readonly<Ref<boolean>>
  • Description: Loading state (read-only)
const ai = useAISearch()
if (ai.isLoading.value) {
  console.log('Loading...')
}
error
  • Type: Readonly<Ref<Error | null>>
  • Description: Last error (read-only)
const ai = useAISearch()
if (ai.error.value) {
  console.error(ai.error.value.message)
}
searchResults (search mode)
  • Type: Readonly<Ref<SearchResult[]>>
  • Description: Search results array (read-only)
const ai = useAISearch({ mode: 'search' })
ai.searchResults.value.forEach(result => {
  console.log(result.name, result.score)
})
messages (chat mode)
  • Type: Readonly<Ref<Message[]>>
  • Description: Chat messages array (read-only)
const ai = useAISearch({ mode: 'chat' })
ai.messages.value.forEach(msg => {
  console.log(`${msg.role}: ${msg.content}`)
})

Actions

submit()
  • Type: () => Promise<void>
  • Description: Submit query (delegates to search() or sendMessage() based on mode)
const ai = useAISearch()
await ai.submit()
search()
  • Type: () => Promise<void>
  • Description: Execute semantic search (list mode)
const ai = useAISearch({ mode: 'search' })
ai.query.value = 'nuxt modules'
await ai.search()
console.log(ai.searchResults.value)
sendMessage()
  • Type: () => Promise<void>
  • Description: Send chat message (generate mode with streaming)
const ai = useAISearch({ mode: 'chat' })
ai.query.value = 'How do I create a module?'
await ai.sendMessage()
console.log(ai.messages.value)
switchMode()
  • Type: (newMode: 'search' | 'chat') => void
  • Description: Switch between search and chat modes (clears query and error)
const ai = useAISearch({ mode: 'search' })
ai.switchMode('chat')
clearSession()
  • Type: () => void
  • Description: Clear chat session (removes from sessionStorage)
const ai = useAISearch({ mode: 'chat' })
ai.clearSession()
copyAllMessages()
  • Type: () => Promise<void>
  • Description: Copy all chat messages to clipboard (markdown format)
const ai = useAISearch({ mode: 'chat' })
await ai.copyAllMessages()
exportAsMarkdown()
  • Type: () => void
  • Description: Export chat as markdown file (downloads .md file)
const ai = useAISearch({ mode: 'chat' })
ai.exportAsMarkdown()
abort()
  • Type: () => void
  • Description: Abort current request
const ai = useAISearch()
ai.abort()
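
Cancellation of this kind is typically built on AbortController. A minimal standalone sketch of the pattern (this illustrates the idea behind abort(), not the module's actual wiring):

```typescript
// Minimal cancellation sketch using AbortController (Node 18+ / browsers).
const controller = new AbortController()

function cancellableDelay(ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('done'), ms)
    signal.addEventListener('abort', () => {
      clearTimeout(timer)
      reject(new Error('aborted'))
    })
  })
}

const pending = cancellableDelay(1000, controller.signal)
pending.catch(err => console.log(err.message)) // logs 'aborted'
controller.abort()
```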

TypeScript Types

Message

interface Message {
  id: string
  role: 'user' | 'assistant'
  content: string
  timestamp: number
}
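
For reference, converting Message[] to markdown (as copyAllMessages() and exportAsMarkdown() do) can be sketched like this; the exact layout the module emits is an assumption:

```typescript
interface Message {
  id: string
  role: 'user' | 'assistant'
  content: string
  timestamp: number
}

// Hypothetical formatter; the module's real markdown layout may differ.
function messagesToMarkdown(messages: Message[]): string {
  return messages
    .map(m => `**${m.role === 'user' ? 'You' : 'Assistant'}:** ${m.content}`)
    .join('\n\n')
}

const md = messagesToMarkdown([
  { id: '1', role: 'user', content: 'How do I create a module?', timestamp: 0 },
  { id: '2', role: 'assistant', content: 'Use defineNuxtModule().', timestamp: 1 },
])
```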

SearchResult

interface SearchResult {
  url: string
  name: string
  score: number
  markdown?: string
  description?: string
}
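
Since score is a relevance value in the 0–1 range, results can be post-processed client-side. A small illustrative helper (the threshold value is arbitrary, not a module default):

```typescript
interface SearchResult {
  url: string
  name: string
  score: number
  markdown?: string
  description?: string
}

// Illustrative client-side filtering: drop low-relevance hits, best first.
function topResults(results: SearchResult[], minScore = 0.5): SearchResult[] {
  return results
    .filter(r => r.score >= minScore)
    .sort((a, b) => b.score - a.score)
}

const filtered = topResults([
  { url: '/a', name: 'A', score: 0.9 },
  { url: '/b', name: 'B', score: 0.3 },
  { url: '/c', name: 'C', score: 0.7 },
])
// filtered keeps A (0.9) then C (0.7); B is dropped
```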

UseAISearchOptions

interface UseAISearchOptions {
  mode?: 'search' | 'chat'
  sessionKey?: string
  limit?: number
  debounce?: number
  autoSearch?: boolean
  onStreamChunk?: (chunk: string) => void
  onError?: (error: Error) => void
}

Examples

Basic Search
<script setup lang="ts">
const ai = useAISearch({ mode: 'search' })
</script>

<template>
  <div>
    <input v-model="ai.query.value" @keyup.enter="ai.submit()" />
    <button @click="ai.submit()" :disabled="ai.isLoading.value">
      Search
    </button>

    <div v-if="ai.searchResults.value.length">
      <div v-for="result in ai.searchResults.value" :key="result.url">
        <a :href="result.url">{{ result.name }}</a>
        <p>{{ result.description }}</p>
        <small>Score: {{ (result.score * 100).toFixed(0) }}%</small>
      </div>
    </div>
  </div>
</template>

Auto-Search with Debounce

<script setup lang="ts">
const ai = useAISearch({
  mode: 'search',
  autoSearch: true,
  debounce: 300
})
</script>

<template>
  <div>
    <input
      v-model="ai.query.value"
      placeholder="Search as you type..."
    />

    <div v-if="ai.isLoading.value">Loading...</div>

    <ul v-else-if="ai.searchResults.value.length">
      <li v-for="result in ai.searchResults.value" :key="result.url">
        <a :href="result.url">{{ result.name }}</a>
      </li>
    </ul>
  </div>
</template>

Chat with Streaming

<script setup lang="ts">
const ai = useAISearch({
  mode: 'chat',
  onStreamChunk: (chunk) => {
    console.log('Chunk:', chunk)
  }
})
</script>

<template>
  <div>
    <div v-for="msg in ai.messages.value" :key="msg.id">
      <strong>{{ msg.role }}:</strong>
      <p>{{ msg.content }}</p>
    </div>

    <input v-model="ai.query.value" @keyup.enter="ai.sendMessage()" />
    <button @click="ai.sendMessage()" :disabled="ai.isLoading.value">
      Send
    </button>
    <button v-if="ai.isLoading.value" @click="ai.abort()">
      Stop
    </button>
  </div>
</template>

Mode Switching

<script setup lang="ts">
const ai = useAISearch({ mode: 'search' })
</script>

<template>
  <div>
    <div>
      <button
        @click="ai.switchMode('search')"
        :disabled="ai.mode.value === 'search'"
      >
        Search
      </button>
      <button
        @click="ai.switchMode('chat')"
        :disabled="ai.mode.value === 'chat'"
      >
        Chat
      </button>
    </div>

    <!-- Search Mode -->
    <div v-if="ai.mode.value === 'search'">
      <input v-model="ai.query.value" />
      <button @click="ai.search()">Search</button>

      <div v-for="result in ai.searchResults.value" :key="result.url">
        {{ result.name }}
      </div>
    </div>

    <!-- Chat Mode -->
    <div v-else>
      <div v-for="msg in ai.messages.value" :key="msg.id">
        {{ msg.content }}
      </div>

      <input v-model="ai.query.value" />
      <button @click="ai.sendMessage()">Send</button>
    </div>
  </div>
</template>

Error Handling

<script setup lang="ts">
const ai = useAISearch({
  onError: (error) => {
    console.error('Error:', error.message)
  }
})
</script>

<template>
  <div>
    <input v-model="ai.query.value" />
    <button @click="ai.submit()">Submit</button>

    <div v-if="ai.error.value" class="error">
      {{ ai.error.value.message }}
    </div>
  </div>
</template>

Session Management

<script setup lang="ts">
const ai = useAISearch({
  mode: 'chat',
  sessionKey: 'my-custom-session'
})
</script>

<template>
  <div>
    <div v-for="msg in ai.messages.value" :key="msg.id">
      {{ msg.content }}
    </div>

    <input v-model="ai.query.value" />
    <button @click="ai.sendMessage()">Send</button>

    <div>
      <button @click="ai.copyAllMessages()">Copy All</button>
      <button @click="ai.exportAsMarkdown()">Export</button>
      <button @click="ai.clearSession()">Clear</button>
    </div>
  </div>
</template>

Performance

Initialization

  • First call: <50ms
  • Subsequent calls: <10ms (reuses instance)

Operations

  • Search: 100-300ms (embedding generation + vector query)
  • Chat streaming: First token <500ms, subsequent <50ms
  • Mode switch: <1ms
  • Abort: <10ms
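
These figures will vary by deployment. To measure them in your own app, you can wrap calls with a generic timing helper (this helper is illustrative and not part of the module):

```typescript
// Generic async timing helper using performance.now() (Node 16+ / browsers).
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now()
  try {
    return await fn()
  } finally {
    console.log(`${label}: ${(performance.now() - start).toFixed(1)}ms`)
  }
}

// Usage (sketch): await timed('search', () => ai.search())
```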

Memory Usage

  • Base: ~5MB
  • With messages (100): ~8MB
  • SessionStorage: ~1KB per message

Best Practices

Use Auto-Search Sparingly

// Good: User-initiated search
const ai = useAISearch({ mode: 'search' })

// Use with caution: May cause many requests
const ai = useAISearch({
  mode: 'search',
  autoSearch: true,
  debounce: 500 // Higher debounce for fewer requests
})

Handle Errors Gracefully

const ai = useAISearch({
  onError: (error) => {
    // Log to error tracking service
    console.error('[Ask AI]', error)

    // Show user-friendly message
    toast.error('Search failed. Please try again.')
  }
})

Clean Up Chat Sessions

// Clear the session when the component unmounts (e.g. leaving a chat view)
onBeforeUnmount(() => {
  ai.clearSession()
})

Abort Long Requests

// Abort when component unmounts
onBeforeUnmount(() => {
  ai.abort()
})

// Abort when switching modes
watch(() => ai.mode.value, () => {
  ai.abort()
})
