AI Chat Plugin
AI-powered chat functionality with conversation history, streaming, sidebar navigation, and customizable models
Installation
Ensure you followed the general framework installation guide first.
Follow these steps to add the AI Chat plugin to your BTST setup.
1. Add Plugin to Backend API
Import and register the AI Chat backend plugin in your stack.ts file:
import { stack } from "@btst/stack"
import { aiChatBackendPlugin } from "@btst/stack/plugins/ai-chat/api"
import { openai } from "@ai-sdk/openai"
// ... your adapter imports
const { handler, dbSchema } = stack({
basePath: "/api/data",
plugins: {
aiChat: aiChatBackendPlugin({
model: openai("gpt-4o"), // Or any LanguageModel from AI SDK
mode: "authenticated", // "authenticated" (default) or "public"
// Extract userId from request headers to scope conversations per user
getUserId: async (ctx) => {
const token = ctx.headers?.get("authorization")
if (!token) return null // Deny access if no auth
const user = await verifyToken(token) // Your auth logic
return user?.id ?? null
},
systemPrompt: "You are a helpful assistant.", // Optional
tools: {}, // Optional: AI SDK v5 tools
})
},
adapter: (db) => createMemoryAdapter(db)({})
})
export { handler, dbSchema }

The aiChatBackendPlugin() accepts optional hooks for customizing behavior (authorization, logging, etc.).
Model Configuration: You can use any model from the AI SDK, including OpenAI, Anthropic, Google, and more. Make sure to install the corresponding provider package (e.g., @ai-sdk/openai) and set up your API keys in environment variables.
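The getUserId example above delegates token verification to your own verifyToken. The header-parsing half can be isolated as a small pure function; a minimal sketch (parseBearerToken is an illustrative name, not part of the plugin):

```typescript
// Extract the raw token from an "Authorization: Bearer <token>" header value.
// Returns null for a missing or non-Bearer header, which getUserId can map
// to "deny access".
function parseBearerToken(header: string | null): string | null {
  if (!header) return null
  const [scheme, token] = header.split(" ")
  if (scheme?.toLowerCase() !== "bearer" || !token) return null
  return token
}
```

Inside getUserId you would call parseBearerToken(ctx.headers?.get("authorization") ?? null) and hand the result to your verification logic.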
2. Add Plugin to Client
Register the AI Chat client plugin in your stack-client.tsx file:
import { createStackClient } from "@btst/stack/client"
import { aiChatClientPlugin } from "@btst/stack/plugins/ai-chat/client"
import { QueryClient } from "@tanstack/react-query"
const getBaseURL = () =>
typeof window !== 'undefined'
? (process.env.NEXT_PUBLIC_BASE_URL || window.location.origin)
: (process.env.BASE_URL || "http://localhost:3000")
export const getStackClient = (queryClient: QueryClient, options?: { headers?: Headers }) => {
const baseURL = getBaseURL()
return createStackClient({
plugins: {
aiChat: aiChatClientPlugin({
// Required configuration
apiBaseURL: baseURL,
apiBasePath: "/api/data",
siteBaseURL: baseURL,
siteBasePath: "/pages",
queryClient: queryClient,
headers: options?.headers,
// Mode should match backend config
mode: "authenticated", // "authenticated" (default) or "public"
// Optional: SEO configuration
seo: {
siteName: "My Chat App",
description: "AI-powered chat assistant",
},
})
}
})
}

Required configuration:

- apiBaseURL: Base URL for API calls during SSR data prefetching (use environment variables for flexibility)
- apiBasePath: Path where your API is mounted (e.g., /api/data)
- siteBaseURL: Base URL of your site
- siteBasePath: Path where your pages are mounted (e.g., /pages)
- queryClient: React Query client instance
Why configure API paths here? This configuration is used by server-side loaders that prefetch data before your pages render. These loaders run outside of React Context, so they need direct configuration. You'll also provide apiBaseURL and apiBasePath again in the Provider overrides (Section 4) for client-side components that run during actual rendering.
3. Import Plugin CSS
Add the AI Chat plugin CSS to your global stylesheet:
@import "@btst/stack/plugins/ai-chat/css";

This includes all necessary styles for the chat components and markdown rendering.
4. Add Context Overrides
Configure framework-specific overrides in your StackProvider.

Next.js:
"use client"
import { useState } from "react"
import { StackProvider } from "@btst/stack/context"
import { QueryClientProvider } from "@tanstack/react-query"
import type { AiChatPluginOverrides } from "@btst/stack/plugins/ai-chat/client"
import Link from "next/link"
import Image from "next/image"
import { useRouter } from "next/navigation"
import { getOrCreateQueryClient } from "@/lib/query-client"
const getBaseURL = () =>
typeof window !== 'undefined'
? (process.env.NEXT_PUBLIC_BASE_URL || window.location.origin)
: (process.env.BASE_URL || "http://localhost:3000")
type PluginOverrides = {
"ai-chat": AiChatPluginOverrides
}
export default function Layout({ children }) {
const router = useRouter()
const [queryClient] = useState(() => getOrCreateQueryClient())
const baseURL = getBaseURL()
return (
<QueryClientProvider client={queryClient}>
<StackProvider<PluginOverrides>
basePath="/pages"
overrides={{
"ai-chat": {
mode: "authenticated", // Should match backend config
apiBaseURL: baseURL,
apiBasePath: "/api/data",
navigate: (path) => router.push(path),
refresh: () => router.refresh(),
uploadFile: async (file) => {
// Implement your file upload logic
return "https://example.com/uploads/file.pdf"
},
Link: ({ href, ...props }) => <Link href={href || "#"} {...props} />,
Image: (props) => <Image {...props} />,
}
}}
>
{children}
</StackProvider>
</QueryClientProvider>
)
}

React Router:

import { useState } from "react"
import { Outlet, Link, useNavigate } from "react-router"
import { StackProvider } from "@btst/stack/context"
import { QueryClientProvider, QueryClient } from "@tanstack/react-query"
import type { AiChatPluginOverrides } from "@btst/stack/plugins/ai-chat/client"
const getBaseURL = () =>
typeof window !== 'undefined'
? (import.meta.env.VITE_BASE_URL || window.location.origin)
: (process.env.BASE_URL || "http://localhost:5173")
type PluginOverrides = {
"ai-chat": AiChatPluginOverrides
}
export default function Layout() {
const navigate = useNavigate()
const [queryClient] = useState(() => new QueryClient())
const baseURL = getBaseURL()
return (
<QueryClientProvider client={queryClient}>
<StackProvider<PluginOverrides>
basePath="/pages"
overrides={{
"ai-chat": {
mode: "authenticated",
apiBaseURL: baseURL,
apiBasePath: "/api/data",
navigate: (href) => navigate(href),
uploadFile: async (file) => {
return "https://example.com/uploads/file.pdf"
},
Link: ({ href, children, className, ...props }) => (
<Link to={href || ""} className={className} {...props}>
{children}
</Link>
),
}
}}
>
<Outlet />
</StackProvider>
</QueryClientProvider>
)
}

TanStack Router:

import { useState } from "react"
import { StackProvider } from "@btst/stack/context"
import { QueryClientProvider, QueryClient } from "@tanstack/react-query"
import type { AiChatPluginOverrides } from "@btst/stack/plugins/ai-chat/client"
import { Link, useRouter, Outlet } from "@tanstack/react-router"
const getBaseURL = () =>
typeof window !== 'undefined'
? (import.meta.env.VITE_BASE_URL || window.location.origin)
: (process.env.BASE_URL || "http://localhost:3000")
type PluginOverrides = {
"ai-chat": AiChatPluginOverrides
}
function Layout() {
const router = useRouter()
const [queryClient] = useState(() => new QueryClient())
const baseURL = getBaseURL()
return (
<QueryClientProvider client={queryClient}>
<StackProvider<PluginOverrides>
basePath="/pages"
overrides={{
"ai-chat": {
mode: "authenticated",
apiBaseURL: baseURL,
apiBasePath: "/api/data",
navigate: (href) => router.navigate({ href }),
uploadFile: async (file) => {
return "https://example.com/uploads/file.pdf"
},
Link: ({ href, children, className, ...props }) => (
<Link to={href} className={className} {...props}>
{children}
</Link>
),
}
}}
>
<Outlet />
</StackProvider>
</QueryClientProvider>
)
}

Required overrides:

- apiBaseURL: Base URL for API calls (used by client-side components during rendering)
- apiBasePath: Path where your API is mounted
- navigate: Function for programmatic navigation
Optional overrides:
- mode: Plugin mode ("authenticated" or "public")
- uploadFile: Function to upload files and return their URL
- allowedFileTypes: Array of allowed file type categories (default: all types)
- chatSuggestions: Array of suggested prompts shown in empty chat state
- Link: Custom Link component (defaults to <a> tag)
- Image: Custom Image component (useful for Next.js Image optimization)
- refresh: Function to refresh server-side cache (useful for Next.js)
- localization: Custom localization strings
- headers: Headers to pass with API requests
Why provide API paths again? You already configured these in Section 2, but that configuration is only available to server-side loaders. The overrides here provide the same values to client-side components (like hooks, forms, and UI) via React Context. These two contexts serve different phases: loaders prefetch data server-side before rendering, while components use data during actual rendering (both SSR and CSR).
5. Generate Database Schema
After adding the plugin, generate your database schema using the CLI:
npx @btst/cli generate --orm prisma --config lib/stack.ts --output prisma/schema.prisma

This will create the necessary database tables for conversations and messages. Run migrations as needed for your ORM.
For more details on the CLI and all available options, see the CLI documentation.
Congratulations, You're Done! 🎉
Your AI Chat plugin is now fully configured and ready to use! Here's a quick reference of what's available:
Plugin Modes
The AI Chat plugin supports two distinct modes:
Authenticated Mode (Default)
- Conversation persistence in database
- User-scoped data via getUserId
- Full UI with sidebar and conversation history
- Routes: /chat (new/list) and /chat/:id (existing conversation)
Public Mode
- No persistence (stateless)
- Simple UI without sidebar
- Ideal for public-facing chatbots
- Single route: /chat
API Endpoints
The AI Chat plugin provides the following API endpoints (mounted at your configured apiBasePath):
- POST /chat - Send a message and receive a streaming response
- GET /conversations - List all conversations (authenticated mode only)
- GET /conversations/:id - Get a conversation with messages
- POST /conversations - Create a new conversation
- PUT /conversations/:id - Update (rename) a conversation
- DELETE /conversations/:id - Delete a conversation
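Each endpoint resolves to apiBaseURL plus apiBasePath plus the route. The client plugin performs this resolution for you; a sketch of the idea (endpointURL is an illustrative helper, not a plugin export):

```typescript
// Join base URL, base path, and endpoint route while normalizing slashes,
// e.g. ("http://localhost:3000/", "/api/data", "conversations")
// resolves to "http://localhost:3000/api/data/conversations".
function endpointURL(apiBaseURL: string, apiBasePath: string, route: string): string {
  const stripTrailing = (s: string) => s.replace(/\/+$/, "")
  const ensureLeading = (s: string) => (s.startsWith("/") ? s : `/${s}`)
  return stripTrailing(apiBaseURL) + ensureLeading(stripTrailing(apiBasePath)) + ensureLeading(route)
}
```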
Page Routes
The AI Chat plugin automatically creates the following pages (mounted at your configured siteBasePath):
Authenticated mode:
- /chat - Start a new conversation (with sidebar showing history)
- /chat/:id - Resume an existing conversation
Public mode:
- /chat - Simple chat interface without history
Features
- Full-page Layout: Responsive chat interface with collapsible sidebar
- Conversation Sidebar: View, rename, and delete past conversations
- Streaming Responses: Real-time streaming of AI responses using AI SDK v5
- Markdown Support: Full markdown rendering with code highlighting
- File Uploads: Attach images, PDFs, and text files to messages
- Tools Support: Use AI SDK v5 tools for function calling
- Customizable Models: Use any LanguageModel from the AI SDK
- Authorization Hooks: Add custom authentication and authorization logic
- Localization: Customize all UI strings
Page Component Overrides
You can replace any built-in page with your own React component using the optional pageComponents field in aiChatClientPlugin(config). The built-in component is used as the fallback whenever an override is not provided, so this is fully backward-compatible.
aiChatClientPlugin({
// ... other config
pageComponents: {
// Replace the chat home page
chat: MyCustomChatPage,
// Replace the conversation page (authenticated mode only)
// receives conversationId as a prop
chatConversation: ({ conversationId }) => (
<MyCustomConversationPage conversationId={conversationId} />
),
},
})

Adding Authorization
To add authorization rules and customize behavior, you can use the lifecycle hooks defined in the API Reference section below. These hooks allow you to control access to API endpoints, add logging, and customize the plugin's behavior to fit your application's needs.
API Reference
Backend (@btst/stack/plugins/ai-chat/api)
aiChatBackendPlugin
AiChatBackendConfig
The backend plugin accepts a configuration object with the model, mode, and optional hooks:
AiChatBackendHooks
Customize backend behavior with optional lifecycle hooks. All hooks are optional and allow you to add authorization, logging, and custom behavior:
Example usage:
import { aiChatBackendPlugin, type AiChatBackendHooks } from "@btst/stack/plugins/ai-chat/api"
const chatHooks: AiChatBackendHooks = {
// Authorization hooks — throw to deny access
onBeforeChat(messages, context) {
const authHeader = context.headers?.get("authorization")
if (!authHeader) throw new Error("Unauthorized")
},
async onBeforeListConversations(context) {
if (!await isAuthenticated(context.headers as Headers))
throw new Error("Unauthorized")
},
async onBeforeDeleteConversation(conversationId, context) {
if (!await isAuthenticated(context.headers as Headers))
throw new Error("Unauthorized")
},
// Lifecycle hooks
onConversationCreated(conversation, context) {
console.log("Conversation created:", conversation.id)
},
onAfterChat(conversationId, messages, context) {
console.log("Chat completed:", conversationId, "messages:", messages.length)
},
// Error hooks
onChatError(error, context) {
console.error("Chat error:", error.message)
},
}
const { handler, dbSchema } = stack({
plugins: {
aiChat: aiChatBackendPlugin({
model: openai("gpt-4o"),
hooks: chatHooks
})
},
// ...
})

ChatApiContext
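Several hook examples in this section call an isAuthenticated helper that is left to your auth layer; the headers it receives come from the ChatApiContext. A minimal sketch, assuming bearer tokens (the token check here is a placeholder, not real verification):

```typescript
// Illustrative authentication check for the backend hook examples: accepts
// any request carrying a non-empty Bearer token. Substitute real verification
// (JWT signature check, session lookup, etc.) in production.
async function isAuthenticated(headers: Headers | undefined): Promise<boolean> {
  const header = headers?.get("authorization")
  if (!header?.startsWith("Bearer ")) return false
  const token = header.slice("Bearer ".length).trim()
  return token.length > 0 // placeholder check only
}
```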
Client (@btst/stack/plugins/ai-chat/client)
aiChatClientPlugin
AiChatClientConfig
The client plugin accepts a configuration object with required fields and optional SEO settings:
Example usage:
aiChat: aiChatClientPlugin({
// Required configuration
apiBaseURL: baseURL,
apiBasePath: "/api/data",
siteBaseURL: baseURL,
siteBasePath: "/pages",
queryClient: queryClient,
headers: options?.headers,
// Mode configuration
mode: "authenticated",
// Optional SEO configuration
seo: {
siteName: "My AI Assistant",
description: "Chat with our AI assistant",
locale: "en_US",
defaultImage: `${baseURL}/og-image.png`,
},
})

AiChatClientHooks
Customize client-side behavior with lifecycle hooks. These hooks are called during data fetching (both SSR and CSR):
Example usage:
aiChat: aiChatClientPlugin({
// ... rest of the config
headers: options?.headers,
hooks: {
beforeLoadConversations: async (context) => {
// Check if user is authenticated before loading
if (!await isAuthenticated(context.headers))
throw new Error("Unauthorized")
},
afterLoadConversation: async (conversation, id, context) => {
// Log access for analytics
console.log("User accessed conversation:", id)
},
onLoadError(error, context) {
// Handle error - redirect to login
redirect("/auth/sign-in")
},
}
})

LoaderContext
RouteContext
AiChatPluginOverrides
Configure framework-specific overrides and route lifecycle hooks. All lifecycle hooks are optional:
Example usage:
overrides={{
"ai-chat": {
// Required overrides
apiBaseURL: baseURL,
apiBasePath: "/api/data",
navigate: (path) => router.push(path),
// Optional overrides
mode: "authenticated",
uploadFile: async (file) => {
const formData = new FormData()
formData.append("file", file)
const res = await fetch("/api/upload", { method: "POST", body: formData })
const { url } = await res.json()
return url
},
allowedFileTypes: ["image", "pdf", "text"], // Restrict allowed types
// Suggested prompts shown in empty chat state
chatSuggestions: [
"What can you help me with?",
"Tell me about your features",
"How do I get started?",
],
// Custom tool UI renderers (see "Custom Tool UI Renderers" section)
toolRenderers: {
getWeather: WeatherCard,
searchDocs: SearchResultsRenderer,
},
// Optional lifecycle hooks
onBeforeChatPageRendered: (context) => {
// Check if user can view chat. Useful for SPA.
// Throw to deny: throw new Error("Unauthorized")
},
onBeforeConversationPageRendered: (id, context) => {
// Check if user can view this specific conversation.
// Throw to deny: throw new Error("Unauthorized")
},
}
}}

ChatLayout Component
The ChatLayout component provides a ready-to-use chat interface. It can be embedded directly for custom integrations, or used in public mode with client-side persistence:
import { ChatLayout, type ChatLayoutProps, type UIMessage } from "@btst/stack/plugins/ai-chat/client"

ChatLayoutProps
Widget layout — built-in trigger (default)
The default widget mode manages its own open/close state and renders a floating trigger button. Drop it anywhere in your layout and it just works:
<ChatLayout
apiBaseURL={baseURL}
apiBasePath="/api/data"
layout="widget"
widgetHeight="520px"
/>

Widget layout — externally controlled (no trigger)
Use defaultOpen and showTrigger={false} when your own UI handles opening and closing — for example, a Next.js intercepting route modal or a custom dialog. The chat panel is immediately visible and the built-in trigger button is not rendered:
{/* Rendered inside a modal/dialog that you control */}
<ChatLayout
apiBaseURL={baseURL}
apiBasePath="/api/data"
layout="widget"
widgetHeight="500px"
defaultOpen={true}
showTrigger={false}
/>

Next.js parallel-routes + intercepting-routes pattern — a common way to display the widget as a modal overlay while keeping a floating button on every page:
app/
  @chatWidget/
    default.tsx      ← floating button (Link to /chat)
    loading.tsx      ← loading overlay
    (.)chat/
      page.tsx       ← intercepting route: renders modal with ChatLayout
  chat/
    page.tsx         ← full-page fallback (hard nav / refresh)
  layout.tsx         ← passes chatWidget slot into the body

"use client";
import Link from "next/link";
import { BotIcon } from "lucide-react";
export default function ChatWidgetButton() {
return (
<Link href="/chat" className="fixed bottom-6 right-6 z-50 ...">
<BotIcon className="size-8" />
</Link>
);
}

"use client";
import { useRouter } from "next/navigation";
import { StackProvider } from "@btst/stack/context";
import { ChatLayout } from "@btst/stack/plugins/ai-chat/client";
const getBaseURL = () =>
typeof window !== "undefined"
? (process.env.NEXT_PUBLIC_BASE_URL || window.location.origin)
: (process.env.BASE_URL || "http://localhost:3000");
export default function ChatModal() {
const router = useRouter();
const baseURL = getBaseURL();
return (
/* Backdrop */
<div className="fixed inset-0 z-50 bg-black/50" onClick={() => router.back()}>
{/* Modal card */}
<div className="..." onClick={(e) => e.stopPropagation()}>
<StackProvider ...>
{/* Panel is pre-opened; no trigger button rendered */}
<ChatLayout
apiBaseURL={baseURL}
apiBasePath="/api/data"
layout="widget"
defaultOpen={true}
showTrigger={false}
/>
</StackProvider>
</div>
</div>
);
}

Example usage with localStorage persistence:
<ChatLayout
apiBaseURL={baseURL}
apiBasePath="/api/data"
layout="widget"
widgetHeight="500px"
initialMessages={savedMessages}
onMessagesChange={(messages) => localStorage.setItem("chat", JSON.stringify(messages))}
/>

React Data Hooks and Types
You can import the hooks from "@btst/stack/plugins/ai-chat/client/hooks" to use in your components.
import {
useConversations,
useConversation,
useSuspenseConversations,
useSuspenseConversation,
useCreateConversation,
useRenameConversation,
useDeleteConversation,
} from "@btst/stack/plugins/ai-chat/client/hooks"

UseConversationsOptions
UseConversationsResult
UseConversationOptions
UseConversationResult
Example usage:
import {
useConversations,
useConversation,
useCreateConversation,
useRenameConversation,
useDeleteConversation,
} from "@btst/stack/plugins/ai-chat/client/hooks"
function ConversationsList() {
// List all conversations
const { conversations, isLoading, error, refetch } = useConversations()
// Get single conversation with messages
const { conversation } = useConversation(selectedId)
// Mutations
const createMutation = useCreateConversation()
const renameMutation = useRenameConversation()
const deleteMutation = useDeleteConversation()
const handleCreate = async () => {
const newConv = await createMutation.mutateAsync({ title: "New Chat" })
// Navigate to new conversation
}
const handleRename = async (id: string, newTitle: string) => {
await renameMutation.mutateAsync({ id, title: newTitle })
}
const handleDelete = async (id: string) => {
await deleteMutation.mutateAsync({ id })
}
// ... render conversations
}

Model & Tools Configuration
Using Different Models
import { openai } from "@ai-sdk/openai"
import { anthropic } from "@ai-sdk/anthropic"
import { google } from "@ai-sdk/google"
// Use OpenAI
aiChat: aiChatBackendPlugin({
model: openai("gpt-4o"),
})
// Or use Anthropic
aiChat: aiChatBackendPlugin({
model: anthropic("claude-3-5-sonnet-20241022"),
})
// Or use Google
aiChat: aiChatBackendPlugin({
model: google("gemini-1.5-pro"),
})

Adding Tools
Use AI SDK v5 tools for function calling:
import { tool } from "ai"
import { z } from "zod"
const weatherTool = tool({
description: "Get the current weather in a location",
inputSchema: z.object({
location: z.string().describe("The city and state"),
}),
execute: async ({ location }) => {
// Your implementation
return { temperature: 72, condition: "sunny" }
},
})
aiChat: aiChatBackendPlugin({
model: openai("gpt-4o"),
tools: {
getWeather: weatherTool,
},
})

Custom Tool UI Renderers
By default, tool calls are displayed using a collapsible accordion that shows the tool name, status, input, and output. You can customize this UI by providing custom renderers for specific tools via the toolRenderers override.
Default Tool UI
The default ToolCallDisplay component shows:
- Tool name with status indicator (loading spinner, checkmark, or error icon)
- Collapsible accordion to inspect tool call details
- Input arguments passed to the tool
- Output returned by the tool (when complete)
- Error message (if tool execution failed)
Custom Tool Renderers
Provide custom UI components for specific tools using the toolRenderers override. Each key should match the tool name from your backend configuration:
import type { AiChatPluginOverrides, ToolCallProps } from "@btst/stack/plugins/ai-chat/client"
// Custom weather card component
function WeatherCard({ input, output, isLoading }: ToolCallProps<{ location: string }, { temperature: number; condition: string }>) {
if (isLoading) {
return (
<div className="p-4 border rounded-lg animate-pulse">
<div className="h-4 w-24 bg-muted rounded" />
</div>
)
}
if (!output) return null
return (
<div className="p-4 border rounded-lg bg-gradient-to-r from-blue-50 to-blue-100">
<h4 className="font-medium">{input?.location}</h4>
<p className="text-2xl font-bold">{output.temperature}°F</p>
<p className="text-muted-foreground">{output.condition}</p>
</div>
)
}
// In your layout
<StackProvider<PluginOverrides>
basePath="/pages"
overrides={{
"ai-chat": {
apiBaseURL: baseURL,
apiBasePath: "/api/data",
navigate: (path) => router.push(path),
// Custom tool renderers
toolRenderers: {
getWeather: WeatherCard,
searchDocs: ({ input, output, isLoading }) => (
<SearchResultsCard query={input?.query} results={output} loading={isLoading} />
),
},
}
}}
>
{children}
</StackProvider>

ToolCallProps
Each custom renderer receives these props:
ToolCallState
The possible states of a tool call:
Using the Default ToolCallDisplay
You can also import and use the default ToolCallDisplay component in your custom renderers as a fallback:
import { ToolCallDisplay, type ToolCallProps } from "@btst/stack/plugins/ai-chat/client"
function MyCustomToolRenderer(props: ToolCallProps) {
// Custom rendering for specific states
if (props.state === "output-available" && props.output) {
return <MyCustomOutput data={props.output} />
}
// Fall back to default display for other states
return <ToolCallDisplay {...props} />
}

Public Mode Configuration
For public chatbots without user authentication:
Backend Setup
import { stack } from "@btst/stack"
import { aiChatBackendPlugin } from "@btst/stack/plugins/ai-chat/api"
import { openai } from "@ai-sdk/openai"
// Example rate limiter (implement your own)
const rateLimiter = new Map<string, number>()
const { handler, dbSchema } = stack({
basePath: "/api/data",
plugins: {
aiChat: aiChatBackendPlugin({
model: openai("gpt-4o"),
mode: "public", // Stateless mode - no persistence
systemPrompt: "You are a helpful customer support bot.",
hooks: {
onBeforeChat: async (messages, ctx) => {
// Implement rate limiting
const ip = ctx.headers?.get("x-forwarded-for") || "unknown"
const requests = rateLimiter.get(ip) || 0
if (requests > 10) {
throw new Error("Rate limit exceeded")
}
rateLimiter.set(ip, requests + 1)
},
},
})
},
adapter: (db) => createMemoryAdapter(db)({})
})

Client Setup
aiChat: aiChatClientPlugin({
apiBaseURL: baseURL,
apiBasePath: "/api/data",
siteBaseURL: baseURL,
siteBasePath: "/pages",
queryClient: queryClient,
mode: "public", // Must match backend
})

Context Overrides
overrides={{
"ai-chat": {
mode: "public",
apiBaseURL: baseURL,
apiBasePath: "/api/data",
navigate: (path) => router.push(path),
// No uploadFile needed in public mode typically
}
}}

In public mode, the sidebar is hidden, conversation history is not saved to the database, and only the /chat route is available.
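The Map-based counter in the backend example above only ever increments, so a client stays blocked forever once it crosses the limit. A fixed-window variant you could use in onBeforeChat instead (a sketch; limit and windowMs are tuning knobs, and a single-process Map will not work across multiple server instances):

```typescript
// Fixed-window in-memory rate limiter: allows `limit` requests per `windowMs`
// for each key (e.g. a client IP). The `now` parameter defaults to the clock
// and is injectable for testing.
function createRateLimiter(limit: number, windowMs: number) {
  const windows = new Map<string, { count: number; resetAt: number }>()
  return function allow(key: string, now: number = Date.now()): boolean {
    const entry = windows.get(key)
    if (!entry || now >= entry.resetAt) {
      // Start a fresh window for this key
      windows.set(key, { count: 1, resetAt: now + windowMs })
      return true
    }
    entry.count++
    return entry.count <= limit
  }
}
```

In onBeforeChat you would call allow(ip) and throw new Error("Rate limit exceeded") when it returns false.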
Local Storage Persistence
By default, public mode is completely stateless - messages are lost on page refresh. However, you can persist conversations to localStorage (or any storage mechanism) using the initialMessages and onMessagesChange props on ChatLayout:
"use client";
import { ChatLayout, type UIMessage } from "@btst/stack/plugins/ai-chat/client";
import { useLocalStorage } from "@/hooks/useLocalStorage"; // Your hook
const baseURL = typeof window !== "undefined" ? window.location.origin : "http://localhost:3000";
export default function PublicChat() {
const [messages, setMessages] = useLocalStorage<UIMessage[]>(
"public-chat-messages",
[]
);
return (
<ChatLayout
apiBaseURL={baseURL}
apiBasePath="/api/data"
layout="widget"
initialMessages={messages}
onMessagesChange={setMessages}
/>
);
/>

SSR Hydration: When using localStorage with SSR frameworks, ensure you handle hydration correctly to avoid mismatches. The initialMessages prop is applied on mount, so it works well with client-side storage hooks that return an empty array during SSR.
Key points:
- initialMessages - Pre-populates the chat with saved messages on mount
- onMessagesChange - Called whenever messages change (only fires in public mode)
- The UIMessage type is re-exported from @btst/stack/plugins/ai-chat/client for convenience
This pattern enables:
- localStorage - Simple browser-based persistence
- sessionStorage - Per-tab conversation history
- IndexedDB - Larger storage for long conversations
- External state management - Redux, Zustand, etc.
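Whichever storage you choose, parse saved messages back defensively so corrupt or missing data degrades to an empty conversation instead of breaking hydration. A sketch (loadSavedMessages is an illustrative name, and the minimal Message type stands in for the plugin's UIMessage):

```typescript
// Minimal stand-in for the plugin's UIMessage shape; the real type is
// exported from "@btst/stack/plugins/ai-chat/client".
type Message = { id: string; role: string }

// Parse persisted chat history, returning [] for missing, malformed, or
// non-array data so the chat always mounts with a valid message list.
function loadSavedMessages(raw: string | null): Message[] {
  if (!raw) return []
  try {
    const parsed = JSON.parse(raw)
    return Array.isArray(parsed) ? parsed : []
  } catch {
    return []
  }
}
```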
Localization
Customize UI strings by providing a localization override:
overrides={{
"ai-chat": {
// ... other overrides
localization: {
CHAT_PLACEHOLDER: "Ask me anything...",
CHAT_EMPTY_STATE: "How can I help you today?",
SIDEBAR_NEW_CHAT: "Start new conversation",
CONVERSATION_DELETE_CONFIRM_TITLE: "Delete this chat?",
// See AiChatLocalization type for all available strings
}
}
}}

AiChatLocalization
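Keys you omit keep their built-in values; conceptually the plugin spreads your partial map over its defaults. A sketch with hypothetical keys and default strings (the plugin's actual defaults may differ):

```typescript
// Hypothetical built-in strings; the plugin ships its own defaults.
const defaultLocalization = {
  CHAT_PLACEHOLDER: "Type a message...",
  CHAT_EMPTY_STATE: "Start a conversation",
  SIDEBAR_NEW_CHAT: "New chat",
}
type Localization = typeof defaultLocalization

// Spread order makes override keys win while untouched keys fall back to
// the defaults.
function resolveLocalization(overrides: Partial<Localization>): Localization {
  return { ...defaultLocalization, ...overrides }
}
```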
Server-side Data Access
The AI Chat plugin exposes standalone getter functions for server-side use cases, giving you direct access to conversation history without going through HTTP.
Two patterns
Pattern 1 — via stack().api
import { myStack } from "./stack";
// List all conversations (optionally scoped to a user)
const all = await myStack.api["ai-chat"].getAllConversations();
const userConvs = await myStack.api["ai-chat"].getAllConversations("user-123");
// Get a conversation with its full message history
const conv = await myStack.api["ai-chat"].getConversationById("conv-456");
if (conv) {
console.log(conv.messages); // Message[]
}

Pattern 2 — direct import
import {
getAllConversations,
getConversationById,
} from "@btst/stack/plugins/ai-chat/api";
const conv = await getConversationById(myAdapter, conversationId);

Available getters
| Function | Description |
|---|---|
| getAllConversations(adapter, userId?) | Returns all conversations, optionally filtered by userId |
| getConversationById(adapter, id) | Returns a conversation with messages, or null |
Route-Aware AI Context
The AI chat plugin supports route-aware context — pages register contextual data and client-side tool handlers that the chat widget reads automatically. This enables:
- The AI to summarize content from the current page
- The AI to fill in forms or update editors on the user's behalf
- Dynamic suggestion chips that change based on which page is open
Setup
Step 1 — Add PageAIContextProvider to your root layout (above all StackProvider instances):
import { PageAIContextProvider } from "@btst/stack/plugins/ai-chat/client/context"
export default function RootLayout({ children }) {
return (
<html>
<body>
<PageAIContextProvider>
{/* Everything else, including StackProvider and your chat modal */}
{children}
</PageAIContextProvider>
</body>
</html>
)
}

Place PageAIContextProvider above any StackProvider so it spans both the main app tree and any chat modals rendered as Next.js parallel/intercept routes. Both trees need to be descendants of the same context instance for context to flow between them.
Step 2 — Enable page tools in your backend config:
aiChatBackendPlugin({
model: openai("gpt-4o"),
enablePageTools: true, // activates built-in fillBlogForm, updatePageLayers tools
})

Registering Page Context
Call useRegisterPageAIContext in any page component to publish context to the chat. The registration is cleaned up automatically when the component unmounts.
import { useRegisterPageAIContext } from "@btst/stack/plugins/ai-chat/client/context"
// Blog post page — provides content for summarization
function BlogPostPage({ post }) {
useRegisterPageAIContext(post ? {
routeName: "blog-post",
pageDescription: `Blog post: "${post.title}"\n\n${post.content.slice(0, 16000)}`,
suggestions: ["Summarize this post", "What are the key takeaways?"],
} : null)
// ...
}

Pass null to conditionally disable context (e.g. while data is loading).
Client-Side Tools
Pages can expose client-side tool handlers — functions the AI can call to mutate page state. Built-in tools (fillBlogForm, updatePageLayers) are already wired up in the blog and UI builder plugins. For custom pages:
1. Register a tool handler on the page:
import { useRegisterPageAIContext } from "@btst/stack/plugins/ai-chat/client/context"
function ProductPage({ product, cart }) {
useRegisterPageAIContext({
routeName: "product-detail",
pageDescription: `Product: ${product.name}. Price: $${product.price}.`,
suggestions: ["Tell me about this product", "Add to cart"],
clientTools: {
addToCart: async ({ quantity }) => {
cart.add(product.id, quantity)
return { success: true, message: `Added ${quantity} to cart` }
}
}
})
}

2. Register the tool schema server-side (so the LLM knows the parameter shapes):
import { tool } from "ai"
import { z } from "zod"
aiChatBackendPlugin({
model: openai("gpt-4o"),
enablePageTools: true,
clientToolSchemas: {
addToCart: tool({
description: "Add the current product to the shopping cart",
inputSchema: z.object({ quantity: z.number().int().min(1) }),
// No execute — this is handled client-side
}),
}
})

When the AI calls addToCart, the return value from the client handler is sent back to the model as the tool result, allowing the conversation to continue.
Built-In Page Tools
| Tool | Registered by | Description |
|---|---|---|
| fillBlogForm | Blog new/edit pages | Fills title, content, excerpt, and tags in the post editor |
| updatePageLayers | UI builder edit page | Replaces the component layer tree in the page builder |
API Reference
PageAIContextProvider
import { PageAIContextProvider } from "@btst/stack/plugins/ai-chat/client/context"
<PageAIContextProvider>
{children}
</PageAIContextProvider>

useRegisterPageAIContext(config)
import { useRegisterPageAIContext } from "@btst/stack/plugins/ai-chat/client/context"
useRegisterPageAIContext({
routeName: string, // shown as badge in chat header
pageDescription: string, // injected into system prompt (max 8,000 chars)
suggestions?: string[], // quick-action chips in chat empty state
clientTools?: { // handlers the AI can invoke
[toolName: string]: (args: any) => Promise<{ success: boolean; message?: string }>
}
})

AiChatBackendConfig — new options
| Option | Type | Default | Description |
|---|---|---|---|
| enablePageTools | boolean | false | Activate page tool support |
| clientToolSchemas | Record<string, Tool> | — | Custom tool schemas for non-BTST pages |
| hooks.onBeforeToolsActivated | (toolNames, routeName, context) => string[] | — | Filter active tools per request; throw to abort with 403 |
Tool Authorization Hook
onBeforeToolsActivated runs server-side after the structural routeName allowlist check. Use it to add user-level authorization — for example, restricting which tools are available based on the authenticated user's role or subscription tier.
import type { AiChatBackendHooks } from "@btst/stack/plugins/ai-chat/api"
aiChatBackendPlugin({
enablePageTools: true,
hooks: {
onBeforeToolsActivated: async (toolNames, routeName, context) => {
const role = await getUserRole(context.headers);
// Viewers cannot use any interactive tools
if (role === "viewer") return [];
// Non-editors cannot fill the blog form
if (role !== "editor") return toolNames.filter(t => t !== "fillBlogForm");
return toolNames;
},
},
})

| Parameter | Type | Description |
|---|---|---|
| toolNames | string[] | Tools that passed the routeName allowlist check |
| routeName | string \| undefined | Claimed route name from the request |
| context | ChatApiContext | Full request context (headers, body, etc.) |
Return a subset of toolNames to allow, or [] to suppress all page tools. Throw an Error to abort the entire chat request — the endpoint catches it and returns a 403 response.
This hook complements the structural routeName allowlist check, which validates that each built-in tool is only requested from its intended page; onBeforeToolsActivated is the right place to layer user-specific logic on top.
Shadcn Registry
The AI Chat plugin UI layer is distributed as a shadcn registry block. Use the registry to eject and fully customize the page components while keeping all data-fetching and API logic from @btst/stack.
The registry installs only the view layer. Hooks and data-fetching continue to come from @btst/stack/plugins/ai-chat/client/hooks.
npx shadcn@latest add https://github.com/better-stack-ai/better-stack/blob/main/packages/stack/registry/btst-ai-chat.json

pnpx shadcn@latest add https://github.com/better-stack-ai/better-stack/blob/main/packages/stack/registry/btst-ai-chat.json

bunx shadcn@latest add https://github.com/better-stack-ai/better-stack/blob/main/packages/stack/registry/btst-ai-chat.json

This copies the page components into src/components/btst/ai-chat/client/ in your project. All relative imports remain valid and you can edit the files freely — the plugin's data layer stays intact.
Using ejected components
After installing, wire your custom components into the plugin via the pageComponents option in your client plugin config:
import { aiChatClientPlugin } from "@btst/stack/plugins/ai-chat/client"
// Import your ejected (and customized) page components
import { ChatPageComponent } from "@/components/btst/ai-chat/client/components/pages/chat-page"
import { ChatConversationPageComponent } from "@/components/btst/ai-chat/client/components/pages/chat-conversation-page"
aiChatClientPlugin({
apiBaseURL: "...",
apiBasePath: "/api/data",
queryClient,
pageComponents: {
chat: ChatPageComponent, // replaces the chat home page
chatConversation: ChatConversationPageComponent, // replaces the conversation page
},
})

Any key you omit falls back to the built-in default, so you can override just the pages you want to change.

