@sarracin0/ai-kit · v0.1.0 · 101 downloads
@sarracin0/ai-kit
Utility-first AI workflow toolkit for Next.js. Thin wrappers over Vercel AI SDK v6 that remove boilerplate without hiding the flow.
What is ai-kit?
Most AI frameworks try to own your entire stack. ai-kit does the opposite: it gives you small, composable utilities that handle the repetitive parts — input normalization, tool result extraction, embedding pipelines, streaming chat setup — while you keep full control over models, prompts, schemas, and business logic.
Built for: Vercel AI SDK ^6.0.0 · Next.js App Router · TypeScript
Philosophy: You control every step. The utilities just make each step cleaner.
What ai-kit handles
- Structured object generation from any input type (string, PDF, content parts)
- Tool calling with automatic result extraction
- Text chunking, embedding, and vector storage
- RAG similarity search (with a ready-made AI tool)
- Streaming chat handlers with auth, context, and persistence
- React chat hook with transport setup
- Drizzle ORM table factory for embeddings with HNSW indexing
What stays in your project
- AI models and providers (openai, anthropic, etc.)
- Prompts and system messages
- Zod schemas (your domain: quiz, flashcard, script, etc.)
- UI components
- Database queries and schemas
- Auth, rate limiting, business logic
Table of Contents
- Quick Start
- Installation
- API Reference
- Full Example
- Using in Existing Projects
- Publishing
- Contributing
- License
Quick Start
```sh
npm install @sarracin0/ai-kit ai zod
```

```ts
import { generateStructured } from '@sarracin0/ai-kit'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'

const result = await generateStructured(openai('gpt-4o'), {
  schema: z.object({
    title: z.string(),
    summary: z.string(),
    tags: z.array(z.string()),
  }),
  system: 'Extract metadata from the given text.',
  input: 'Artificial intelligence is transforming how we build software...',
})

console.log(result.title) // typed, validated
console.log(result.summary)
console.log(result.tags)
```

Installation
```sh
npm install @sarracin0/ai-kit
```

Peer dependencies
Required:
```sh
npm install ai@^6 zod@^3
```

Optional (only if you use the corresponding modules):

```sh
# For the React chat hook (./react)
npm install @ai-sdk/react@^3

# For the Drizzle embeddings table (./drizzle)
npm install drizzle-orm@^0.39
```

Local linking (for development)
If you're working on ai-kit alongside your project:
```sh
# From your project directory
npm install ../ai-kit
```

Or with npm link:

```sh
# In ai-kit directory
npm link

# In your project directory
npm link @sarracin0/ai-kit
```

API Reference
Generation
Import from @sarracin0/ai-kit.
generateStructured(model, options)
Generate a typed, Zod-validated object from an AI model. Wraps generateObject() with automatic input normalization.
Accepts three input types:
- `string` — converted to a text message
- `Buffer | Uint8Array` — converted to a file message (requires `inputMediaType`)
- `ContentPart[]` — passed through as-is
```ts
import { generateStructured } from '@sarracin0/ai-kit'

// From a string
const result = await generateStructured(model, {
  schema: mySchema,
  system: 'Analyze the text.',
  input: 'Some text to analyze...',
})

// From a PDF buffer
const script = await generateStructured(model, {
  schema: scriptSchema,
  system: 'Generate a podcast script from this document.',
  input: pdfBuffer,
  inputMediaType: 'application/pdf',
  inputText: 'Create a two-speaker dialogue about this content.',
})
```

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| system | string | Yes | System prompt |
| input | string \| Buffer \| Uint8Array \| ContentPart[] | Yes | Input content |
| schema | ZodType | Yes | Zod schema for output validation |
| inputMediaType | string | When input is Buffer | MIME type (e.g. 'application/pdf') |
| inputText | string | No | Additional text alongside file input |
Returns: Promise<z.infer<T>> — the validated object.
callTool(model, toolName, tool, options)
Call a single AI tool and get the result directly. Wraps generateText() with automatic tool result extraction from steps.flatMap(s => s.toolResults).
```ts
import { callTool, createQuizTool } from '@sarracin0/ai-kit'

const quizTool = createQuizTool({ type: 'pre', questionSchema, min: 3, max: 5 })

const quiz = await callTool(model, 'generate_pre_quiz', quizTool, {
  system: 'Generate comprehension questions about this content.',
  input: `Content: ${script.rawText}`,
})

if (quiz) {
  console.log(quiz.pre_quiz) // directly the tool output
}
```

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| system | string | Yes | — | System prompt |
| input | string | Yes | — | User message |
| maxSteps | number | No | 3 | Max AI steps before stopping |
Returns: Promise<TResult | null> — the tool's output, or null if the model didn't call the tool.
Tools
Import from @sarracin0/ai-kit.
createTool(options)
Factory for AI SDK v6 tools with less boilerplate. Uses inputSchema (the v6 standard).
```ts
import { createTool } from '@sarracin0/ai-kit'
import { z } from 'zod'

const flashcardTool = createTool({
  name: 'generate_flashcards',
  description: 'Generate study flashcards from the content.',
  schema: z.object({
    flashcards: z.array(z.object({
      front: z.string(),
      back: z.string(),
    })).min(5).max(15),
  }),
})
```

By default, the execute function returns the input with { status: 'generated' } — a passthrough pattern where the AI's structured output is the result. You can override execute for custom logic.
| Option | Type | Required | Description |
|--------|------|----------|-------------|
| name | string | Yes | Tool identifier |
| description | string | Yes | Description shown to the AI model |
| schema | ZodType | Yes | Input validation schema |
| execute | (input) => Promise<any> | No | Custom execute function |
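The passthrough pattern is easiest to see as a plain function. The sketch below is hypothetical (the real default lives inside createTool); the function names are made up for illustration:

```typescript
// Sketch of the default passthrough execute: the model's validated
// tool input becomes the tool result, tagged with a status field.
type ToolInput = Record<string, unknown>

const passthroughExecute = async (input: ToolInput) => ({
  ...input,
  status: 'generated' as const,
})

// A custom execute can transform or persist before returning:
const customExecute = async (input: ToolInput) => {
  const count = Array.isArray(input.flashcards) ? input.flashcards.length : 0
  // e.g. write to a database here before returning
  return { ...input, status: 'generated' as const, count }
}
```

Overriding execute is useful when the tool's side effect (saving, enriching, validating against external state) matters more than the raw structured output.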
createQuizTool(options)
Preset for the common quiz generation pattern. Builds on createTool.
```ts
import { createQuizTool } from '@sarracin0/ai-kit'

const preQuizTool = createQuizTool({
  type: 'pre',
  questionSchema,
  min: 3,
  max: 5,
})

const postQuizTool = createQuizTool({
  type: 'post',
  questionSchema,
  min: 5,
  max: 8,
  description: 'Generate personalized questions targeting weak areas.',
})
```

This creates a tool named generate_{type}_quiz with an input schema containing a {type}_quiz array field constrained by min/max.
| Option | Type | Required | Description |
|--------|------|----------|-------------|
| type | string | Yes | Quiz type identifier (e.g. 'pre', 'post') |
| questionSchema | ZodType | Yes | Schema for a single question |
| min | number | Yes | Minimum number of questions |
| max | number | Yes | Maximum number of questions |
| description | string | No | Custom description (auto-generated if omitted) |
RAG
Import from @sarracin0/ai-kit.
chunkText(text, options?)
Split text into chunks on sentence boundaries. Never exceeds maxSize characters per chunk.
```ts
import { chunkText } from '@sarracin0/ai-kit'

const chunks = chunkText(longArticle, { maxSize: 300 })
// ['First sentence. Second sentence.', 'Third sentence. Fourth.', ...]
```

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| maxSize | number | No | 500 | Max characters per chunk |
Returns: string[]
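A minimal sentence-boundary chunker in the same spirit can look like this. This is a sketch of the technique, not ai-kit's actual implementation:

```typescript
// Greedily pack whole sentences into chunks of at most maxSize characters.
function chunkBySentence(text: string, maxSize = 500): string[] {
  // Naive sentence split: break after ., ! or ? followed by whitespace.
  const sentences = text.split(/(?<=[.!?])\s+/)
  const chunks: string[] = []
  let current = ''
  for (const sentence of sentences) {
    const candidate = current ? `${current} ${sentence}` : sentence
    if (candidate.length <= maxSize) {
      current = candidate
    } else {
      if (current) chunks.push(current)
      // A single oversized sentence is truncated in this sketch;
      // a production chunker would split it instead.
      current = sentence.length <= maxSize ? sentence : sentence.slice(0, maxSize)
    }
  }
  if (current) chunks.push(current)
  return chunks
}
```

The key property (which ai-kit also guarantees) is that no chunk exceeds maxSize, while sentence boundaries are preserved whenever possible.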
embedChunks(model, texts, options)
Chunk texts, generate embeddings, and store them in one call. Wraps embedMany(). You provide the store callback to persist embeddings however you want.
```ts
import { embedChunks } from '@sarracin0/ai-kit'

await embedChunks(embeddingModel, scriptTexts, {
  chunkSize: 500,
  store: async (items) => {
    await db.insert(embeddingsTable).values(
      items.map((i) => ({ ...i, podcastId, sourceType: 'script' }))
    )
  },
})
```

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| chunkSize | number | No | If set, texts are chunked before embedding |
| store | (items: EmbeddingItem[]) => Promise<void> | Yes | Persistence callback |
Each EmbeddingItem has { content: string, embedding: number[] }.
Returns: Promise<EmbeddingItem[]> — the items that were stored.
ragSearch(model, query, options)
Embed a query and run similarity search. Wraps embed(). You provide the search callback that runs the actual DB query.
```ts
import { ragSearch } from '@sarracin0/ai-kit'

const results = await ragSearch(embeddingModel, 'What is photosynthesis?', {
  search: async (embedding) => {
    const sim = sql`1 - (${cosineDistance(table.embedding, embedding)})`
    return db.select({ content: table.content, similarity: sim })
      .from(table)
      .where(gt(sim, 0.5))
      .orderBy(desc(sim))
      .limit(6)
  },
})
```

Returns: Promise<SearchResult[]> where each result has { content: string, similarity: number }.
ragSearchTool(model, searchFn, options?)
Create an AI SDK tool that performs RAG search. Plug it directly into chatHandler or streamText.
```ts
import { ragSearchTool } from '@sarracin0/ai-kit'

const search = ragSearchTool(embeddingModel, mySearchFn, {
  description: 'Search the podcast knowledge base.',
})

// Use in chatHandler:
// tools: { getInformation: search }
```

The tool accepts { question: string } as input from the AI model.
Chat (Server)
Import from @sarracin0/ai-kit.
chatHandler(config)
Create a Next.js App Router POST handler for AI chat with streaming. Handles the full lifecycle: auth → parse body → load context → stream → save.
```ts
// app/api/chat/route.ts
import { chatHandler, ragSearchTool } from '@sarracin0/ai-kit'
import { openai } from '@ai-sdk/openai'

export const POST = chatHandler({
  model: openai('gpt-4o'),
  auth: async (req) => {
    const user = await verifyAuth()
    if (!user) return new Response('Unauthorized', { status: 401 })
    return { user }
  },
  getContext: async (body, auth) => {
    const podcast = await getPodcast(body.podcastId)
    return { title: podcast.title, podcastId: podcast.id }
  },
  system: (ctx) => `You are a helpful tutor for "${ctx.title}".`,
  tools: (ctx, auth) => ({
    getInformation: ragSearchTool(embeddingModel, (embedding) =>
      findRelevantContent(embedding, ctx.podcastId)
    ),
  }),
  onFinish: async (messages, ctx, auth) => {
    await saveChatHistory(ctx.podcastId, auth.user.id, messages)
  },
})
```

| Config | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| model | LanguageModel | Yes | — | AI model |
| auth | (req) => Promise<{user} \| Response> | Yes | — | Auth function. Return a Response to reject. |
| getContext | (body, auth) => Promise<T> | No | — | Load context from request |
| system | string \| (ctx, auth) => string | Yes | — | System prompt (static or dynamic) |
| tools | Record \| (ctx, auth) => Record | No | — | AI tools (static or dynamic) |
| maxSteps | number | No | 5 | Max AI reasoning steps |
| onFinish | (messages, ctx, auth) => Promise<void> | No | — | Called when stream ends (even on disconnect) |
| parseBody | (req) => Promise<any \| Response> | No | req.json() | Custom body parser |
| messageIdPrefix | string | No | 'msg' | Prefix for generated message IDs |
Error responses: 400 (missing messages), 401 (auth failure), 500 (unexpected error).
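The lifecycle above can be sketched as ordinary control flow. This is a simplified, hypothetical model of the handler's shape (sketchChatHandler and its config names are invented for illustration; it is not ai-kit's source, and the real handler streams rather than returning a plain value):

```typescript
// Sketch: auth → parse body → load context → respond → onFinish.
type Handler = (req: { json: () => Promise<any> }) => Promise<{ status: number; body?: any }>

function sketchChatHandler(config: {
  auth: (req: any) => Promise<any>                           // return { rejected: true } to reject
  getContext?: (body: any, auth: any) => Promise<any>
  respond: (body: any, ctx: any, auth: any) => Promise<any>  // stands in for streaming
  onFinish?: (result: any, ctx: any, auth: any) => Promise<void>
}): Handler {
  return async (req) => {
    const auth = await config.auth(req)
    if (auth && auth.rejected) return { status: 401 }
    const body = await req.json()
    if (!body.messages) return { status: 400 }
    const ctx = config.getContext ? await config.getContext(body, auth) : undefined
    const result = await config.respond(body, ctx, auth)
    if (config.onFinish) await config.onFinish(result, ctx, auth)
    return { status: 200, body: result }
  }
}
```

The point of the sketch is the ordering: auth runs before the body is parsed, so an unauthenticated request gets 401 even if its body is malformed.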
Chat (Client)
Import from @sarracin0/ai-kit/react.
useAIChat(options?)
React hook for AI chat. Thin wrapper over useChat() that handles DefaultChatTransport setup automatically.
```tsx
import { useAIChat, getMessageText } from '@sarracin0/ai-kit/react'

function Chat({ podcastId }: { podcastId: string }) {
  const { messages, sendMessage, isLoading } = useAIChat({
    endpoint: '/api/chat',
    body: { podcastId },
    initialMessages: loadedHistory,
  })

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>{getMessageText(m.parts)}</p>
      ))}
      <button onClick={() => sendMessage({ text: input })}>Send</button>
    </div>
  )
}
```

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| endpoint | string | No | '/api/chat' | API route for the chat handler |
| body | Record<string, unknown> | No | — | Extra fields sent with every request |
| initialMessages | UIMessage[] | No | — | Pre-loaded chat history |
Returns: { messages, sendMessage, status, isLoading, error }
getMessageText(parts)
Extract text content from a UIMessage's parts array. In AI SDK v6, messages use a parts array instead of a content string.
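Conceptually, extracting text means filtering the parts array for text parts and concatenating them. The sketch below shows that shape (extractText and MessagePart are hypothetical names, not ai-kit's source):

```typescript
// A UIMessage part in AI SDK v6 carries a type tag; only 'text' parts
// hold displayable text. Tool calls, files, etc. are skipped here.
type MessagePart = { type: string; text?: string }

function extractText(parts: MessagePart[]): string {
  return parts
    .filter((p) => p.type === 'text' && typeof p.text === 'string')
    .map((p) => p.text)
    .join('')
}
```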
```tsx
{messages.map((m) => (
  <div key={m.id}>{getMessageText(m.parts)}</div>
))}
```

Drizzle
Import from @sarracin0/ai-kit/drizzle.
createEmbeddingsTable(name, options?)
Create a Drizzle ORM PostgreSQL table for vector embeddings with an HNSW index.
```ts
import { createEmbeddingsTable } from '@sarracin0/ai-kit/drizzle'
import { varchar, uuid } from 'drizzle-orm/pg-core'

export const embeddings = createEmbeddingsTable('embeddings', {
  extraColumns: {
    podcastId: uuid('podcast_id').notNull(),
    sessionId: uuid('session_id'),
  },
})
```

Generated columns: id (uuid PK), sourceType (varchar), content (text), embedding (vector).
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| dimensions | number | No | 1536 | Vector dimensions (1536 for OpenAI text-embedding-3-small) |
| extraColumns | Record<string, PgColumnBuilder> | No | — | Project-specific columns |
| extraConfig | (table) => PgTableExtraConfig | No | — | Additional indexes or constraints |
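If you need indexes beyond the built-in HNSW one, extraConfig receives the generated table. A sketch, assuming Drizzle's standard index helper (the index name embeddings_podcast_idx is a hypothetical choice):

```ts
import { createEmbeddingsTable } from '@sarracin0/ai-kit/drizzle'
import { uuid, index } from 'drizzle-orm/pg-core'

export const embeddings = createEmbeddingsTable('embeddings', {
  extraColumns: {
    podcastId: uuid('podcast_id').notNull(),
  },
  // Add a b-tree index on the filter column so similarity queries
  // scoped to one podcast stay fast.
  extraConfig: (table) => ({
    podcastIdx: index('embeddings_podcast_idx').on(table.podcastId),
  }),
})
```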
Full Example
A complete workflow: upload a document, generate structured content, embed it, and chat with RAG.
```ts
// lib/ai/models.ts — your project, your models
import { openai } from '@ai-sdk/openai'

export const generationModel = openai('gpt-4o')
export const chatModel = openai('gpt-4o')
export const embeddingModel = openai.embedding('text-embedding-3-small')
```

Step 1: Generate structured content from a PDF
```ts
import { generateStructured } from '@sarracin0/ai-kit'
import { generationModel } from '@/lib/ai/models'
import { scriptSchema } from '@/lib/validators'

const script = await generateStructured(generationModel, {
  schema: scriptSchema,
  system: 'Generate a podcast script from this document.',
  input: pdfBuffer,
  inputMediaType: 'application/pdf',
})
```

Step 2: Generate a quiz using tool calling
```ts
import { callTool, createQuizTool } from '@sarracin0/ai-kit'
import { questionSchema } from '@/lib/validators'

const quizTool = createQuizTool({ type: 'pre', questionSchema, min: 3, max: 5 })

const quiz = await callTool(generationModel, 'generate_pre_quiz', quizTool, {
  system: 'Generate comprehension questions.',
  input: `Script: ${script.rawText}`,
})
```

Step 3: Embed the content for RAG
```ts
import { embedChunks } from '@sarracin0/ai-kit'
import { embeddingModel } from '@/lib/ai/models'

await embedChunks(embeddingModel, [script.rawText], {
  chunkSize: 500,
  store: async (items) => {
    await db.insert(embeddingsTable).values(
      items.map((i) => ({ ...i, podcastId, sourceType: 'script' }))
    )
  },
})
```

Step 4: Set up the chat API route
```ts
// app/api/chat/route.ts
import { chatHandler, ragSearchTool } from '@sarracin0/ai-kit'
import { chatModel, embeddingModel } from '@/lib/ai/models'

export const POST = chatHandler({
  model: chatModel,
  auth: async (req) => {
    const user = await verifyAuth()
    if (!user) return new Response('Unauthorized', { status: 401 })
    return { user }
  },
  getContext: async (body) => await getPodcast(body.podcastId),
  system: (ctx) => `You are a tutor for "${ctx.title}". Answer based on the content.`,
  tools: (ctx) => ({
    getInformation: ragSearchTool(embeddingModel, (embedding) =>
      findRelevantContent(embedding, ctx.id)
    ),
  }),
  onFinish: async (messages, ctx, auth) => {
    await saveChatHistory(ctx.id, auth.user.id, messages)
  },
})
```

Step 5: Add the chat UI
```tsx
// components/chat.tsx
import { useAIChat, getMessageText } from '@sarracin0/ai-kit/react'

export function PodcastChat({ podcastId }: { podcastId: string }) {
  const { messages, sendMessage, isLoading } = useAIChat({
    endpoint: '/api/chat',
    body: { podcastId },
  })

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>{getMessageText(m.parts)}</p>
      ))}
    </div>
  )
}
```

Using in Existing Projects
If you have a Next.js project already using Vercel AI SDK v6, you can adopt ai-kit incrementally:
1. Install
```sh
npm install @sarracin0/ai-kit

# peer deps you likely already have:
npm install ai@^6 zod@^3
```

2. Replace one thing at a time
You don't need to rewrite everything. Pick the most painful boilerplate and replace it:
Structured generation:
```diff
- const { object } = await generateObject({
-   model,
-   schema,
-   system: prompt,
-   messages: [{ role: 'user', content: [{ type: 'file', data: buf, mediaType: 'application/pdf' }, { type: 'text', text: 'Generate...' }] }],
- })
+ const object = await generateStructured(model, {
+   schema,
+   system: prompt,
+   input: buf,
+   inputMediaType: 'application/pdf',
+   inputText: 'Generate...',
+ })
```

Tool result extraction:
```diff
- const result = await generateText({ model, system, messages, tools: { myTool }, stopWhen: stepCountIs(3) })
- const output = result.steps.flatMap(s => s.toolResults).find(r => r !== undefined)
- const data = output ? (output as any).output : null
+ const data = await callTool(model, 'myTool', myTool, { system, input: message })
```

Chat route handler:
```diff
- export async function POST(req: Request) {
-   const user = await verifyAuth()
-   if (!user) return new Response('Unauthorized', { status: 401 })
-   const { messages } = await req.json()
-   const result = streamText({
-     model,
-     system: prompt,
-     messages: await convertToModelMessages(messages),
-     tools: { ... },
-     stopWhen: stepCountIs(5),
-   })
-   result.consumeStream()
-   return result.toUIMessageStreamResponse({ ... })
- }
+ export const POST = chatHandler({
+   model,
+   auth: async () => { ... },
+   system: prompt,
+   tools: { ... },
+   onFinish: async (messages) => { ... },
+ })
```

3. Gradually migrate
There's no lock-in. ai-kit utilities call the same AI SDK functions you're already using. You can mix ai-kit utilities with direct AI SDK calls in the same project.
Publishing
To publish a new version:

```sh
# Login to npm (once)
npm login

# Build and publish
npm run build
npm publish --access public
```

To use a private registry or GitHub Packages instead, update publishConfig in package.json:
```json
{
  "publishConfig": {
    "registry": "https://npm.pkg.github.com"
  }
}
```

For local development across projects, use npm link or direct path installs as described in Installation.
Contributing
See CONTRIBUTING.md for development setup, code style, and pull request guidelines.
License
MIT
