# @kognitivedev/rag (v0.2.28)
Provider-agnostic retrieval toolkit with chunkers, vector stores, pipeline ingestion, and managed indexing primitives.
## Installation

```bash
bun add @kognitivedev/rag @kognitivedev/tools zod
```

Add @kognitivedev/adapter-ai-sdk only if you want the AI SDK embedding bridge.
## Quick Start

```ts
import { DocumentPipeline, InMemoryVectorStore, RecursiveTextChunker } from "@kognitivedev/rag";
import { AISDKEmbeddingProvider } from "@kognitivedev/adapter-ai-sdk";
import { openai } from "@ai-sdk/openai";

const pipeline = new DocumentPipeline({
  chunker: new RecursiveTextChunker({ chunkSize: 1000 }),
  embedder: new AISDKEmbeddingProvider({
    model: openai.embedding("text-embedding-3-small"),
  }),
  vectorStore: new InMemoryVectorStore(),
});

await pipeline.ingest([{ content: "Your documents here" }]);
const results = await pipeline.search("query", { topK: 5 });
const tool = pipeline.asTool();
```

## Managed Indexing
Use `IndexManager` when you need a higher-level index abstraction on top of the chunk/embed/store primitives.
```ts
import { IndexManager, InMemoryVectorStore, RecursiveTextChunker } from "@kognitivedev/rag";

const manager = new IndexManager({
  chunker: new RecursiveTextChunker({ chunkSize: 1000, overlap: 200 }),
  embedder, // any EmbeddingProvider, e.g. the AISDKEmbeddingProvider from the Quick Start
  vectorStore: new InMemoryVectorStore(),
});

const index = manager.createIndex({ name: "docs" });
const source = manager.addSource({ indexId: index.id, kind: "file_upload" });

await manager.syncSource({
  indexId: index.id,
  sourceId: source.id,
  filename: "report.md",
  documents: [{ content: "# Report\n\nImportant content" }],
});
```

Supported retrieval modes:

- `chunks`
- `files_via_metadata`
- `files_via_content`
- `auto_routed`
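The `auto_routed` mode implies that a query is dispatched to one of the other modes. The package's actual routing logic is not documented here; purely as an illustration, a toy router over the same mode names might look like this (the heuristics and function name are hypothetical, not the library's implementation):

```ts
// Toy query router: illustrative only, NOT @kognitivedev/rag's implementation.
// The mode names mirror the README; the heuristics are entirely hypothetical.
type RetrievalMode = "chunks" | "files_via_metadata" | "files_via_content";

function routeQuery(query: string): RetrievalMode {
  // Queries mentioning a filename or extension look like metadata lookups.
  if (/\.\w{2,4}\b|filename/i.test(query)) return "files_via_metadata";
  // Broad "which document covers X" questions favor whole-file retrieval.
  if (/which (file|document|doc)\b/i.test(query)) return "files_via_content";
  // Default: fine-grained chunk search.
  return "chunks";
}
```

A router like this is why per-mode retrieval is exposed at all: chunk search optimizes for precise passages, while the file-level modes answer "which document" questions.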
## Highlights

- 6 chunkers
- 3 vector stores
- Provider-agnostic `EmbeddingProvider` interface
- `pipeline.asTool()` returns a Kognitive tool that works with agents and runtime adapters
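Because the pipeline only depends on the `EmbeddingProvider` interface, any embedding backend can be plugged in. A minimal sketch of a custom provider, assuming the contract boils down to an `embed(texts) => vectors` method (check the package's type definitions for the exact shape):

```ts
// Hypothetical shape of the EmbeddingProvider contract; the real interface
// ships with @kognitivedev/rag's type definitions and may differ.
interface EmbeddingProvider {
  embed(texts: string[]): Promise<number[][]>;
}

// Deterministic toy provider: folds character codes into a fixed-size vector.
// Handy for unit tests; use a model-backed provider for real retrieval.
class HashEmbeddingProvider implements EmbeddingProvider {
  constructor(private dimensions: number = 8) {}

  async embed(texts: string[]): Promise<number[][]> {
    return texts.map((text) => {
      const vector = new Array<number>(this.dimensions).fill(0);
      for (let i = 0; i < text.length; i++) {
        vector[i % this.dimensions] += text.charCodeAt(i) / 1000;
      }
      return vector;
    });
  }
}
```

A provider like this can stand in for `AISDKEmbeddingProvider` anywhere a `DocumentPipeline` or `IndexManager` expects an `embedder`, which keeps tests fast and offline.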
For PDFs, DOCX, images, and OCR-backed ingestion, preprocess files with `@kognitivedev/documents`.
