🚀 prompt-chunker (v1.0.0)
A high-performance, framework-agnostic library for intelligent prompt chunking.
Designed for AI developers building applications with LLMs (ChatGPT, Claude, Gemini) where long context needs to be managed efficiently on either the frontend or backend.
✨ Why prompt-chunker?
Sending massive prompts to LLMs often leads to "Message too long" errors or context window exhaustion. prompt-chunker solves this by:
- 🧠 Intelligent Splitting: Respects sentence boundaries and paragraphs.
- 🔗 Contextual Overlap: Maintains coherence by overlapping chunk edges.
- ⚛️ React Ready: Comes with a powerful `usePromptChunker` hook.
- 🛠️ Environment Agnostic: Works in browsers, Node.js, and Edge runtimes.
- ⚡ Ultra Lightweight: Zero dependencies (core), tiny bundle size, tree-shakeable.
📦 Installation
```bash
npm install prompt-chunker
```

📖 Usage
Base Library (Node/Native JS)

```javascript
import { Chunker } from 'prompt-chunker';

const text = "Your very long prompt here...";

const result = Chunker.split(text, {
  maxSize: 1000,
  overlap: 100,
  strategy: 'intelligent'
});

console.log(`Split into ${result.chunks.length} chunks`);
console.log(result.chunks[0].content);
```

React Hook
Perfect for building "Prompt Splitter" UIs or handling automated multi-message flows.
```jsx
import { usePromptChunker } from 'prompt-chunker/react';

function PromptSplitter({ longPrompt }) {
  const {
    currentChunk,
    next,
    prev,
    isLast,
    progress
  } = usePromptChunker(longPrompt, { maxSize: 2000 });

  if (!currentChunk) return null;

  return (
    <div>
      <div className="progress-bar" style={{ width: `${progress}%` }} />
      <h3>Chunk {currentChunk.index} of {currentChunk.total}</h3>
      <pre>{currentChunk.content}</pre>
      <button onClick={prev}>Previous</button>
      <button onClick={next} disabled={isLast}>Next</button>
      <button onClick={() => navigator.clipboard.writeText(currentChunk.content)}>
        Copy to Clipboard
      </button>
    </div>
  );
}
```

💎 Advanced Features
💻 Code Block Protection
Unlike simple string splitters, prompt-chunker detects fenced Markdown code blocks and keeps them intact within a single chunk whenever possible. It will only split a code block if the block itself exceeds `maxSize`.
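The idea can be pictured in a few lines. This is an illustration only, not the library's actual implementation, and `splitOutsideCodeBlocks` is a hypothetical name: treat each triple-backtick fenced block as an atomic segment and pack segments into chunks.

```javascript
// Sketch of code-block-aware splitting (illustration, not prompt-chunker's code).
function splitOutsideCodeBlocks(text, maxSize) {
  // Capture fenced blocks as atomic segments so they are never cut mid-block.
  const segments = text.split(/(```[\s\S]*?```)/).filter(Boolean);
  const chunks = [];
  let current = '';
  for (const seg of segments) {
    if (current && current.length + seg.length > maxSize) {
      chunks.push(current); // close the open chunk before it overflows
      current = '';
    }
    current += seg;
    // A segment larger than maxSize (e.g. a huge code block) still becomes
    // its own oversized chunk; the real library would then hard-split it.
  }
  if (current) chunks.push(current);
  return chunks;
}
```

With a small `maxSize`, prose before and after a fence ends up in separate chunks while the fenced block survives whole in its own chunk.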
🏁 Automatic Progress Headers
Want the LLM to know it's reading a multi-part prompt? Enable appendMetadata to automatically add headers to every chunk:
```javascript
const result = Chunker.split(text, {
  appendMetadata: true
});

// Chunk 1 will start with: "[Part 1/5]"
// Chunk 2 will start with: "[Part 2/5]"
// ... etc
```

⚙️ API Configuration
| Option | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `maxSize` | `number` | `2000` | Max characters per chunk. |
| `overlap` | `number` | `0` | Characters to repeat from the previous chunk for context. |
| `strategy` | `string` | `'intelligent'` | `intelligent` (sentence-aware), `hard` (fixed-size), or `delimiter`. |
| `delimiter` | `string` | `'\n\n'` | Custom string to split by (only for the `delimiter` strategy). |
| `appendMetadata` | `boolean` | `false` | Automatically prepend `[Part X/Y]` to each chunk. |
| `tokenEstimator` | `function` | `undefined` | Custom function to estimate token counts. |
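For intuition, the `maxSize`, `overlap`, and `appendMetadata` semantics can be sketched with a toy fixed-size chunker. This is only an illustration of what the options mean, not the library's implementation, and `hardSplit` is a hypothetical name:

```javascript
// Toy illustration of the hard strategy with overlap and metadata headers.
function hardSplit(text, { maxSize = 2000, overlap = 0, appendMetadata = false } = {}) {
  if (overlap >= maxSize) throw new Error('overlap must be smaller than maxSize');
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + maxSize));
    if (start + maxSize >= text.length) break; // last piece reached the end
    start += maxSize - overlap;                // step back `overlap` chars for context
  }
  return appendMetadata
    ? chunks.map((c, i) => `[Part ${i + 1}/${chunks.length}] ${c}`)
    : chunks;
}
```

For example, splitting `'abcdefghij'` with `maxSize: 4, overlap: 1` yields `['abcd', 'defg', 'ghij']`: each chunk starts with the last character of the previous one, which is the coherence the overlap option buys you.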
🏗️ Advanced Integration
Token Estimation
You can integrate any tokenizer like gpt-tokenizer or tiktoken:
```javascript
import { encode } from 'gpt-tokenizer';

const result = Chunker.split(text, {
  tokenEstimator: (t) => encode(t).length
});

console.log(result.chunks[0].tokensEstimate);
```

📈 Performance & Bundle Size
- Core Library: ~1.2 KB (Minified + Gzipped)
- React Hook: ~0.8 KB (Minified + Gzipped)
- Dependencies: 0.
📄 License
MIT © aswintt
