@context-chef/ai-sdk-middleware

Vercel AI SDK middleware powered by context-chef. Transparent history compression, tool result truncation, and token budget management — zero code changes required.

Installation

npm install @context-chef/ai-sdk-middleware ai

Quick Start

import { withContextChef } from '@context-chef/ai-sdk-middleware';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const model = withContextChef(openai('gpt-4o'), {
  contextWindow: 128_000,
  compress: { model: openai('gpt-4o-mini') },
  truncate: { threshold: 5000, headChars: 500, tailChars: 1000 },
});

// Everything below stays exactly the same — works with generateText and streamText
const result = await generateText({
  model,
  messages: conversationHistory,
  tools: myTools,
});

That's it. History compression, tool result truncation, and token budget tracking happen automatically behind the scenes.
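
Streaming works the same way. A minimal sketch, assuming the wrapped model and conversationHistory from the Quick Start and a recent AI SDK version where streamText returns its result synchronously:

import { streamText } from 'ai';

// Compression and truncation run before the request is sent; token
// usage is extracted from the stream once it finishes.
const stream = streamText({
  model,
  messages: conversationHistory,
  tools: myTools,
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}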

Features

History Compression

When the conversation exceeds the token budget, the middleware compresses older messages to make room. Two modes:

Without a compression model (default) — old messages are discarded and only recent messages are kept:

const model = withContextChef(openai('gpt-4o'), {
  contextWindow: 128_000,
});

With a compression model — old messages are summarized by a cheap model before being replaced:

const model = withContextChef(openai('gpt-4o'), {
  contextWindow: 128_000,
  compress: {
    model: openai('gpt-4o-mini'),  // cheap model for summarization
    preserveRatio: 0.8,             // keep 80% of context for recent messages
  },
});

Tool Result Truncation

Large tool outputs (terminal logs, API responses) are automatically truncated while preserving the head and tail:

const model = withContextChef(openai('gpt-4o'), {
  contextWindow: 128_000,
  truncate: {
    threshold: 5000,   // truncate tool results over 5000 chars
    headChars: 500,    // preserve first 500 chars
    tailChars: 1000,   // preserve last 1000 chars
  },
});

Optionally persist the original content with a storage adapter so the LLM can retrieve it later via a context://vfs/ URI:

import { FileSystemAdapter } from '@context-chef/core';

const model = withContextChef(openai('gpt-4o'), {
  contextWindow: 128_000,
  truncate: {
    threshold: 5000,
    headChars: 500,
    tailChars: 1000,
    storage: new FileSystemAdapter('.context_vfs'), // or your own DB adapter
  },
});
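
The FileSystemAdapter writes originals to disk. If you would rather persist them elsewhere, you can supply your own adapter. The shape below is a hypothetical illustration (an in-memory store with write/read methods keyed by URI); check the VFSStorageAdapter type in @context-chef/core for the actual interface:

// Hypothetical sketch only: the method names here are assumed, not
// taken from the library. Consult VFSStorageAdapter in @context-chef/core.
class InMemoryAdapter {
  private store = new Map<string, string>();

  async write(uri: string, content: string): Promise<void> {
    this.store.set(uri, content);
  }

  async read(uri: string): Promise<string | undefined> {
    return this.store.get(uri);
  }
}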

Token Budget Tracking

The middleware automatically extracts token usage from generateText and streamText responses and feeds it back to the compression engine. No manual reportTokenUsage() calls needed.
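
In practice, reusing one wrapped model across turns is all the bookkeeping required. A minimal multi-turn sketch, assuming the wrapped model from the Quick Start:

// First turn: usage from this response is recorded internally.
const first = await generateText({
  model,
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Later turn: the recorded usage informs this call's budget check, and
// compression runs transparently if the history is over budget.
const second = await generateText({
  model,
  messages: [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: first.text },
    { role: 'user', content: 'Tell me more.' },
  ],
});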

API

withContextChef(model, options)

Wraps an AI SDK language model with context-chef middleware.

import { withContextChef } from '@context-chef/ai-sdk-middleware';

const wrappedModel = withContextChef(model, options);

Parameters:

| Option | Type | Required | Description |
|---|---|---|---|
| contextWindow | number | Yes | Model's context window size in tokens |
| compress | CompressOptions | No | Enable LLM-based compression |
| compress.model | LanguageModelV3 | Yes (if compress) | Cheap model for summarization |
| compress.preserveRatio | number | No | Ratio of context to preserve (default: 0.8) |
| truncate | TruncateOptions | No | Enable tool result truncation |
| truncate.threshold | number | Yes (if truncate) | Character count to trigger truncation |
| truncate.headChars | number | No | Characters to preserve from start (default: 0) |
| truncate.tailChars | number | No | Characters to preserve from end (default: 1000) |
| truncate.storage | VFSStorageAdapter | No | Storage adapter to persist original content before truncation |
| tokenizer | (msgs) => number | No | Custom tokenizer for precise counting |
| onCompress | (summary, count) => void | No | Hook called after compression |

Returns: LanguageModelV3 — a wrapped model that can be used anywhere the original model was used.
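
The optional hooks from the table compose with the rest of the config. A sketch using a rough character-based tokenizer (the common chars/4 heuristic, not something the library ships) and an onCompress logger:

const model = withContextChef(openai('gpt-4o'), {
  contextWindow: 128_000,
  compress: { model: openai('gpt-4o-mini') },
  // Rough heuristic: ~4 characters per token. Swap in a real tokenizer
  // for precise counting.
  tokenizer: (msgs) =>
    Math.ceil(msgs.reduce((sum, m) => sum + JSON.stringify(m).length, 0) / 4),
  // Called after each compression pass with the summary and the number
  // of messages it replaced (see the options table above).
  onCompress: (summary, count) =>
    console.log(`Compressed ${count} messages into a summary`),
});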

createMiddleware(options)

Creates a raw LanguageModelMiddleware if you want to apply it yourself via wrapLanguageModel:

import { createMiddleware } from '@context-chef/ai-sdk-middleware';
import { openai } from '@ai-sdk/openai';
import { wrapLanguageModel } from 'ai';

const middleware = createMiddleware({ contextWindow: 128_000 });
const model = wrapLanguageModel({ model: openai('gpt-4o'), middleware });

fromAISDK(prompt) / toAISDK(messages)

Low-level converters between AI SDK LanguageModelV3Prompt and context-chef Message[] IR. Useful if you want to use context-chef modules directly with AI SDK message formats.

import { fromAISDK, toAISDK } from '@context-chef/ai-sdk-middleware';

const irMessages = fromAISDK(aiSdkPrompt);
// ... process with context-chef modules ...
const processedPrompt = toAISDK(irMessages);

How It Works

generateText / streamText ({ model: wrappedModel, messages })
  |
  v
transformParams (before LLM call)
  1. Truncate large tool results (if configured)
     - Optionally persist originals to storage adapter
  2. Convert AI SDK messages -> context-chef IR
  3. Run Janitor compression (if over token budget)
  4. Convert back to AI SDK messages
  |
  v
LLM call executes normally
  |
  v
wrapGenerate / wrapStream (after LLM call)
  5. Extract token usage from response
  6. Feed back to Janitor for next call's budget check
  |
  v
Result returned unchanged

The middleware is stateful — it tracks token usage across calls to know when compression is needed. Create one wrapped model per conversation/session.
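
A simple way to respect that in a server is a per-session factory. A hypothetical sketch, keying wrapped models by session ID:

// Each conversation gets its own wrapped model so token-usage state
// is never shared across users.
const sessionModels = new Map<string, ReturnType<typeof withContextChef>>();

function modelForSession(sessionId: string) {
  let m = sessionModels.get(sessionId);
  if (!m) {
    m = withContextChef(openai('gpt-4o'), { contextWindow: 128_000 });
    sessionModels.set(sessionId, m);
  }
  return m;
}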

Need More Control?

The middleware covers the most common use case: transparent compression and truncation. For advanced features like dynamic state injection, tool namespaces, memory, or snapshot/restore, use @context-chef/core directly.

License

ISC