
@context-chef/core

v3.0.1

Published

Context compiler for TypeScript/JavaScript AI agents. Automatically compiles agent state into optimized LLM payloads with history compression, tool pruning, multi-provider support, and more.


@context-chef/core


Context compiler for TypeScript/JavaScript AI agents.

ContextChef solves the most common context engineering problems in AI agent development: conversations too long for the model to remember, too many tools causing hallucinations, having to rewrite prompts when switching providers, and state drift in long-running tasks. It doesn't take over your control flow — it just compiles your state into an optimal payload before each LLM call.

Chinese documentation (中文文档) | GitHub

Looking for zero-config AI SDK integration? See @context-chef/ai-sdk-middleware

Blog Series

  1. Why "Compile" Your Context
  2. Janitor — Separating Trigger Logic from Compression Policy
  3. Pruner — Decoupling Tool Registration from Routing
  4. Offloader/VFS — Relocate Information, Don't Destroy It
  5. Core Memory — Zero-Cost Reads, Structured Writes
  6. Snapshot & Restore — Capture Everything That Determines the Next Compile
  7. The Provider Adapter Layer — Let Differences Stop at Compile Time
  8. Five Extension Points in the Compile Pipeline

Features

  • Conversations too long? — Automatically compress history, preserve recent memory, delegate old messages to a small model for summarization
  • Too many tools? — Dynamically prune the tool list per task, or use a two-layer architecture (stable namespaces + on-demand loading) to eliminate tool hallucinations
  • Switching providers? — Same prompt architecture compiles to OpenAI / Anthropic / Gemini with automatic prefill, cache, and tool call format adaptation
  • Long tasks drifting? — Zod schema-based state injection forces the model to stay aligned with the current task on every call
  • Terminal output too large? — Auto-truncate and offload to VFS, keeping error lines + a context:// URI pointer for on-demand retrieval
  • Can't remember across sessions? — Memory lets the model persist key information (project rules, user preferences) via tool calls, auto-injected on the next session
  • Need to rollback? — Snapshot & Restore captures and rolls back full context state for branching and exploration
  • Need external context? — onBeforeCompile hook lets you inject RAG results, AST snippets, or MCP queries before compilation
  • Need observability? — Unified event system (chef.on('compress', ...)) for logging, metrics, and debugging across all internal modules

Installation

npm install @context-chef/core zod

Quick Start

import { ContextChef } from "@context-chef/core";
import { z } from "zod";

const TaskSchema = z.object({
  activeFile: z.string(),
  todo: z.array(z.string()),
});

const chef = new ContextChef({
  janitor: {
    contextWindow: 200000,
    compressionModel: async (msgs) => callGpt4oMini(msgs),
  },
});

const payload = await chef
  .setSystemPrompt([
    {
      role: "system",
      content: "You are an expert coder.",
      _cache_breakpoint: true,
    },
  ])
  .setHistory(conversationHistory)
  .setDynamicState(TaskSchema, {
    activeFile: "auth.ts",
    todo: ["Fix login bug"],
  })
  .withGuardrails({
    enforceXML: { outputTag: "response" },
    prefill: "<thinking>\n1.",
  })
  .compile({ target: "anthropic" });

const response = await anthropic.messages.create(payload);

API Reference

new ContextChef(config?)

const chef = new ContextChef({
  vfs?: { threshold?: number, storageDir?: string },
  janitor?: JanitorConfig,
  pruner?: { strategy?: 'union' | 'intersection' },
  memory?: MemoryConfig,
  transformContext?: (messages: Message[]) => Message[] | Promise<Message[]>,
  onBeforeCompile?: (context: BeforeCompileContext) => string | null | Promise<string | null>,
});

Context Building

chef.setSystemPrompt(messages): this

Sets the static system prompt layer. Cached prefix — should rarely change.

chef.setSystemPrompt([
  {
    role: "system",
    content: "You are an expert coder.",
    _cache_breakpoint: true,
  },
]);

_cache_breakpoint: true tells the Anthropic adapter to inject cache_control: { type: 'ephemeral' }.

chef.setHistory(messages): this

Sets the conversation history. Janitor compresses automatically on compile().

chef.setDynamicState(schema, data, options?): this

Injects Zod-validated state as XML into the context.

const TaskSchema = z.object({
  activeFile: z.string(),
  todo: z.array(z.string()),
});

chef.setDynamicState(TaskSchema, { activeFile: "auth.ts", todo: ["Fix bug"] });
// placement defaults to 'last_user' (injected into the last user message)
// use { placement: 'system' } for a standalone system message
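To picture what the injection produces, here is a hypothetical serializer for the state-as-XML idea. The library's actual tag names and formatting are not documented here, so treat this purely as an illustration:

```typescript
// Hypothetical sketch: Zod-validated state rendered as an XML block.
// Tag names (dynamic_state) are assumptions, not the library's real output.
function stateToXml(
  data: Record<string, unknown>,
  root = "dynamic_state",
): string {
  const fields = Object.entries(data)
    .map(([key, value]) => `  <${key}>${JSON.stringify(value)}</${key}>`)
    .join("\n");
  return `<${root}>\n${fields}\n</${root}>`;
}

// stateToXml({ activeFile: "auth.ts", todo: ["Fix bug"] }) produces:
// <dynamic_state>
//   <activeFile>"auth.ts"</activeFile>
//   <todo>["Fix bug"]</todo>
// </dynamic_state>
```

Re-injecting this block on every call is what keeps the model aligned with the current task state.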

chef.withGuardrails(options): this

Applies output format guardrails and optional prefill.

chef.withGuardrails({
  enforceXML: { outputTag: "final_code" }, // wraps output rules in EPHEMERAL_MESSAGE
  prefill: "<thinking>\n1.", // trailing assistant message (auto-degraded for OpenAI/Gemini)
});

chef.compile(options?): Promise<TargetPayload>

Compiles everything into a provider-ready payload. Triggers Janitor compression. Registered tools are auto-included.

const openaiPayload = await chef.compile({ target: "openai" }); // OpenAIPayload
const anthropicPayload = await chef.compile({ target: "anthropic" }); // AnthropicPayload
const geminiPayload = await chef.compile({ target: "gemini" }); // GeminiPayload

History Compression (Janitor)

Janitor provides two compression paths. Choose the one that fits your setup:

Path 1: Tokenizer (precise control)

Provide your own token counting function for precise per-message calculation. Janitor preserves recent messages that fit within contextWindow * preserveRatio and compresses the rest.

const chef = new ContextChef({
  janitor: {
    contextWindow: 200000,
    tokenizer: (msgs) =>
      msgs.reduce((sum, m) => sum + encode(m.content).length, 0),
    preserveRatio: 0.8, // keep 80% of contextWindow for recent messages (default)
    compressionModel: async (msgs) => callGpt4oMini(msgs),
    onCompress: async (summary, count) => {
      await db.saveCompression(sessionId, summary, count);
    },
  },
});
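To build intuition for the budget arithmetic on this path, here is the split in isolation: walk the history newest-first, keep messages until they exceed contextWindow * preserveRatio, and hand everything older to the compressor. This is an illustrative sketch, not the library's internal implementation:

```typescript
// Illustrative only: how a preserveRatio budget partitions history.
// Recent messages are kept (newest-first) while they fit the budget;
// everything older is sent to the compression model.
function splitForCompression<T>(
  history: T[],
  tokensOf: (msg: T) => number,
  contextWindow: number,
  preserveRatio = 0.8,
): { toCompress: T[]; toKeep: T[] } {
  const budget = contextWindow * preserveRatio;
  let used = 0;
  let cut = history.length; // index where the preserved region starts
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = tokensOf(history[i]);
    if (used + cost > budget) break;
    used += cost;
    cut = i;
  }
  return { toCompress: history.slice(0, cut), toKeep: history.slice(cut) };
}
```

With a window of 250 tokens, a ratio of 1.0, and three 100-token messages, the newest two fit the budget and the oldest one is compressed.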

Path 2: reportTokenUsage (simple, no tokenizer needed)

Most LLM APIs return token usage in their response. Feed that value back — when it exceeds contextWindow, Janitor compresses everything except the last N messages.

const chef = new ContextChef({
  janitor: {
    contextWindow: 200000,
    preserveRecentMessages: 1,       // keep last 1 message on compression (default)
    compressionModel: async (msgs) => callGpt4oMini(msgs),
  },
});

// After each LLM call:
const response = await openai.chat.completions.create({ ... });
chef.reportTokenUsage(response.usage.prompt_tokens);

Note: Without a compressionModel, old messages are discarded with no summary. A console warning is printed at construction time if neither tokenizer nor compressionModel is provided.

JanitorConfig

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| contextWindow | number | required | Model's context window size (tokens). Compression triggers when usage exceeds this. |
| tokenizer | (msgs: Message[]) => number | — | Enables the tokenizer path for precise per-message token calculation. |
| preserveRatio | number | 0.8 | [Tokenizer path] Ratio of contextWindow to preserve for recent messages. |
| preserveRecentMessages | number | 1 | [reportTokenUsage path] Number of recent messages to keep when compressing. |
| compressionModel | (msgs: Message[]) => Promise&lt;string&gt; | — | Async hook to summarize old messages via a low-cost LLM. |
| onCompress | (summary, count) => void | — | Fires after compression with the summary message and truncated count. |
| onBudgetExceeded | (history, tokenInfo) => Message[] \| null | — | Fires before compression. Return modified history to intervene, or null to proceed normally. |

chef.reportTokenUsage(tokenCount): this

Feed the API-reported token count. On the next compile(), if this value exceeds contextWindow, compression is triggered. In the tokenizer path, the higher of the local calculation and the fed value is used.

const response = await openai.chat.completions.create({ ... });
chef.reportTokenUsage(response.usage.prompt_tokens);
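The trigger condition described above is simple enough to state directly. A sketch of the logic (an illustration, not the library's code): compression fires when the effective count exceeds the window, where the effective count on the tokenizer path is the max of the local calculation and the reported value.

```typescript
// Illustrative trigger check, mirroring the rule described above.
// localCount is present only on the tokenizer path.
function shouldCompress(
  reported: number,
  contextWindow: number,
  localCount?: number,
): boolean {
  const effective = Math.max(reported, localCount ?? 0);
  return effective > contextWindow;
}
```

So a reported 250k against a 200k window triggers compression even if the local tokenizer undercounts, and vice versa.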

onBudgetExceeded hook

Fires when the token budget is exceeded, before automatic compression. Return a modified Message[] to replace the history (e.g., offload tool results to VFS), or return null to let default compression proceed.

const chef = new ContextChef({
  janitor: {
    contextWindow: 200000,
    tokenizer: (msgs) => countTokens(msgs),
    onBudgetExceeded: (history, { currentTokens, limit }) => {
      // Example: offload large tool results to VFS before compression
      return history.map((msg) =>
        msg.role === "tool" && msg.content.length > 5000
          ? { ...msg, content: chef.offload(msg.content) }
          : msg,
      );
    },
  },
});

chef.clearHistory(): this

Explicitly clear history and reset Janitor state when switching topics or completing sub-tasks.


Large Output Offloading (Offloader / VFS)

// Offload if content exceeds threshold; preserves last 2000 chars by default
const safeLog = chef.offload(rawTerminalOutput);
history.push({ role: "tool", content: safeLog, tool_call_id: "call_123" });
// safeLog: original content if small, or truncated with context://vfs/ URI

// Preserve head (first 500 chars) + tail (last 1000 chars), snapped to line boundaries
const safeOutput = chef.offload(content, { headChars: 500, tailChars: 1000 });

// No preview content — just truncation notice + URI
const safeDoc = chef.offload(largeFileContent, { headChars: 0, tailChars: 0 });

// Override threshold per call
const safeOutput2 = chef.offload(content, { threshold: 2000, tailChars: 500 });
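The head/tail shape produced by offloading can be sketched as follows. This is an illustration of the truncation idea only: the real offloader also snaps to line boundaries, persists the full content to the VFS, and emits a real context:// URI.

```typescript
// Illustrative truncation: keep a head and tail window, replace the middle
// with a notice plus a pointer URI. The uri parameter is a placeholder for
// what the real offloader would generate.
function truncateHeadTail(
  content: string,
  headChars: number,
  tailChars: number,
  uri: string,
): string {
  const head = headChars > 0 ? content.slice(0, headChars) : "";
  const tail = tailChars > 0 ? content.slice(-tailChars) : "";
  const omitted = content.length - head.length - tail.length;
  return `${head}\n[... ${omitted} chars offloaded to ${uri} ...]\n${tail}`;
}
```

With headChars: 0, tailChars: 0 this degenerates to the "truncation notice + URI" form shown above.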

Register a tool for the LLM to read full content when needed:

// In your tool handler:
import { Offloader } from "@context-chef/core";
const offloader = new Offloader({ storageDir: ".context_vfs" });
const fullContent = offloader.resolve(uri);

Tool Management (Pruner)

Flat Mode

chef.registerTools([
  { name: "read_file", description: "Read a file", tags: ["file", "read"] },
  { name: "run_bash", description: "Run a command", tags: ["shell"] },
  {
    name: "get_time",
    description: "Get timestamp" /* no tags = always kept */,
  },
]);

const { tools, removed } = chef.getPruner().pruneByTask("Read the auth.ts file");
// tools: [read_file, get_time]

Also supports allowOnly(names) and pruneByTaskAndAllowlist(task, names).
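As a mental model for the tag matching, something like the following filter reproduces the result shown above. This is an illustration of the idea; the library's actual matcher may be more sophisticated:

```typescript
// Illustrative tag filter: a tool survives if it has no tags (always kept)
// or if any of its tags appears as a word in the task description.
interface ToolDef {
  name: string;
  tags?: string[];
}

function pruneByKeywords(tools: ToolDef[], task: string): ToolDef[] {
  const words = task.toLowerCase().split(/\W+/);
  return tools.filter(
    (tool) => !tool.tags || tool.tags.some((tag) => words.includes(tag)),
  );
}
```

For the task "Read the auth.ts file", this keeps read_file (tag "file" matches) and get_time (no tags), and drops run_bash.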

Namespace + Lazy Loading (Two-Layer Architecture)

Layer 1 — Namespaces: Core tools grouped into stable tool definitions. The tool list never changes across turns.

Layer 2 — Lazy Loading: Long-tail tools registered as a lightweight XML directory. The LLM loads full schemas on demand via load_toolkit.

// Layer 1: Stable namespace tools
chef.registerNamespaces([
  {
    name: "file_ops",
    description: "File system operations",
    tools: [
      {
        name: "read_file",
        description: "Read a file",
        parameters: { path: { type: "string" } },
      },
      {
        name: "write_file",
        description: "Write to a file",
        parameters: { path: { type: "string" }, content: { type: "string" } },
      },
    ],
  },
  {
    name: "terminal",
    description: "Shell command execution",
    tools: [
      {
        name: "run_bash",
        description: "Execute a command",
        parameters: { command: { type: "string" } },
      },
    ],
  },
]);

// Layer 2: On-demand toolkits
chef.registerToolkits([
  {
    name: "Weather",
    description: "Weather forecast APIs",
    tools: [
      /* ... */
    ],
  },
  {
    name: "Database",
    description: "SQL query and schema inspection",
    tools: [
      /* ... */
    ],
  },
]);

// Compile — tools: [file_ops, terminal, load_toolkit] (always stable)
const { tools, directoryXml } = chef.getPruner().compile();
// directoryXml: inject into system prompt so LLM knows available toolkits

Agent Loop integration:

for (const toolCall of response.tool_calls) {
  if (chef.getPruner().isNamespaceCall(toolCall)) {
    // Route namespace call to real tool
    const { toolName, args } = chef.getPruner().resolveNamespace(toolCall);
    const result = await executeTool(toolName, args);
  } else if (chef.getPruner().isToolkitLoader(toolCall)) {
    // LLM requested a toolkit — expand and re-call
    const parsed = JSON.parse(toolCall.function.arguments);
    const newTools = chef.getPruner().extractToolkit(parsed.toolkit_name);
    // Merge newTools into the next LLM request
  }
}

Memory

Persistent key-value memory that survives across sessions. Memory is modified via tool calls (create_memory / modify_memory), which are auto-injected into the payload on compile().

import { InMemoryStore, VFSMemoryStore } from "@context-chef/core";

const chef = new ContextChef({
  memory: {
    store: new InMemoryStore(), // ephemeral (testing)
    // store: new VFSMemoryStore(dir),   // persistent (production)
  },
});

// In your agent loop, intercept memory tool calls:
for (const toolCall of response.tool_calls) {
  if (toolCall.function.name === "create_memory") {
    const { key, value, description } = JSON.parse(toolCall.function.arguments);
    await chef.getMemory().createMemory(key, value, description);
  } else if (toolCall.function.name === "modify_memory") {
    const { action, key, value, description } = JSON.parse(toolCall.function.arguments);
    if (action === "update") {
      await chef.getMemory().updateMemory(key, value, description);
    } else {
      await chef.getMemory().deleteMemory(key);
    }
  }
}

// Direct read/write (developer use, bypasses validation hooks)
await chef.getMemory().set("persona", "You are a senior engineer", {
  description: "The agent's persona and role",
});
const value = await chef.getMemory().get("persona");

// On compile():
// - Memory tools (create_memory, modify_memory) are auto-injected into payload.tools
// - Existing memories are injected as <memory> XML between systemPrompt and history

Snapshot & Restore

Capture and rollback full context state for branching or error recovery.

const snap = chef.snapshot("before risky tool call");

// ... agent executes tool, something goes wrong ...

chef.restore(snap); // rolls back everything: history, dynamic state, janitor state, memory

Lifecycle Events

Unified event system for observability across all internal modules. Subscribe via chef.on(), unsubscribe via chef.off().

// Log when history gets compressed
chef.on('compress', ({ summary, truncatedCount }) => {
  console.log(`Compressed ${truncatedCount} messages`);
});

// Track compile metrics
chef.on('compile:done', ({ payload }) => {
  metrics.track('compile', { messageCount: payload.messages.length });
});

// Monitor memory changes
chef.on('memory:changed', ({ type, key, value }) => {
  console.log(`Memory ${type}: ${key}`);
});

Available Events

| Event | Payload | Description |
| --- | --- | --- |
| compile:start | { systemPrompt, history } | Emitted at the start of compile() |
| compile:done | { payload } | Emitted after compile() produces the final payload |
| compress | { summary, truncatedCount } | Emitted after Janitor compresses history |
| memory:changed | { type, key, value, oldValue } | Emitted after any memory mutation (set, delete, expire) |
| memory:expired | MemoryEntry | Emitted when a memory entry expires during compile() |

Events are observation-only — they don't affect control flow. Intercept hooks (onBudgetExceeded, onMemoryUpdate, onBeforeCompile, transformContext) remain as config callbacks.

Events coexist with existing config callbacks: if you provide onCompress in JanitorConfig, it fires first, then the compress event is emitted.


onBeforeCompile Hook

Inject external context (RAG, AST snippets, MCP queries) right before compilation without modifying the message array.

const chef = new ContextChef({
  onBeforeCompile: async (ctx) => {
    const snippets = await vectorDB.search(ctx.dynamicStateXml);
    return snippets.map((s) => s.content).join("\n");
    // Injected as <implicit_context>...</implicit_context> alongside dynamic state
    // Return null to skip injection
  },
});

Target Adapters

| Feature | OpenAI | Anthropic | Gemini |
| --- | --- | --- | --- |
| Format | Chat Completions | Messages API | generateContent |
| Cache breakpoints | Stripped | cache_control: { type: 'ephemeral' } | Stripped (uses separate CachedContent API) |
| Prefill (trailing assistant) | Degraded to [System Note] | Native support | Degraded to [System Note] |
| thinking field | Stripped | Mapped to ThinkingBlockParam | Stripped |
| Tool calls | tool_calls array | tool_use blocks | functionCall parts |

Adapters are selected automatically by compile({ target }). You can also use them standalone:

import { getAdapter } from "@context-chef/core";
const adapter = getAdapter("gemini");
const payload = adapter.compile(messages);
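The prefill row in the table above can be sketched as a tiny transform. This is an illustration of the degradation behavior only; the real adapters perform full message conversion, and the exact wording of the degraded note is an assumption:

```typescript
// Illustrative prefill handling per target: Anthropic accepts a trailing
// assistant message natively; OpenAI and Gemini get a degraded system note.
interface ChatMessage {
  role: string;
  content: string;
}

function applyPrefill(
  messages: ChatMessage[],
  prefill: string,
  target: "openai" | "anthropic" | "gemini",
): ChatMessage[] {
  if (target === "anthropic") {
    return [...messages, { role: "assistant", content: prefill }];
  }
  return [
    ...messages,
    {
      role: "system",
      content: `[System Note] Begin your reply with: ${prefill}`,
    },
  ];
}
```

This is why the same withGuardrails({ prefill }) call is safe across all three targets: the adapter layer absorbs the difference at compile time.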

Skill

ContextChef ships with a Claude Code Skill that helps you integrate the library into your project interactively. The skill analyzes your existing codebase (LLM provider, package manager, project structure) and generates tailored integration code.

Install the Skill

npx skills add MyPrototypeWhat/context-chef

Use

Open Claude Code in your project and type:

/integrate

Claude will:

  1. Detect your setup — which LLM SDK you use (OpenAI / Anthropic / Gemini), package manager, TypeScript vs JavaScript
  2. Ask about your needs — history compression, tool management, memory, VFS offloading, snapshot/restore
  3. Generate integration code — tailored to your project structure and existing agent loop
  4. Explain the architecture — the sandwich model, cache breakpoints, dynamic state placement

License

ISC