contextomizer v1.0.0
Contextomizer πŸ—œοΈβœ¨


Contextomizer is an ultra-fast, deterministic library for transforming bloated tool outputs, raw API responses, documents, and messy logs into optimized context for AI Agents. πŸ€–πŸš€

If you are building an AI agent, you know the struggle: tools return massive JSONs, error traces are hundreds of lines long, and HTML pages blow up your token budget instantly. Worst of all, you might be leaking API keys in the prompt! 😱

Contextomizer sits between your tools (like MCP servers, OpenAI functions, or Vercel AI SDK tools) and the LLM. It automatically:

  • πŸ“‰ Reduces tokens deterministically without extra LLM calls!
  • 🧹 Removes noise (HTML tags, generic log info).
  • πŸ” Redacts secrets securely before they hit the model.
  • 🧠 Preserves useful information intelligently (errors, structural bounds).
  • 🧩 Integrates seamlessly with AI frameworks!

🎭 Before & After (Example)

Input (Huge messy result with Secrets & Noise):

{
  "user": "Alice",
  "apiKey": "sk-live-123456789",
  "logs": "INFO starting...\nINFO loading x...\nERROR Connection failed at db.js:42\nINFO retry..."
}

Output (Contextomized for LLM):

{"user":"Alice","apiKey":"[REDACTED]","logs":"ERROR Connection failed at db.js:42\n...[LOGS TRUNCATED]"}

(Token cost reduced by 70%. Secrets secured. Noise removed. The AI gets exactly what it needs!)
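The log transformation in the example above can be sketched deterministically in a few lines. Again, this is a hedged illustration of the idea (keep error lines, collapse the rest), not the library's real reducer:

```typescript
// Sketch of a deterministic log reducer: keep ERROR/FATAL lines and
// collapse the remaining noise behind a truncation marker.
function reduceLogs(logs: string): string {
  const lines = logs.split("\n");
  const kept = lines.filter((l) => /^(ERROR|FATAL)\b/.test(l));
  const dropped = lines.length - kept.length;
  return dropped > 0 ? kept.join("\n") + "\n...[LOGS TRUNCATED]" : logs;
}

const raw =
  "INFO starting...\nINFO loading x...\nERROR Connection failed at db.js:42\nINFO retry...";
console.log(reduceLogs(raw));
// ERROR Connection failed at db.js:42
// ...[LOGS TRUNCATED]
```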


πŸ“¦ Installation

npm install contextomizer

πŸš€ Basic Usage

The core of the library is the contextomize function. Just pass it your raw data, set your constraints, and let it do the magic! ✨

import { contextomize } from 'contextomizer';

const data = {
  veryImportantField: "Keep this, it's vital!",
  hugeArray: Array.from({length: 1000}).map((_, i) => ({ id: i, data: "bloat" })),
  secretToken: "Bearer sk-live-abc123def456.789"
};

const result = await contextomize(data, {
  maxTokens: 50, // Keep it tight!
  enableRedaction: true, // Hide those secrets!
  dropKeys: ['hugeArray'] // We don't need this bulk
});

console.log(result.forModel); 
// πŸ‘‰ Output is a clean, redacted string that fits perfectly in your prompt!

console.log(`Saved tokens: ${result.meta.estimatedSavedTokens} πŸ’ͺ`);
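If you are curious how a budget like maxTokens or a figure like estimatedSavedTokens can be computed without an LLM, a common heuristic is roughly four characters per token. This is an assumption for illustration; the library's real counter may use a proper tokenizer:

```typescript
// Rough token estimation sketch: ~4 characters per token is a common
// heuristic. Real token counts depend on the model's tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const before = JSON.stringify({ logs: "x".repeat(400) });
const after = '{"logs":"...[LOGS TRUNCATED]"}';
console.log(estimateTokens(before) - estimateTokens(after)); // tokens saved
```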

πŸ”Œ Advanced Integrations

Contextomizer shines when you plug it straight into your agent workflows! We provide ready-to-use adapters for the most popular ecosystems. 🌍

1. Model Context Protocol (MCP) Server Integration πŸ–₯️

If you are building an MCP Server, your tools return a specific CallToolResult format. Contextomizer has an adapter that wraps your output into the exact format that MCP clients (like Claude Desktop) expect, while applying token budgets!

import { MCPAdapter } from 'contextomizer/adapters/mcp';

const adapter = new MCPAdapter();

// Inside your MCP Server tool handler:
server.setRequestHandler(CallToolRequestSchema, async (request) => {
    try {
        const rawResult = await runMyHeavyDatabaseQuery(request.params.arguments);
        
        // Contextomizer formats it perfectly for MCP!
        return await adapter.decorateCallToolResult(rawResult, {
            maxTokens: 4000,
            enableRedaction: true
        });
    } catch (error) {
        // Formats errors beautifully too!
        return await adapter.decorateCallToolResult(error);
    }
});

2. Vercel AI SDK πŸš€

Wrap your tool definitions effortlessly so the Vercel AI SDK Agent only receives context-optimized results.

import { AISDKToolAdapter } from 'contextomizer/adapters/ai-sdk';
import { tool } from 'ai';
import { z } from 'zod';

const adapter = new AISDKToolAdapter();

const myHeavyTool = tool({
  description: 'Fetches huge system logs',
  parameters: z.object({ target: z.string() }),
  execute: async ({ target }) => {
    const hugeLogData = await fetchLogs(target);
    return hugeLogData; // Normally, this would crash your context window!
  }
});

// Wrap it!
export const optimizedTool = adapter.wrapTool(myHeavyTool, {
  maxTokens: 1000, // Now it will automatically truncate logs!
});

3. OpenAI Function Calling πŸ€–

If you are using the raw OpenAI SDK, you can wrap your function call results before appending them to the message history.

import { OpenAIToolAdapter } from 'contextomizer/adapters/openai';

const adapter = new OpenAIToolAdapter();

const rawResult = await executeRawFunction(toolCall);
const safeString = await adapter.wrapToolResult(rawResult, { maxTokens: 500 });

messages.push({
    role: "tool",
    tool_call_id: toolCall.id,
    content: safeString 
});

πŸ›‘οΈ Content Detection & Reducers

Contextomizer automatically detects what kind of content you throw at it and applies the best reduction strategy for that type! 🧬

  • πŸ“„ JSON: Drops keys, truncates deep nesting, prioritizes defined paths.
  • 🌐 HTML: Strips <script>, <style>, and <svg>, keeping only readable semantic text.
  • πŸ“‹ Logs: Keeps ERROR and FATAL lines, truncates generic INFO spam when over budget!
  • 🚨 Error Traces: Preserves the core error message and root cause, shedding useless stack frame bloat.
  • πŸ“ Plain Text: Intelligent token-aware string truncation.

🧠 Model Assist (Optional AI Overdrive)

While Contextomizer is proudly deterministic and pure by default, sometimes you really need to compress a 50,000-word document into 500 tokens without losing the semantic meaning.

For this, you can plug in any LLM via the Model Assist Provider! 🎩✨

Contextomizer ships with built-in, zero-dependency providers for OpenAI and Anthropic (Claude) that use native fetch under the hood.

import { contextomize } from 'contextomizer';
import { OpenAIProvider, AnthropicProvider } from 'contextomizer/model-assist';

// Using OpenAI
const result = await contextomize(hugeDocument, {
  maxTokens: 500,
  enableModelAssist: true,
  modelAssistProvider: new OpenAIProvider({ 
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o-mini' // Optional, defaults to gpt-4o-mini
  })
});

// Or using Anthropic (Claude)
const claudeResult = await contextomize(hugeDocument, {
  maxTokens: 500,
  enableModelAssist: true,
  modelAssistProvider: new AnthropicProvider({ 
    apiKey: process.env.ANTHROPIC_API_KEY,
    model: 'claude-3-5-haiku-latest' // Optional
  })
});

Example: Implementing a Simple Provider

The core library provides the abstract interface for ModelAssistProvider. You inject your preferred LLM client!

Here is how easily you can build your own ModelAssistProvider to call your company's internal model API, for example:

import { AbstractModelAssistProvider, ModelAssistInput, ModelAssistOutput } from 'contextomizer';

export class MyInternalModelProvider extends AbstractModelAssistProvider {
  async summarize(input: ModelAssistInput): Promise<ModelAssistOutput> {
    // Call your own internal API or any other open-source model endpoint
    const response = await fetch('https://api.mycompany.internal/v1/summarize', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: input.text,
        maxTokens: input.targetTokens,
        contextType: input.detectedType
      })
    });
    
    const data = await response.json();
    return { 
      text: data.summary,
      estimatedTokens: data.usedTokens
    };
  }
}

πŸ› οΈ Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| maxTokens | number | undefined | The strict token budget for the result. |
| enableRedaction | boolean | false | Scans and masks secrets (API keys, tokens, etc.). |
| dropKeys | string[] | [] | JSON keys to blindly drop during reduction. |
| keepKeys | string[] | [] | JSON keys to forcefully keep at all costs. |
| logger | ILogger | console | Inject a custom logger to trace the inner reduction pipeline. |
| enableModelAssist | boolean | false | Falls back to an LLM provider if deterministic reduction fails. |
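Since the default logger is console, the injected ILogger is presumably console-like. Here is a hedged sketch of a buffering logger you might pass via the logger option; the exact ILogger shape is an assumption based on that default:

```typescript
// Assumed console-like logger interface (debug/info/warn/error), since the
// documented default for `logger` is `console`.
interface ILogger {
  debug(msg: string): void;
  info(msg: string): void;
  warn(msg: string): void;
  error(msg: string): void;
}

// Collects pipeline trace messages instead of printing them.
class BufferedLogger implements ILogger {
  readonly entries: string[] = [];
  debug(msg: string) { this.entries.push(`[debug] ${msg}`); }
  info(msg: string)  { this.entries.push(`[info] ${msg}`); }
  warn(msg: string)  { this.entries.push(`[warn] ${msg}`); }
  error(msg: string) { this.entries.push(`[error] ${msg}`); }
}

const logger = new BufferedLogger();
logger.info("reduction pipeline started");
console.log(logger.entries[0]); // [info] reduction pipeline started
```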


🀝 Contributing

We love contributions! Feel free to open issues or PRs. Please run the tests and linter before submitting:

npm run test
npm run lint

πŸ“œ License

MIT License. See LICENSE for more details. Build safely! 🏰✨