
@aibrowser-optimizer/sdk

v0.1.0

Automatically reduce OpenAI and Anthropic token usage by 60-90% with intelligent text compression

AI Browser Token Optimizer SDK (JavaScript/TypeScript)

Reduce your OpenAI and Anthropic costs by 60-90% automatically.

Drop-in replacement for OpenAI and Anthropic clients that automatically compresses long text before sending it to LLMs.

Features

  • Drop-in replacement - Change just 2 lines of code
  • TypeScript support - Full type definitions included
  • Automatic compression - No manual work required
  • 60-90% token savings - Typical reduction
  • Supports OpenAI & Anthropic - Works with all models
  • Smart detection - Auto-detects content type
  • Zero config - Works out of the box

Installation

npm install @aibrowser/optimizer

With OpenAI:

npm install @aibrowser/optimizer openai

With Anthropic:

npm install @aibrowser/optimizer @anthropic-ai/sdk

Quick Start

OpenAI (TypeScript)

import { OptimizedOpenAI } from '@aibrowser/optimizer';

// Just change these 2 lines:
const client = new OptimizedOpenAI({
  openaiKey: 'sk-...',           // Your OpenAI API key
  optimizerKey: 'your-api-key'   // Your AI Browser API key
});

// Use exactly like OpenAI:
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Explain this code: ' + longCodeFile }
  ]
});

console.log(response.choices[0].message.content);
console.log(`Tokens saved: ${client.getTotalTokensSaved()}`);

OpenAI (JavaScript)

const { OptimizedOpenAI } = require('@aibrowser/optimizer');

const client = new OptimizedOpenAI({
  openaiKey: 'sk-...',
  optimizerKey: 'your-api-key'
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'long text...' }]
});

Anthropic (TypeScript)

import { OptimizedAnthropic } from '@aibrowser/optimizer';

const client = new OptimizedAnthropic({
  anthropicKey: 'sk-ant-...',      // Your Anthropic API key
  optimizerKey: 'your-api-key'     // Your AI Browser API key
});

const response = await client.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Analyze these logs: ' + longLogs }
  ]
});

console.log(response.content[0].text);
console.log(`Tokens saved: ${client.getTotalTokensSaved()}`);

Configuration

Compression Threshold

const client = new OptimizedOpenAI({
  openaiKey: '...',
  optimizerKey: '...',
  threshold: 5000  // Only compress if > 5000 characters
});

Disable Auto-Compression

const client = new OptimizedOpenAI({
  openaiKey: '...',
  optimizerKey: '...',
  autoCompress: false  // Disable automatic compression
});

Custom Optimizer URL

const client = new OptimizedOpenAI({
  openaiKey: '...',
  optimizerKey: '...',
  optimizerUrl: 'http://localhost:3002/v1'  // Use local instance
});

Examples

Code Explanation

import { OptimizedOpenAI } from '@aibrowser/optimizer';
import fs from 'fs';

const client = new OptimizedOpenAI({
  openaiKey: process.env.OPENAI_KEY,
  optimizerKey: process.env.OPTIMIZER_KEY
});

// Read large code file
const code = fs.readFileSync('large_codebase.js', 'utf-8');  // 10,000 lines

// Without optimization: ~30,000 tokens = $0.90
// With optimization: ~3,000 tokens = $0.09
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: `Explain this code:\n\n${code}` }]
});

console.log(response.choices[0].message.content);
// Savings: $0.81 (90%)

Log Analysis

import { OptimizedOpenAI } from '@aibrowser/optimizer';
import fs from 'fs';

const client = new OptimizedOpenAI({
  openaiKey: process.env.OPENAI_KEY,
  optimizerKey: process.env.OPTIMIZER_KEY
});

const logs = fs.readFileSync('error.log', 'utf-8');  // 50,000 lines

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: `Find the root cause:\n\n${logs}` }]
});

console.log(response.choices[0].message.content);

Migration from Existing Code

Before:

import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'sk-...' });

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: '...' }]
});

After:

import { OptimizedOpenAI } from '@aibrowser/optimizer';

const client = new OptimizedOpenAI({
  openaiKey: 'sk-...',
  optimizerKey: 'your-api-key'
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: '...' }]
});

Only 2 lines changed!

API

OptimizedOpenAI

class OptimizedOpenAI {
  constructor(options: {
    openaiKey: string;
    optimizerKey: string;
    optimizerUrl?: string;  // Default: 'https://api.aibrowser.dev/v1'
    autoCompress?: boolean; // Default: true
    threshold?: number;     // Default: 2000
    baseURL?: string;       // OpenAI base URL
    organization?: string;  // OpenAI organization
  });

  // Get total tokens saved
  getTotalTokensSaved(): number;

  // Access to OpenAI methods (with auto-compression)
  chat: { completions: { create(...) } };
  completions: OpenAI.Completions;
  embeddings: OpenAI.Embeddings;
  images: OpenAI.Images;
  // ... all other OpenAI methods

  // Direct access to underlying OpenAI client
  raw: OpenAI;
}

OptimizedAnthropic

class OptimizedAnthropic {
  constructor(options: {
    anthropicKey: string;
    optimizerKey: string;
    optimizerUrl?: string;  // Default: 'https://api.aibrowser.dev/v1'
    autoCompress?: boolean; // Default: true
    threshold?: number;     // Default: 2000
  });

  // Get total tokens saved
  getTotalTokensSaved(): number;

  // Access to Anthropic methods (with auto-compression)
  messages: { create(...) };

  // Direct access to underlying Anthropic client
  raw: Anthropic;
}

Use Cases

Perfect for:

  • 📊 Code analysis - Explain large codebases
  • 🐛 Debugging - Analyze error logs
  • 📚 Documentation - Process technical docs
  • 🤖 AI Agents - Optimize context for agents
  • 💬 Long conversations - Compress chat history
  • 🔍 Research - Summarize research papers

Get API Key

Sign up at: https://aibrowser.dev/signup

Pricing

  • Free tier: 10,000 compressions/month
  • Pro: $9/month - Unlimited compressions
  • Enterprise: Custom pricing

ROI: Average user saves $50-200/month in LLM costs, paying only $9 for the optimizer.
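The savings arithmetic can be checked directly. The sketch below assumes the cost model implied by the "Code Explanation" example earlier (GPT-4 input tokens at $0.03 per 1K, 90% compression); those figures come from the example comments, not from published pricing.

```typescript
// Cost model assumed from the example figures above:
// GPT-4 input tokens at $0.03 per 1K, 90% compression.
const PRICE_PER_1K = 0.03;

function monthlySavings(tokensPerMonth: number, compressionRatio: number): number {
  const baseline = (tokensPerMonth / 1000) * PRICE_PER_1K;
  const optimized = baseline * (1 - compressionRatio);
  return baseline - optimized;
}

// The "Code Explanation" example: 30,000 tokens at 90% compression.
console.log(monthlySavings(30_000, 0.9).toFixed(2)); // → "0.81"

// Tokens per month needed for savings to cover the $9 Pro plan at 90% compression:
const breakEvenTokens = (9 / (PRICE_PER_1K * 0.9)) * 1000;
console.log(Math.round(breakEvenTokens)); // → 333333
```

At roughly 333K input tokens a month under these assumptions, the Pro plan pays for itself; heavier usage is where the quoted $50-200/month savings would come from.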

FAQ

Q: Does it work with ESM and CommonJS? A: Yes, both are supported.

Q: TypeScript support? A: Full TypeScript definitions included.

Q: Does it modify the LLM responses? A: No, only input text is compressed.

Q: What if compression fails? A: Falls back to original text automatically.
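The fallback behavior described above can be pictured as a simple wrapper. This is an illustrative sketch only: `compressText` is a hypothetical stand-in for the SDK's internal compression call (which in reality would be a network request), not actual SDK internals.

```typescript
// Illustrative sketch only: `compressText` is a hypothetical stand-in
// for the SDK's internal compression step.
function compressText(text: string): string {
  // Pretend compression: trim trailing spaces and collapse runs of blank lines.
  return text.replace(/[ \t]+$/gm, '').replace(/\n{3,}/g, '\n\n');
}

function compressOrFallback(text: string, threshold = 2000): string {
  if (text.length <= threshold) return text; // below threshold: send unmodified
  try {
    return compressText(text);
  } catch {
    return text; // compression failed: fall back to the original text
  }
}
```

The key property is that a failure in the compression path never blocks the request; the original text is sent unchanged.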

License

MIT License


Made with ❤️ by the AI Browser team