@aibrowser-optimizer/sdk v0.1.0
# AI Browser Token Optimizer SDK (JavaScript/TypeScript)
Reduce your OpenAI and Anthropic costs by 60-90% automatically.
Drop-in replacement for OpenAI and Anthropic clients that automatically compresses long text before sending it to LLMs.
## Features
- ✅ **Drop-in replacement**: change just 2 lines of code
- ✅ **TypeScript support**: full type definitions included
- ✅ **Automatic compression**: no manual work required
- ✅ **60-90% token savings**: typical reduction on long inputs
- ✅ **OpenAI & Anthropic**: works with all models from both providers
- ✅ **Smart detection**: auto-detects content type
- ✅ **Zero config**: works out of the box
## Installation

```bash
npm install @aibrowser/optimizer
```

With OpenAI:

```bash
npm install @aibrowser/optimizer openai
```

With Anthropic:

```bash
npm install @aibrowser/optimizer @anthropic-ai/sdk
```

## Quick Start
### OpenAI (TypeScript)
```typescript
import { OptimizedOpenAI } from '@aibrowser/optimizer';

// Just change these 2 lines:
const client = new OptimizedOpenAI({
  openaiKey: 'sk-...',         // Your OpenAI API key
  optimizerKey: 'your-api-key' // Your AI Browser API key
});

// Use exactly like OpenAI:
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Explain this code: ' + longCodeFile }
  ]
});

console.log(response.choices[0].message.content);
console.log(`Tokens saved: ${client.getTotalTokensSaved()}`);
```

### OpenAI (JavaScript)
```javascript
const { OptimizedOpenAI } = require('@aibrowser/optimizer');

const client = new OptimizedOpenAI({
  openaiKey: 'sk-...',
  optimizerKey: 'your-api-key'
});

// CommonJS has no top-level await, so wrap the call in an async function:
async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'long text...' }]
  });
  console.log(response.choices[0].message.content);
}

main();
```

### Anthropic (TypeScript)
```typescript
import { OptimizedAnthropic } from '@aibrowser/optimizer';

const client = new OptimizedAnthropic({
  anthropicKey: 'sk-ant-...',  // Your Anthropic API key
  optimizerKey: 'your-api-key' // Your AI Browser API key
});

const response = await client.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Analyze these logs: ' + longLogs }
  ]
});

console.log(response.content[0].text);
console.log(`Tokens saved: ${client.getTotalTokensSaved()}`);
```

## Configuration
### Compression Threshold
```typescript
const client = new OptimizedOpenAI({
  openaiKey: '...',
  optimizerKey: '...',
  threshold: 5000 // Only compress inputs longer than 5000 characters
});
```

### Disable Auto-Compression
```typescript
const client = new OptimizedOpenAI({
  openaiKey: '...',
  optimizerKey: '...',
  autoCompress: false // Disable automatic compression
});
```

### Custom Optimizer URL
```typescript
const client = new OptimizedOpenAI({
  openaiKey: '...',
  optimizerKey: '...',
  optimizerUrl: 'http://localhost:3002/v1' // Use a self-hosted instance
});
```

## Examples
### Code Explanation
```typescript
import { OptimizedOpenAI } from '@aibrowser/optimizer';
import fs from 'fs';

const client = new OptimizedOpenAI({
  openaiKey: process.env.OPENAI_KEY,
  optimizerKey: process.env.OPTIMIZER_KEY
});

// Read a large code file (e.g. 10,000 lines)
const code = fs.readFileSync('large_codebase.js', 'utf-8');

// Without optimization: ~30,000 tokens = $0.90
// With optimization:    ~3,000 tokens  = $0.09
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: `Explain this code:\n\n${code}` }]
});

console.log(response.choices[0].message.content);
// Savings: $0.81 (90%)
```

### Log Analysis
```typescript
import { OptimizedOpenAI } from '@aibrowser/optimizer';
import fs from 'fs';

const client = new OptimizedOpenAI({
  openaiKey: process.env.OPENAI_KEY,
  optimizerKey: process.env.OPTIMIZER_KEY
});

const logs = fs.readFileSync('error.log', 'utf-8'); // e.g. 50,000 lines

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: `Find the root cause:\n\n${logs}` }]
});

console.log(response.choices[0].message.content);
```

## Migration from Existing Code
**Before:**
```typescript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'sk-...' });

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: '...' }]
});
```

**After:**
```typescript
import { OptimizedOpenAI } from '@aibrowser/optimizer';

const client = new OptimizedOpenAI({
  openaiKey: 'sk-...',
  optimizerKey: 'your-api-key'
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: '...' }]
});
```

Only 2 lines changed! ✅
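Under the hood, a drop-in wrapper like this typically delegates to the vendor client, compressing long inputs on the way through. Here is a mock sketch of that pattern; every name in it (`MockVendorClient`, `naiveCompress`, `OptimizedMockClient`) is illustrative, not the SDK's actual implementation:

```typescript
// Mock sketch of the drop-in wrapper pattern (NOT the SDK's real internals):
// intercept create(), compress messages above a threshold, then delegate.
type Message = { role: string; content: string };

// Stand-in for a vendor client such as OpenAI or Anthropic.
class MockVendorClient {
  async create(messages: Message[]): Promise<string> {
    // Echo the input length so the effect of compression is visible.
    return `sent ${messages[0].content.length} chars`;
  }
}

// Placeholder compressor: collapse whitespace runs. The real compression is
// a remote service and does far more than this.
function naiveCompress(text: string): string {
  return text.replace(/\s+/g, ' ').trim();
}

class OptimizedMockClient {
  readonly raw = new MockVendorClient(); // escape hatch: no compression

  constructor(private threshold = 2000) {}

  async create(messages: Message[]): Promise<string> {
    const optimized = messages.map((m) =>
      m.content.length > this.threshold
        ? { ...m, content: naiveCompress(m.content) }
        : m
    );
    return this.raw.create(optimized); // delegate to the wrapped client
  }
}

// Usage: a 10-character threshold compresses the padded prompt.
const demo = new OptimizedMockClient(10);
demo.create([{ role: 'user', content: 'explain    this    code' }])
  .then(console.log); // → "sent 17 chars" (23 chars before compression)
```

The `raw` property mirrors the documented escape hatch: call it when you need the unwrapped client with no compression applied.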
## API

### OptimizedOpenAI
```typescript
class OptimizedOpenAI {
  constructor(options: {
    openaiKey: string;
    optimizerKey: string;
    optimizerUrl?: string;  // Default: 'https://api.aibrowser.dev/v1'
    autoCompress?: boolean; // Default: true
    threshold?: number;     // Default: 2000 characters
    baseURL?: string;       // OpenAI base URL
    organization?: string;  // OpenAI organization
  });

  // Get total tokens saved across all requests
  getTotalTokensSaved(): number;

  // OpenAI surface (with auto-compression)
  chat: { completions: { create(...) } };
  completions: OpenAI.Completions;
  embeddings: OpenAI.Embeddings;
  images: OpenAI.Images;
  // ... all other OpenAI methods

  // Direct access to the underlying OpenAI client (no compression)
  raw: OpenAI;
}
```

### OptimizedAnthropic
```typescript
class OptimizedAnthropic {
  constructor(options: {
    anthropicKey: string;
    optimizerKey: string;
    optimizerUrl?: string;  // Default: 'https://api.aibrowser.dev/v1'
    autoCompress?: boolean; // Default: true
    threshold?: number;     // Default: 2000 characters
  });

  // Get total tokens saved across all requests
  getTotalTokensSaved(): number;

  // Anthropic surface (with auto-compression)
  messages: { create(...) };

  // Direct access to the underlying Anthropic client (no compression)
  raw: Anthropic;
}
```

## Use Cases
Perfect for:
- 📊 **Code analysis**: explain large codebases
- 🐛 **Debugging**: analyze error logs
- 📚 **Documentation**: process technical docs
- 🤖 **AI agents**: optimize context for agents
- 💬 **Long conversations**: compress chat history
- 🔍 **Research**: summarize research papers
## Get API Key
Sign up at: https://aibrowser.dev/signup
## Pricing
- **Free tier**: 10,000 compressions/month
- **Pro**: $9/month, unlimited compressions
- **Enterprise**: custom pricing
**ROI:** A typical user saves $50-200/month in LLM costs while paying $9 for the optimizer.
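The arithmetic behind figures like these is plain token-count math. A quick check of the code-explanation example above, at an assumed GPT-4 input rate of 3 cents per 1,000 tokens (prices change, so treat the rate as an assumption):

```typescript
// Input-cost estimate at an assumed rate of 3 cents per 1,000 input tokens.
// Integer cents keep the arithmetic exact; verify current vendor pricing.
const CENTS_PER_1K_INPUT = 3;

function inputCostCents(tokens: number): number {
  return (tokens / 1000) * CENTS_PER_1K_INPUT;
}

// 30,000 tokens uncompressed vs ~3,000 after 90% compression:
console.log(inputCostCents(30_000)); // 90 cents = $0.90 uncompressed
console.log(inputCostCents(3_000));  // 9 cents  = $0.09 compressed
console.log(inputCostCents(30_000) - inputCostCents(3_000)); // 81 cents saved
```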
## FAQ
**Q: Does it work with ESM and CommonJS?** Yes, both are supported.

**Q: Is TypeScript supported?** Yes, full type definitions are included.

**Q: Does it modify the LLM responses?** No, only the input text is compressed.

**Q: What if compression fails?** The client falls back to the original text automatically.
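The fallback described above amounts to a try/catch around the compression call. A minimal sketch, where `callOptimizer` is a hypothetical stand-in for the SDK's real HTTP request (not its actual code):

```typescript
// Sketch of compress-with-fallback: if the optimizer service is unreachable,
// the request proceeds with the original, uncompressed text.
// `callOptimizer` is a hypothetical stand-in for the real HTTP call.
async function callOptimizer(text: string): Promise<string> {
  throw new Error('optimizer service unavailable'); // simulate an outage
}

async function compressOrFallback(text: string): Promise<string> {
  try {
    return await callOptimizer(text);
  } catch {
    return text; // fall back to the original input
  }
}

compressOrFallback('some very long prompt...').then(console.log);
// → "some very long prompt..." (unchanged, because the call failed)
```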
## License
MIT License
## Support
- 📧 Email: [email protected]
- 💬 Discord: Join our community
- 📖 Docs: https://docs.aibrowser.dev
- 🐛 Issues: https://github.com/yourusername/aibrowser-optimizer/issues
Made with ❤️ by the AI Browser team
