
@tscg/core · v1.3.0 · 344 downloads

Deterministic prompt compiler for tool-schema compression. Implements all 8 TSCG operators (SDM, TAS, DRO, CFL, CFO, CAS, SAD-F, CCP). Reduces LLM tool-definition overhead by 71.7%. Zero dependencies.

@tscg/core

Deterministic prompt compiler for tool-schema compression. Reduces LLM tool-definition overhead by 71.7% with zero accuracy loss.

Zero runtime dependencies. Compresses 50 tools in 2.4 ms. 34.7 KB bundle (11.7 KB gzipped).

Installation

npm install @tscg/core
pnpm add @tscg/core
yarn add @tscg/core

Requirements: Node.js >= 18.0.0

Quick Start

import { compress } from '@tscg/core';

const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string', description: 'City name or coordinates' },
          units: { type: 'string', enum: ['celsius', 'fahrenheit'], description: 'Temperature units' },
        },
        required: ['location'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'send_email',
      description: 'Send an email to a specified recipient with a subject and body',
      parameters: {
        type: 'object',
        properties: {
          to: { type: 'string', description: 'Recipient email address' },
          subject: { type: 'string', description: 'Email subject line' },
          body: { type: 'string', description: 'Email body content' },
          cc: { type: 'string', description: 'CC recipient email address' },
        },
        required: ['to', 'subject', 'body'],
      },
    },
  },
];

const result = compress(tools, { model: 'claude-sonnet', profile: 'balanced' });

console.log(result.compressed);
// get_weather(location:str!, units?:str[celsius|fahrenheit]) -> weather data
// send_email(to:str!, subject:str!, body:str!, cc?:str) -> send result

console.log(result.metrics.tokens.savingsPercent); // ~71%
console.log(result.metrics.compressionTimeMs);     // <1ms
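To make the output format concrete, here is a minimal sketch of how an OpenAI-style parameter schema could be rendered into the compact signature style shown above (`!` marks required parameters, `?` marks optional ones, enums become `[a|b]`). The `toSignature` helper is illustrative only, not the library's implementation.

```typescript
// Illustrative sketch: render an OpenAI-style function schema into the
// compact signature format shown above. Not @tscg/core's actual code.
type Prop = { type: string; enum?: string[] };
type FnSchema = {
  name: string;
  parameters: { properties: Record<string, Prop>; required?: string[] };
};

// Abbreviated type names, per the ATA principle.
const SHORT: Record<string, string> = {
  string: 'str', integer: 'int', number: 'num', boolean: 'bool',
};

function toSignature(fn: FnSchema): string {
  const required = new Set(fn.parameters.required ?? []);
  const args = Object.entries(fn.parameters.properties).map(([name, p]) => {
    const t = SHORT[p.type] ?? p.type;
    const en = p.enum ? `[${p.enum.join('|')}]` : '';
    // Required params get a trailing '!', optional ones a '?' after the name.
    return required.has(name) ? `${name}:${t}${en}!` : `${name}?:${t}${en}`;
  });
  return `${fn.name}(${args.join(', ')})`;
}
```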

API Reference

compress(tools, options?)

The primary entry point. Compresses an array of tool definitions.

import { compress } from '@tscg/core';

const result = compress(tools, {
  model: 'claude-sonnet',
  profile: 'balanced',
});

Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| tools | AnyToolDefinition[] | Array of tool definitions (OpenAI or Anthropic format) |
| options | CompilerOptions | Optional compression configuration |

Returns: CompressedResult

compressToolSchema(tool, options?)

Convenience wrapper for compressing a single tool.

import { compressToolSchema } from '@tscg/core';

const result = compressToolSchema(weatherTool, { model: 'gpt-5' });

compressBatch(tools, models)

Compress the same tool catalog for multiple models at once.

import { compressBatch } from '@tscg/core';

const results = compressBatch(tools, ['claude-sonnet', 'gpt-5', 'mistral-7b']);

for (const [model, result] of results) {
  console.log(`${model}: ${result.metrics.tokens.savingsPercent}% savings`);
}

TSCGCompiler

A reusable compiler class for making repeated compression calls with a shared configuration.

import { TSCGCompiler } from '@tscg/core';

const compiler = new TSCGCompiler({
  model: 'claude-sonnet',
  profile: 'aggressive',
  principles: { sad: true },
});

const result1 = compiler.compile(tool1);
const result2 = compiler.compileMany([tool1, tool2, tool3]);
const config = compiler.getMetrics();

Methods:

| Method | Description |
|--------|-------------|
| compile(tool) | Compress a single tool definition |
| compileMany(tools) | Compress a catalog of tools (leverages cross-tool redundancies) |
| getMetrics() | Get current compiler configuration (model, profile, principles) |

getTokenizerProfile(model)

Get the tokenizer profile for a specific model target.

import { getTokenizerProfile } from '@tscg/core';

const profile = getTokenizerProfile('claude-sonnet');
console.log(profile.charsPerToken);    // 4.0
console.log(profile.charsPerTokenCode); // 2.8

listProfiles()

List all available tokenizer profiles.

import { listProfiles } from '@tscg/core';

for (const profile of listProfiles()) {
  console.log(`${profile.model}: ${profile.charsPerToken} chars/token`);
}

Utility Functions

import { estimateTokens, formatSavings } from '@tscg/core';

const tokens = estimateTokens('some text', 'claude-sonnet');
const display = formatSavings(1000, 287); // "71.3% savings (1000 -> 287 tokens)"
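As a rough mental model, a chars-per-token heuristic like the profile values above (e.g. 4.0 chars/token for claude-sonnet) is enough to reproduce these numbers. The sketch below is an illustrative re-implementation under that assumption, not the library's internals.

```typescript
// Illustrative sketch of a chars-per-token estimate and the savings display.
// Assumes the 4.0 chars/token figure shown for claude-sonnet above.
function estimateTokensApprox(text: string, charsPerToken = 4.0): number {
  return Math.ceil(text.length / charsPerToken);
}

function formatSavingsApprox(original: number, compressed: number): string {
  const pct = (((original - compressed) / original) * 100).toFixed(1);
  return `${pct}% savings (${original} -> ${compressed} tokens)`;
}
```

For example, `formatSavingsApprox(1000, 287)` yields the "71.3% savings (1000 -> 287 tokens)" string shown above, since (1000 − 287) / 1000 = 71.3%.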

Compiler Options

interface CompilerOptions {
  /** Target model for tokenizer-specific optimization */
  model?: ModelTarget;

  /** Compression aggressiveness: 'conservative' | 'balanced' | 'aggressive' */
  profile?: string;

  /** Toggle individual TSCG principles */
  principles?: {
    ata?: boolean;  // Abbreviated Type Annotations (str, int, bool)
    cfl?: boolean;  // Constraint-First Layout
    rke?: boolean;  // Redundant Key Elimination
    sad?: boolean;  // Selective Anchor Duplication (Claude-only)
    tas?: boolean;  // Tokenizer Alignment Scoring
    dtr?: boolean;  // Description Text Reduction
    sco?: boolean;  // Structural Compression Operators
    csp?: boolean;  // Context-Sensitive Pruning
  };

  /** Output format: 'json' | 'yaml-like' | 'compact' */
  outputFormat?: string;

  /** Preserve tool names unchanged (default: true) */
  preserveToolNames?: boolean;
}

Profiles

| Profile | Principles Enabled | Use Case |
|---------|-------------------|----------|
| conservative | ATA, RKE, DTR | Maximum compatibility, moderate savings (~40%) |
| balanced | All except SAD | Best accuracy/savings tradeoff (~71%) |
| aggressive | All including SAD | Maximum compression, Claude-only for SAD (~75%) |

Supported Models

| Model Target | Family |
|-------------|--------|
| claude-sonnet, claude-opus, claude-haiku | Anthropic Claude |
| gpt-4, gpt-5, gpt-4o-mini | OpenAI GPT |
| llama-3.1, llama-3.2 | Meta Llama |
| mistral-7b, mistral-large | Mistral |
| gemma-3 | Google Gemma |
| phi-4 | Microsoft Phi |
| qwen-3 | Alibaba Qwen |
| deepseek-v3 | DeepSeek |
| auto | Auto-detect (conservative defaults) |

Tool Definition Formats

TSCG accepts both OpenAI and Anthropic tool formats:

OpenAI format:

const tool = {
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get weather for a location',
    parameters: {
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location'],
    },
  },
};

Anthropic format:

const tool = {
  name: 'get_weather',
  description: 'Get weather for a location',
  input_schema: {
    type: 'object',
    properties: { location: { type: 'string' } },
    required: ['location'],
  },
};
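Since both formats carry the same three pieces of information (name, description, schema), accepting both typically means normalizing to one internal shape. A minimal sketch of such a normalizer follows; the `normalize` helper and its return shape are hypothetical, not part of the @tscg/core API.

```typescript
// Illustrative sketch: normalize either accepted tool format to one internal
// shape. Hypothetical helper, not part of the @tscg/core API.
type OpenAITool = {
  type: 'function';
  function: { name: string; description?: string; parameters: object };
};
type AnthropicTool = { name: string; description?: string; input_schema: object };

function normalize(tool: OpenAITool | AnthropicTool) {
  // OpenAI wraps the definition in a 'function' key; Anthropic is flat.
  if ('function' in tool) {
    const { name, description, parameters } = tool.function;
    return { name, description, schema: parameters };
  }
  return { name: tool.name, description: tool.description, schema: tool.input_schema };
}
```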

Compressed Result

interface CompressedResult {
  /** Compressed tool definitions as a string */
  compressed: string;

  /** Compression metrics */
  metrics: {
    tokens: {
      original: number;
      compressed: number;
      savings: number;
      savingsPercent: number;
    };
    characters: { original: number; compressed: number };
    perTool: Array<{
      name: string;
      originalTokens: number;
      compressedTokens: number;
      savingsPercent: number;
    }>;
    compressionTimeMs: number;
  };

  /** Which TSCG principles were applied */
  appliedPrinciples: string[];
}

Advanced: Individual Transforms

For advanced users, TSCG exports individual transform functions from the engine:

import {
  applyToolSDM,   // Schema Description Minimization
  applyToolDRO,   // Description Redundancy Optimization
  applyToolCAS,   // Context-Aware Sorting
  applyToolTAS,   // Tokenizer Alignment Scoring
  optimizeToolDefinitions,  // Full pipeline
} from '@tscg/core';

Benchmark Results

Tested across 6 scenarios in the TAB (Tool-Aware Benchmark):

| Scenario | Token Savings | Accuracy |
|----------|:------------:|:--------:|
| Frontier Models (Claude 4, GPT-5.2, Gemini 2.5) | 71.7% | 100% |
| Small Models (7B-8B, 50 tools) | 71.2% | Significant improvement |
| Claude Code Simulation (77 tools) | 71.7% | 100% |
| GSM8K Reasoning + 50 tools | 71.5% | Improved reasoning |
| MCP Aggregation (50 tools, 5 servers) | 72.1% | 100% |
| BFCL Accuracy Retention | 71.7% | 99.5% ARR |

How It Works

TSCG applies 8 compression principles grounded in transformer architecture:

  1. ATA -- Abbreviated Type Annotations: string -> str, integer -> int
  2. DTR -- Description Text Reduction: Remove redundant words from descriptions
  3. RKE -- Redundant Key Elimination: Remove JSON keys the model can infer
  4. SCO -- Structural Compression Operators: Flatten nested schemas
  5. CFL -- Constraint-First Layout: Place constraints where causal attention processes them
  6. TAS -- Tokenizer Alignment Scoring: Optimize for BPE token boundaries
  7. CSP -- Context-Sensitive Pruning: Remove derivable information
  8. SAD -- Selective Anchor Duplication: Reinforce critical parameters (Claude-only)
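Step 1 (ATA) is the easiest to picture: a recursive walk over the schema that rewrites each `type` keyword to its abbreviation. The sketch below is an illustrative stand-in for that transform, not the library's `applyToolSDM`-family implementation; the `obj`/`arr` abbreviations are assumptions beyond the examples listed above.

```typescript
// Illustrative ATA sketch: recursively rewrite verbose JSON-Schema type names
// to abbreviations. Not the library's actual transform; 'obj' and 'arr' are
// assumed abbreviations not listed in the README.
const ATA_MAP: Record<string, string> = {
  string: 'str', integer: 'int', number: 'num',
  boolean: 'bool', object: 'obj', array: 'arr',
};

function abbreviateTypes(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(abbreviateTypes);
  if (schema && typeof schema === 'object') {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(schema as Record<string, unknown>)) {
      // Only rewrite string values under the 'type' key; recurse elsewhere.
      out[k] = k === 'type' && typeof v === 'string' && ATA_MAP[v]
        ? ATA_MAP[v]
        : abbreviateTypes(v);
    }
    return out;
  }
  return schema;
}
```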

Related Packages

License

MIT