gram-library v1.1.1
Gram Library
A lightweight library for estimating token counts and costs for various Large Language Models (LLMs).
Features
- Supports OpenAI, Anthropic, Google (Gemini), Mistral, DeepSeek, and Meta-hosted model pricing presets.
- Estimates input and output costs based on current pricing.
- Handles chat format (arrays of messages) and raw text.
- Runs in the browser and Node.js.
- Exposes `modelAliases` and `deprecatedModelIds` to help manage legacy model IDs.
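Under the hood, an estimate of this kind is just a token count multiplied by per-token rates. A minimal self-contained sketch of that arithmetic (the `pricing` table and `estimateCost` helper below are illustrative stand-ins, not the library's internal API; the sample rates are chosen to reproduce the example output shown under Usage):

```javascript
// Illustrative per-token rates in USD (real presets live in the library
// and change as providers update pricing).
const pricing = {
  "gpt-4o": { input: 0.0000025, output: 0.00001 },
};

// Hypothetical helper: multiply a token count by a model's per-token rates.
function estimateCost(tokens, modelId) {
  const rates = pricing[modelId];
  if (!rates) throw new Error(`Unknown model: ${modelId}`);
  return {
    tokens,
    inputCost: tokens * rates.input,
    outputCost: tokens * rates.output,
  };
}

console.log(estimateCost(6, "gpt-4o"));
```

Note that these are estimates: the actual tokenizer and billing rules are provider-specific.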
Installation
npm install gram-library

Usage
import { Gram, modelAliases, deprecatedModelIds } from 'gram-library';
const gram = new Gram();
// Estimate cost and tokens for a simple string
const text = "Hello, how are you?";
const estimate = await gram.estimate(text, "gpt-4o");
console.log(estimate);
// Output: { tokens: 6, inputCost: 0.000015, outputCost: 0.00006 }
// Estimate for a chat conversation
const messages = [
{ role: "user", content: "Hello" },
{ role: "assistant", content: "Hi there!" }
];
const chatEstimate = await gram.estimate(messages, "claude-sonnet-4-6");
console.log(chatEstimate);
// Alias IDs are resolved automatically at runtime
const legacyEstimate = await gram.estimate(text, "mistral-large-3");
console.log(legacyEstimate);
// Optional: resolve legacy IDs to canonical IDs
const maybeAlias = "mistral-large-3";
const canonicalModelId = modelAliases[maybeAlias] || maybeAlias;
console.log({ maybeAlias, canonicalModelId, isDeprecated: deprecatedModelIds.includes(maybeAlias) });
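The last few lines can be folded into a small helper. A sketch, assuming (as in the snippet above) that `modelAliases` is a plain object mapping legacy IDs to canonical ones and `deprecatedModelIds` is an array; the model IDs below are hypothetical stand-ins, not real entries from the library:

```javascript
// Hypothetical stand-ins for the library's exports, so the example is runnable.
const modelAliases = { "legacy-model": "current-model" };
const deprecatedModelIds = ["legacy-model"];

// Resolve a possibly-legacy model ID to its canonical form,
// warning when the caller passed a deprecated ID.
function resolveModelId(id, aliases = modelAliases, deprecated = deprecatedModelIds) {
  const canonical = aliases[id] ?? id;
  if (deprecated.includes(id)) {
    console.warn(`Model "${id}" is deprecated; using "${canonical}" instead.`);
  }
  return canonical;
}

console.log(resolveModelId("legacy-model")); // "current-model"
console.log(resolveModelId("current-model")); // unknown IDs pass through unchanged
```

This mirrors what the library does automatically at estimate time, but is useful when you want to log or surface the canonical ID yourself.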