# openai-cost

v1.0.19

OpenAI token usage cost calculator and fetch-based call tracker utilities.
Utilities to estimate OpenAI API call costs from response.usage, plus a fetch wrapper to track each OpenAI call.
This folder currently provides:

- `OpenAiCostCalculator`: computes USD cost for one call.
- `OpenAICallTracker`: wraps `fetch` so you can process each API response in a callback.
## What It Solves

- Convert OpenAI usage tokens into per-call USD cost.
- Handle cached input tokens (when pricing supports cache discounts).
- Support pricing tiers via `priorityType`: `batch`, `flex`, `standard`, `priority`.
- Track costs per bucket/team/project with a callback-based fetch wrapper.
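The token-to-USD conversion above is conceptually a per-million-token multiplication. Here is a minimal sketch of that arithmetic with hypothetical prices and a hypothetical `Usage` shape (not the library's real pricing table or types; whether cached tokens are included in `inputTokens` is also an assumption here):

```typescript
// Hypothetical usage shape and USD prices per 1M tokens (illustration only).
type Usage = { inputTokens: number; cachedInputTokens: number; outputTokens: number };
const price = { input: 0.10, cachedInput: 0.025, output: 0.40 };

function estimateCost(usage: Usage) {
  // Assume cached tokens are counted inside inputTokens, so subtract them out.
  const liveInput = usage.inputTokens - usage.cachedInputTokens;
  const inputCost = (liveInput / 1_000_000) * price.input;
  const cacheInputCost = (usage.cachedInputTokens / 1_000_000) * price.cachedInput;
  const outputCost = (usage.outputTokens / 1_000_000) * price.output;
  return { inputCost, cacheInputCost, outputCost, totalCost: inputCost + cacheInputCost + outputCost };
}

console.log(estimateCost({ inputTokens: 1_000_000, cachedInputTokens: 0, outputTokens: 500_000 }));
```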
## Install

From this folder:

```shell
npm install
```

Project dependency currently used here:

- `@jeromeetienne/openai-cache`

The examples also use:

- `openai`
- `cacheable`
- `@keyv/sqlite`
- `chalk` (comparator example)
## Quick Start: Cost Per Response

```typescript
import OpenAI from "openai";
import { OpenAiCostCalculator } from "../src/openai_cost_calculator";

const openaiClient = new OpenAI();
const modelName = "gpt-4.1-nano";

const response = await openaiClient.responses.create({
  model: modelName,
  input: "say hello",
});

const usage = response.usage!;
const cost = await OpenAiCostCalculator.calculateCost(modelName, usage);
console.log(cost);
// {
//   inputCost: number,
//   cacheInputCost: number,
//   outputCost: number,
//   totalCost: number
// }
```

## API
### OpenAiCostCalculator.calculateCost(modelName, usage, priorityType?)

Signature:

```typescript
calculateCost(
  modelName: string,
  openaiUsage: OpenAI.Responses.ResponseUsage,
  priorityType: "batch" | "flex" | "standard" | "priority" = "standard"
): Promise<OpenAiCostResponse>
```

Returns:

- `inputCost`: input token cost in USD
- `cacheInputCost`: cached input token cost in USD
- `outputCost`: output token cost in USD
- `totalCost`: total USD cost
Behavior notes:

- Normalizes date-suffixed model names such as `gpt-4o-2024-05-13` to `gpt-4o`.
- Handles the special pricing split for:
  - `gpt-5.4` (`<272K` vs `>272K` context length)
  - `gpt-5.4-pro` (`<272K` vs `>272K` context length)
- Throws if no pricing is found for the model.
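The date-suffix normalization above can be sketched as stripping a trailing `YYYY-MM-DD` segment before the pricing lookup. The regex below is an assumption for illustration, not the library's actual implementation:

```typescript
// Strip a trailing "-YYYY-MM-DD" date suffix, if present, so that
// "gpt-4o-2024-05-13" resolves to the same pricing entry as "gpt-4o".
function normalizeModelName(modelName: string): string {
  return modelName.replace(/-\d{4}-\d{2}-\d{2}$/, "");
}

console.log(normalizeModelName("gpt-4o-2024-05-13")); // "gpt-4o"
console.log(normalizeModelName("gpt-4.1-nano"));      // unchanged: "gpt-4.1-nano"
```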
### OpenAICallTracker.getFetchFn(trackerCallback, options?)

Signature:

```typescript
getFetchFn(
  trackerCallback: (bucketId: string, response: Response) => Promise<void>,
  options?: {
    bucketId?: string;
    originalFetch?: (input: RequestInfo, init?: RequestInit) => Promise<Response>;
  }
): Promise<(input: RequestInfo, init?: RequestInit) => Promise<Response>>
```

Behavior:

- Calls the provided `originalFetch` (default: global `fetch`).
- Clones the response and passes it to `trackerCallback(bucketId, response.clone())`.
- Returns the original response untouched.
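The clone-and-callback behavior above can be re-implemented in a few lines. The following is a minimal sketch (synchronous factory, `"default"` bucket fallback, and the stubbed fetch are assumptions for illustration; the real `getFetchFn` returns a Promise), exercised without network access:

```typescript
type FetchFn = (input: RequestInfo, init?: RequestInit) => Promise<Response>;

function makeTrackedFetch(
  trackerCallback: (bucketId: string, response: Response) => Promise<void>,
  options?: { bucketId?: string; originalFetch?: FetchFn }
): FetchFn {
  const bucketId = options?.bucketId ?? "default";
  const originalFetch = options?.originalFetch ?? fetch;
  return async (input, init) => {
    const response = await originalFetch(input, init);
    // Clone so the callback can consume the body without draining the original.
    await trackerCallback(bucketId, response.clone());
    return response;
  };
}

// Usage with a stubbed fetch, so this runs offline:
const stubFetch: FetchFn = async () => new Response(JSON.stringify({ ok: true }));
const seen: string[] = [];
const trackedFetch = makeTrackedFetch(
  async (bucketId, res) => { seen.push(`${bucketId}:${(await res.json()).ok}`); },
  { bucketId: "team-a", originalFetch: stubFetch }
);

const res = await trackedFetch("https://example.invalid");
console.log(seen[0], res.ok); // "team-a:true" true
```

`response.clone()` is what lets the callback read usage data from the body while the caller still receives a readable response.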
## Running The Included Examples

From `contribs/openai-cost`:

```shell
npm run sample:basic
npm run sample:comparator
npm run sample:tracker
```

What they show:

- `examples/openai_cost_example.ts`: one call, printing usage and cost.
- `examples/openai_cost_comparator.ts`: compares cost per call across multiple models.
- `examples/openai_cost_tracker_example.ts`: combines cache + tracker callback + a persisted sample DB.
## Tracker + Cache Pattern

`openai_cost_tracker_example.ts` demonstrates:

- wrapping the `@jeromeetienne/openai-cache` fetch with `OpenAICallTracker`,
- grouping tracked costs by `bucketId`,
- splitting spending into `spent` (live calls) and `saved` (cache hits),
- persisting tracked totals to JSON.
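The `spent`/`saved` bookkeeping above can be sketched as a per-bucket accumulator. The names and shape below are assumptions for illustration, not the example's actual code:

```typescript
// Hypothetical per-bucket totals: live calls add to "spent", cache hits add
// to "saved" (money you would have spent without the cache).
type Totals = { spent: number; saved: number };
const totalsPerBucket = new Map<string, Totals>();

function recordCall(bucketId: string, costUsd: number, cacheHit: boolean) {
  const totals = totalsPerBucket.get(bucketId) ?? { spent: 0, saved: 0 };
  if (cacheHit) totals.saved += costUsd;
  else totals.spent += costUsd;
  totalsPerBucket.set(bucketId, totals);
}

recordCall("team-a", 0.002, false); // live call
recordCall("team-a", 0.002, true);  // cache hit
console.log(totalsPerBucket.get("team-a")); // { spent: 0.002, saved: 0.002 }

// Persisting tracked totals to JSON:
const json = JSON.stringify(Object.fromEntries(totalsPerBucket));
```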
## Pricing Table

Pricing is hardcoded in `src/openai_cost_calculator.ts` (`pricingPerModel`). Update that object when OpenAI pricing changes.
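A table like this is typically a record keyed by normalized model name. The shape and numbers below are hypothetical (the real `pricingPerModel` may differ in field names and values); they only illustrate the lookup-then-throw behavior:

```typescript
// Hypothetical pricing table, USD per 1M tokens.
type ModelPricing = { input: number; cachedInput: number; output: number };

const pricingPerModel: Record<string, ModelPricing> = {
  "gpt-4o": { input: 2.5, cachedInput: 1.25, output: 10 },
  "gpt-4.1-nano": { input: 0.1, cachedInput: 0.025, output: 0.4 },
};

const pricing = pricingPerModel["gpt-4.1-nano"];
// Mirrors the calculator's behavior: unknown models throw.
if (pricing === undefined) throw new Error("no pricing found for model");
console.log(pricing.output); // 0.4
```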
## Caveats

- Cost results are estimates based on the local pricing table.
- Unknown/new model names will throw until they are added to `pricingPerModel`.
- The calculator currently mutates model pricing values when `priorityType` is not `standard`; calling it repeatedly with mixed priorities in one process can cause results to drift. If you need strict correctness, clone pricing values before applying priority multipliers.
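The clone-before-multiply workaround suggested above can look like this. The multiplier values and field names are assumptions for illustration:

```typescript
type ModelPricing = { input: number; output: number };
const basePricing: ModelPricing = { input: 0.1, output: 0.4 };
// Hypothetical priority multipliers.
const multiplierPerPriority = { batch: 0.5, flex: 0.5, standard: 1, priority: 2 };

function pricingFor(priorityType: keyof typeof multiplierPerPriority): ModelPricing {
  const m = multiplierPerPriority[priorityType];
  // Build a fresh object instead of scaling basePricing in place, so repeated
  // calls with mixed priorities never drift.
  return { input: basePricing.input * m, output: basePricing.output * m };
}

console.log(pricingFor("flex")); // { input: 0.05, output: 0.2 }
console.log(basePricing);        // unchanged: { input: 0.1, output: 0.4 }
```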
## License
MIT
