
@cogitator-ai/models
v3.0.0
Dynamic model registry with pricing for Cogitator
Downloads: 667

@cogitator-ai/models

Dynamic model registry with pricing information for Cogitator. Fetches up-to-date model data from LiteLLM and provides built-in fallbacks for major providers.

Installation

pnpm add @cogitator-ai/models

Features

  • Dynamic Data - Fetches latest model info from LiteLLM
  • Pricing Information - Input/output costs per million tokens
  • Capability Tracking - Vision, tools, streaming, JSON mode support
  • Multi-Provider - OpenAI, Anthropic, Google, Ollama, Azure, AWS, and more
  • Caching - Memory or file-based cache with configurable TTL
  • Fallback - Built-in models when external data is unavailable
  • Filtering - Query models by provider, capabilities, price

Quick Start

import { initializeModels, getModel, getPrice, listModels } from '@cogitator-ai/models';

await initializeModels();

const model = getModel('gpt-4o');
console.log(model?.contextWindow);
console.log(model?.capabilities?.supportsVision);

const price = getPrice('claude-sonnet-4-20250514');
console.log(`Input: $${price?.input}/M tokens`);
console.log(`Output: $${price?.output}/M tokens`);

const toolModels = listModels({
  supportsTools: true,
  provider: 'openai',
});

Model Registry

The ModelRegistry class manages model data with caching and auto-refresh.

Initialization

import { ModelRegistry } from '@cogitator-ai/models';

const registry = new ModelRegistry({
  cache: {
    ttl: 24 * 60 * 60 * 1000,
    storage: 'file',
    filePath: './cache/models.json',
  },
  autoRefresh: true,
  refreshInterval: 24 * 60 * 60 * 1000,
  fallbackToBuiltin: true,
});

await registry.initialize();

Configuration Options

interface RegistryOptions {
  cache?: CacheOptions;
  autoRefresh?: boolean;
  refreshInterval?: number;
  fallbackToBuiltin?: boolean;
}

interface CacheOptions {
  ttl: number;
  storage: 'memory' | 'file';
  filePath?: string;
}

| Option | Default | Description |
| ------------------- | ---------- | ------------------------------------ |
| cache.ttl | 24 hours | Cache time-to-live in milliseconds |
| cache.storage | 'memory' | Storage backend |
| cache.filePath | - | File path for file-based cache |
| autoRefresh | false | Enable automatic background refresh |
| refreshInterval | 24 hours | Refresh interval in milliseconds |
| fallbackToBuiltin | true | Use built-in models on fetch failure |

Registry Methods

// Load model data (must be called before any queries)
await registry.initialize();

// Look up a single model by id
const model = registry.getModel('gpt-4o');

// Pricing in dollars per million tokens
const price = registry.getPrice('claude-3-5-sonnet-20241022');

// List models matching a filter
const models = registry.listModels({
  provider: 'anthropic',
  supportsVision: true,
});

// Provider queries
const providers = registry.listProviders();
const provider = registry.getProvider('openai');

// Registry state
console.log(registry.getModelCount());
console.log(registry.isInitialized());

// Force a re-fetch of model data
await registry.refresh();

// Stop background refresh and release the registry
registry.shutdown();

Global Functions

For convenience, the package provides global functions that use a default registry:

import {
  initializeModels,
  getModel,
  getPrice,
  listModels,
  getModelRegistry,
  shutdownModels,
} from '@cogitator-ai/models';

// Initialize the shared default registry
await initializeModels();

const model = getModel('gpt-4o-mini');
const price = getPrice('gpt-4o-mini');
const allModels = listModels();

// Access the default registry directly when needed
const registry = getModelRegistry();
const count = registry.getModelCount();

// Shut down the default registry
shutdownModels();

Model Information

ModelInfo Type

interface ModelInfo {
  id: string;
  provider: string;
  displayName: string;
  pricing: ModelPricing;
  contextWindow: number;
  maxOutputTokens?: number;
  capabilities?: ModelCapabilities;
  deprecated?: boolean;
  aliases?: string[];
}

interface ModelPricing {
  input: number;
  output: number;
  inputCached?: number;
  outputCached?: number;
}

interface ModelCapabilities {
  supportsVision?: boolean;
  supportsTools?: boolean;
  supportsFunctions?: boolean;
  supportsStreaming?: boolean;
  supportsJson?: boolean;
}
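The optional inputCached and outputCached rates cover provider prompt caching. Here is a sketch of how such rates might be applied when pricing a single request; the helper and its parameter names are illustrative, not part of the package, and it assumes inputCached is the per-million rate for cache-hit input tokens:

```typescript
// Illustrative helper (not part of @cogitator-ai/models): cost of a request
// where some input tokens hit the provider's prompt cache.
interface ModelPricing {
  input: number;        // $ per million input tokens
  output: number;       // $ per million output tokens
  inputCached?: number; // $ per million cache-hit input tokens (assumed semantics)
  outputCached?: number;
}

function requestCost(
  pricing: ModelPricing,
  tokens: { input: number; cachedInput?: number; output: number },
): number {
  const cached = tokens.cachedInput ?? 0;
  const fresh = tokens.input - cached;
  // Fall back to the regular input rate when the model has no cached rate
  const cachedRate = pricing.inputCached ?? pricing.input;
  return (
    (fresh / 1_000_000) * pricing.input +
    (cached / 1_000_000) * cachedRate +
    (tokens.output / 1_000_000) * pricing.output
  );
}
```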

Example Model

const model = getModel('gpt-4o');
// {
//   id: 'gpt-4o',
//   provider: 'openai',
//   displayName: 'GPT-4o',
//   pricing: { input: 2.5, output: 10 },
//   contextWindow: 128000,
//   maxOutputTokens: 16384,
//   capabilities: {
//     supportsVision: true,
//     supportsTools: true,
//     supportsStreaming: true,
//     supportsJson: true,
//   }
// }

Filtering Models

Use ModelFilter to query specific models:

interface ModelFilter {
  provider?: string;
  supportsTools?: boolean;
  supportsVision?: boolean;
  minContextWindow?: number;
  maxPricePerMillion?: number;
  excludeDeprecated?: boolean;
}

Filter Examples

const openaiModels = listModels({
  provider: 'openai',
});

const visionModels = listModels({
  supportsVision: true,
});

const toolModels = listModels({
  supportsTools: true,
  excludeDeprecated: true,
});

const largeContext = listModels({
  minContextWindow: 100000,
});

const cheapModels = listModels({
  maxPricePerMillion: 1.0,
});

const anthropicVision = listModels({
  provider: 'anthropic',
  supportsVision: true,
  supportsTools: true,
});

Providers

Built-in Providers

import { BUILTIN_PROVIDERS } from '@cogitator-ai/models';

| Provider | Website |
| ------------ | ---------------------- |
| OpenAI | openai.com |
| Anthropic | anthropic.com |
| Google | ai.google.dev |
| Ollama | ollama.com |
| Azure OpenAI | azure.microsoft.com |
| AWS Bedrock | aws.amazon.com/bedrock |
| Mistral AI | mistral.ai |
| Cohere | cohere.com |
| Groq | groq.com |
| Together AI | together.ai |
| Fireworks AI | fireworks.ai |
| DeepInfra | deepinfra.com |
| Perplexity | perplexity.ai |
| Replicate | replicate.com |
| xAI | x.ai |

Provider Information

interface ProviderInfo {
  id: string;
  name: string;
  website?: string;
  models: string[];
}

const providers = registry.listProviders();
const openai = registry.getProvider('openai');
console.log(openai?.models.length);

Built-in Models

Fallback models are available when LiteLLM data cannot be fetched:

import {
  BUILTIN_MODELS,
  OPENAI_MODELS,
  ANTHROPIC_MODELS,
  GOOGLE_MODELS,
} from '@cogitator-ai/models';

OpenAI Models

  • gpt-4o
  • gpt-4o-mini
  • o1
  • o1-mini
  • o3-mini

Anthropic Models

  • claude-sonnet-4-20250514
  • claude-3-5-sonnet-20241022
  • claude-3-5-haiku-20241022
  • claude-3-opus-20240229

Google Models

  • gemini-2.5-pro
  • gemini-2.5-flash
  • gemini-2.0-flash
  • gemini-1.5-pro
  • gemini-1.5-flash
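The fallback idea itself is simple: try the remote source, and fall back to a bundled table on failure. A minimal sketch of the pattern, using a hypothetical local table in place of BUILTIN_MODELS:

```typescript
// Sketch of the fallback pattern. The `builtin` table and `loadPricing`
// helper are hypothetical stand-ins, not the package's actual internals.
type Pricing = { input: number; output: number };

const builtin: Record<string, Pricing> = {
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
};

async function loadPricing(
  fetchRemote: () => Promise<Record<string, Pricing>>,
): Promise<Record<string, Pricing>> {
  try {
    return await fetchRemote();
  } catch {
    // Remote fetch failed (offline, rate-limited, etc.) — use the bundled data
    return builtin;
  }
}
```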

Caching

Memory Cache

const registry = new ModelRegistry({
  cache: {
    ttl: 60 * 60 * 1000,
    storage: 'memory',
  },
});

File Cache

const registry = new ModelRegistry({
  cache: {
    ttl: 24 * 60 * 60 * 1000,
    storage: 'file',
    filePath: './cache/models.json',
  },
});

ModelCache Class

import { ModelCache } from '@cogitator-ai/models';

const cache = new ModelCache({
  ttl: 3600000,
  storage: 'file',
  filePath: './models-cache.json',
});

// Read cached models (fresh data within the TTL)
const models = await cache.get();

// Write models to the cache
await cache.set(models);

// Read cached data even after the TTL has expired (useful as a last resort)
const staleData = await cache.getStale();

Data Fetching

LiteLLM Integration

import { fetchLiteLLMData, transformLiteLLMData } from '@cogitator-ai/models';

// Download the raw model map published by LiteLLM
const rawData = await fetchLiteLLMData();

// Convert LiteLLM entries into ModelInfo records
const models = transformLiteLLMData(rawData);

LiteLLM Data Structure

interface LiteLLMModelEntry {
  max_tokens?: number;
  max_input_tokens?: number;
  max_output_tokens?: number;
  input_cost_per_token?: number;
  output_cost_per_token?: number;
  litellm_provider?: string;
  supports_function_calling?: boolean;
  supports_vision?: boolean;
  deprecation_date?: string;
}
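Note the unit difference: LiteLLM reports cost per single token, while ModelPricing is expressed in dollars per million tokens. A sketch of that conversion (illustrative only; transformLiteLLMData presumably performs it alongside mapping the other fields):

```typescript
// Illustrative conversion from LiteLLM's per-token costs to the
// per-million-token pricing used by ModelPricing.
interface LiteLLMModelEntry {
  input_cost_per_token?: number;
  output_cost_per_token?: number;
}

function toPerMillion(entry: LiteLLMModelEntry): { input: number; output: number } {
  return {
    input: (entry.input_cost_per_token ?? 0) * 1_000_000,
    output: (entry.output_cost_per_token ?? 0) * 1_000_000,
  };
}
```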

Examples

Cost Calculator

import { getPrice } from '@cogitator-ai/models';

function calculateCost(modelId: string, inputTokens: number, outputTokens: number): number | null {
  const price = getPrice(modelId);
  if (!price) return null;

  const inputCost = (inputTokens / 1_000_000) * price.input;
  const outputCost = (outputTokens / 1_000_000) * price.output;

  return inputCost + outputCost;
}

const cost = calculateCost('gpt-4o', 10000, 2000);
console.log(`Cost: $${cost?.toFixed(4)}`);

Model Selector

import { listModels } from '@cogitator-ai/models';

function selectBestModel(options: {
  needsVision?: boolean;
  needsTools?: boolean;
  maxCost?: number;
  minContext?: number;
}): string | null {
  const models = listModels({
    supportsVision: options.needsVision,
    supportsTools: options.needsTools,
    maxPricePerMillion: options.maxCost,
    minContextWindow: options.minContext,
    excludeDeprecated: true,
  });

  if (models.length === 0) return null;

  models.sort((a, b) => {
    const aPrice = (a.pricing.input + a.pricing.output) / 2;
    const bPrice = (b.pricing.input + b.pricing.output) / 2;
    return aPrice - bPrice;
  });

  return models[0].id;
}

const cheapTool = selectBestModel({
  needsTools: true,
  maxCost: 2.0,
});

Provider Dashboard

import { getModelRegistry, initializeModels } from '@cogitator-ai/models';

async function showDashboard() {
  await initializeModels();
  const registry = getModelRegistry();

  console.log(`Total models: ${registry.getModelCount()}`);
  console.log();

  for (const provider of registry.listProviders()) {
    const models = registry.listModels({ provider: provider.id });
    console.log(`${provider.name}: ${models.length} models`);
    if (models.length === 0) continue; // skip empty providers to avoid dividing by zero

    const avgPrice =
      models.reduce((sum, m) => sum + (m.pricing.input + m.pricing.output) / 2, 0) / models.length;
    console.log(`  Avg price: $${avgPrice.toFixed(2)}/M tokens`);
  }
}

Type Reference

import type {
  ModelInfo,
  ModelPricing,
  ModelCapabilities,
  ModelFilter,
  ProviderInfo,
  CacheOptions,
  RegistryOptions,
  LiteLLMModelEntry,
  LiteLLMModelData,
} from '@cogitator-ai/models';

License

MIT