# @hober/x402-ai

v0.1.0
AI inference layer for the x402 payment protocol. Estimate costs across 17 providers and 100+ models before paying, find equivalent models on cheaper providers, and generate or parse x402 `402 Payment Required` responses with AI-specific metadata. Zero runtime dependencies.
## Installation

```bash
npm install @hober/x402-ai
# or
bun add @hober/x402-ai
# or
yarn add @hober/x402-ai
```

## Three Capabilities
### 1. Cross-Provider Inference Cost Estimator

Estimate the cost of an AI request across multiple providers *before* paying.

```ts
import { estimateCost } from '@hober/x402-ai';

const result = estimateCost({
  model: 'deepseek/v3.2',
  inputTokens: 1500,
  outputTokens: 500,
  providers: ['deepseek', 'groq', 'fireworks'],
});

console.log(result.cheapest);
// { slug: 'deepseek/v3.2', provider: 'deepseek', totalCost: 0.00035, ... }

console.log(result.estimates);
// All matching models, sorted by cost
```

The estimator automatically finds equivalent models on other providers. To disable this:

```ts
const result = estimateCost({
  model: 'deepseek/v3.2',
  inputTokens: 1500,
  outputTokens: 500,
  includeAlternatives: false,
});
```

### 2. Model Equivalence Registry
Find which models across different providers are the same underlying model.

```ts
import { findAlternatives, areEquivalent, cheapestInFamily } from '@hober/x402-ai';

// Find all providers hosting DeepSeek V3
const alts = findAlternatives('deepseek/v3.2');
// [
//   { slug: 'deepseek/v3.2', provider: 'deepseek', match: 'exact', ... },
//   { slug: 'deepseek/v3', provider: 'deepseek', match: 'exact', ... },
//   { slug: 'fireworks/deepseek-v3', provider: 'fireworks', match: 'exact', ... },
//   { slug: 'deepinfra/deepseek-v3', provider: 'deepinfra', match: 'exact', ... },
//   { slug: 'deepseek/coder-v3', provider: 'deepseek', match: 'equivalent', ... },
// ]

// Check whether two models are equivalent
const match = areEquivalent('deepseek/v3.2', 'fireworks/deepseek-v3');
// 'exact'

// Find the cheapest provider for a model family
const cheapest = cheapestInFamily('llama-3.1-8b');
// { slug: 'deepinfra/llama-3.1-8b', provider: 'deepinfra', inputPrice: 0.02, ... }
```

Match levels:

- `exact` -- same model weights, same architecture
- `equivalent` -- same base weights, minor tuning differences
- `similar` -- same family, different size or distillation
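When choosing among alternatives, a consumer will usually prefer tighter matches. A minimal sketch of one such preference order, breaking cost ties by match tightness (illustrative only; this is not the library's internal logic, and the `pickAlternative` helper and its costs are hypothetical):

```ts
// Rank match levels so 'exact' beats 'equivalent' beats 'similar'.
type MatchLevel = 'exact' | 'equivalent' | 'similar';

const MATCH_RANK: Record<MatchLevel, number> = {
  exact: 0,
  equivalent: 1,
  similar: 2,
};

interface Alternative {
  slug: string;
  totalCost: number;
  match: MatchLevel;
}

// Pick the cheapest alternative; on equal cost, prefer the tighter match.
function pickAlternative(alts: Alternative[]): Alternative | undefined {
  return [...alts].sort(
    (a, b) => a.totalCost - b.totalCost || MATCH_RANK[a.match] - MATCH_RANK[b.match],
  )[0];
}

const best = pickAlternative([
  { slug: 'fireworks/deepseek-v3', totalCost: 0.005, match: 'exact' },
  { slug: 'deepinfra/deepseek-v3', totalCost: 0.0041, match: 'exact' },
  { slug: 'deepseek/coder-v3', totalCost: 0.0041, match: 'equivalent' },
]);
// best is the deepinfra host: same price as coder-v3, but an exact match
```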
### 3. Extended x402 Inference Protocol

Generate and parse x402 `402 Payment Required` responses with AI-specific metadata.

```ts
import {
  createInferencePaymentSpec,
  parseInferencePayment,
  isInferencePayment,
} from '@hober/x402-ai';

// Generate a 402 response body with inference metadata
const spec = createInferencePaymentSpec({
  model: 'deepseek/v3.2',
  estimatedCost: 0.0042,
  alternatives: [
    { model: 'fireworks/deepseek-v3', provider: 'fireworks', cost: 0.0050, match: 'exact' },
  ],
  providerHealth: 0.98,
  payTo: '0x2870C66dA1A8D26A73c61EfCc17C6Ef93e513eFF',
  chain: 'base',
});
// Returns a full x402 payment spec with an x-inference extension

// Parse a 402 response from any x402-compatible gateway
const response = await fetch('https://gateway.example.com/v1/chat/completions', {
  method: 'POST',
  body: JSON.stringify({ model: 'deepseek/v3.2', messages: [...] }),
});

if (response.status === 402) {
  const body = await response.json();
  const parsed = parseInferencePayment(body);
  if (parsed.hasInferenceMetadata) {
    console.log('Model:', parsed.inference.model);
    console.log('Estimated cost:', parsed.inference.estimatedCost);
    console.log('Alternatives:', parsed.inference.alternatives);
    console.log('Provider health:', parsed.inference.providerHealth);
  }
}
```

## API Reference
### Cost Estimator

| Function | Description |
|----------|-------------|
| `estimateCost(input)` | Estimate cost across providers. Returns `{ cheapest, best, estimates[] }` |
| `listModels(options?)` | List all models, optionally filtered by provider or features |
| `getModelPricing(slug)` | Get pricing for a single model by slug |
| `getProviders()` | Get all provider IDs in the catalog |
| `getDataVersion()` | Get the date the pricing data was last updated |
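For intuition, the cheapest-pick in `estimateCost` behaves like a filter, price, and sort pass over the catalog. A self-contained sketch against a hypothetical two-entry catalog (the prices and field names here are placeholders chosen to reproduce the `0.00035` figure from the example above, not the shipped data):

```ts
// Hypothetical catalog rows: prices are USD per 1M tokens, as in the shipped data.
interface CatalogEntry {
  slug: string;
  provider: string;
  inputPrice: number;  // USD per 1M input tokens
  outputPrice: number; // USD per 1M output tokens
}

const catalog: CatalogEntry[] = [
  { slug: 'deepseek/v3.2', provider: 'deepseek', inputPrice: 0.14, outputPrice: 0.28 },
  { slug: 'fireworks/deepseek-v3', provider: 'fireworks', inputPrice: 0.9, outputPrice: 0.9 },
];

// Cost of one request: scale token counts down to millions, then multiply.
function requestCost(e: CatalogEntry, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * e.inputPrice + (outputTokens / 1_000_000) * e.outputPrice;
}

// Sort all candidates by total cost; the head of the list is the cheapest.
const estimates = catalog
  .map((e) => ({ ...e, totalCost: requestCost(e, 1500, 500) }))
  .sort((a, b) => a.totalCost - b.totalCost);

const cheapest = estimates[0];
// 1500/1e6 * 0.14 + 500/1e6 * 0.28 = 0.00035 for the deepseek entry
```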
### Model Equivalence

| Function | Description |
|----------|-------------|
| `findAlternatives(slug)` | Find equivalent models on other providers |
| `getEquivalenceGroups()` | Get all equivalence groups |
| `getGroupForModel(slug)` | Get the equivalence group for a model |
| `areEquivalent(slugA, slugB)` | Check if two models are equivalent |
| `cheapestInFamily(family)` | Find the cheapest provider for a model family |
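Conceptually, the registry is a set of groups, each mapping one underlying model to its slugs across providers, and `findAlternatives` is then a group lookup. A toy sketch under that assumption (the group contents and the `lookupAlternatives` helper are hypothetical, not the package's actual data or API):

```ts
// Hypothetical equivalence data: each group lists slugs sharing base weights.
interface GroupMember {
  slug: string;
  provider: string;
  match: 'exact' | 'equivalent' | 'similar';
}

const groups: Record<string, GroupMember[]> = {
  'deepseek-v3': [
    { slug: 'deepseek/v3.2', provider: 'deepseek', match: 'exact' },
    { slug: 'fireworks/deepseek-v3', provider: 'fireworks', match: 'exact' },
    { slug: 'deepseek/coder-v3', provider: 'deepseek', match: 'equivalent' },
  ],
};

// A findAlternatives-style lookup: return the group containing the slug.
function lookupAlternatives(slug: string): GroupMember[] {
  for (const members of Object.values(groups)) {
    if (members.some((m) => m.slug === slug)) return members;
  }
  return []; // unknown slug: no alternatives
}

const found = lookupAlternatives('deepseek/v3.2');
// all three members of the deepseek-v3 group, including the fireworks host
```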
### x402 Protocol

| Function | Description |
|----------|-------------|
| `createInferencePaymentSpec(input)` | Generate an x402 payment spec with `x-inference` extension |
| `parseInferencePayment(body)` | Parse a 402 response, extracting inference metadata |
| `isInferencePayment(body)` | Check if a 402 response has `x-inference` metadata |
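At its core, an `isInferencePayment`-style check just verifies that the 402 body carries the `x-inference` extension before trusting its metadata. A minimal sketch of such a guard (the field layout is assumed from the examples above; the package's actual validation is presumably more thorough):

```ts
// Assumed shape: a 402 body carries inference metadata under an 'x-inference' key.
interface InferenceExtension {
  model: string;
  estimatedCost: number;
  providerHealth?: number;
}

// Type guard: does this 402 body carry an x-inference object?
function hasInferenceExtension(body: unknown): body is { 'x-inference': InferenceExtension } {
  if (typeof body !== 'object' || body === null) return false;
  const ext = (body as Record<string, unknown>)['x-inference'];
  return typeof ext === 'object' && ext !== null;
}

const body = {
  'x-inference': { model: 'deepseek/v3.2', estimatedCost: 0.0042, providerHealth: 0.98 },
};

if (hasInferenceExtension(body)) {
  // Inside this branch, TypeScript knows the extension exists.
  console.log(body['x-inference'].model); // 'deepseek/v3.2'
}
```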
## Data Sources

Pricing data is extracted from the Hober gateway's model catalog, covering 17 providers:

DeepSeek, Qwen (Alibaba), GLM (Zhipu AI), Kimi (Moonshot), MiniMax, StepFun, OpenAI, Anthropic, Google, xAI, Mistral, Fireworks AI, Groq, Cohere, Cerebras, Perplexity, DeepInfra

Prices are in USD per 1 million tokens. The data version is embedded in the package and queryable via `getDataVersion()`.
## Updating Pricing Data

To refresh the static pricing data:

1. Export model data from the Hober gateway's `seed-models.ts`
2. Update `src/data/pricing.json` with current prices
3. Update `src/data/equivalence.json` if new cross-provider models are added
4. Update the `version` field in both JSON files
5. Rebuild: `bun run build`
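Since both JSON files carry a `version` field, a small pre-build consistency check can catch a missed update in step 4. A hypothetical helper along those lines (a real script would `JSON.parse` the two files under `src/data/` instead of using inline objects, and the date-style version strings here are assumptions):

```ts
// Hypothetical pre-build check: both data files must report the same version.
interface VersionedData {
  version: string;
}

function versionsMatch(pricing: VersionedData, equivalence: VersionedData): boolean {
  return pricing.version === equivalence.version;
}

// Inline stand-ins for the parsed contents of pricing.json and equivalence.json.
const pricing = { version: '2025-06-01' };
const equivalence = { version: '2025-06-01' };

if (!versionsMatch(pricing, equivalence)) {
  throw new Error('pricing.json and equivalence.json versions differ; update both before building');
}
```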
## License

MIT
