# nirixa

v2.0.0
AI Observability & Cost Intelligence for JavaScript & TypeScript. Track token costs, latency, and hallucination risk for every LLM call — with zero friction.
```bash
npm install nirixa
# or
pnpm add nirixa
```

## Quick Start
```ts
import { NirixaClient } from 'nirixa'
import OpenAI from 'openai'

const nirixa = new NirixaClient({ apiKey: 'nirixa-your-key' })
const openai = new OpenAI()

// Wrap your existing call — response is completely unchanged
const response = await nirixa.track({
  feature: '/api/chat',
  fn: () => openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
})

console.log(response.choices[0].message.content)
```

## Three Ways to Integrate
### 1. `wrap()` — Transparent client proxy (recommended)
Wrap a provider client once and use it exactly like the original. Model, provider, and prompt are auto-extracted from every call — no duplication.
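Conceptually, this kind of transparent wrapper can be built with JavaScript's built-in `Proxy`. The sketch below is illustrative only, not nirixa's actual implementation; `transparentWrap` and `CallRecord` are hypothetical names:

```ts
// Illustrative sketch: a recursive Proxy forwards every property access,
// intercepts method calls to record metadata, and returns the provider's
// response unchanged — which is why the wrapped client behaves identically.
type CallRecord = { path: string; args: unknown[] }

function transparentWrap<T extends object>(target: T, log: CallRecord[], path = ''): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver)
      const fullPath = path ? `${path}.${String(prop)}` : String(prop)
      if (typeof value === 'function') {
        // Intercept the call, record it, then delegate to the original method
        return (...args: unknown[]) => {
          log.push({ path: fullPath, args })
          return value.apply(obj, args)
        }
      }
      if (value !== null && typeof value === 'object') {
        // Recurse so nested namespaces like chat.completions stay transparent
        return transparentWrap(value as object, log, fullPath)
      }
      return value
    },
  }) as T
}
```

The key design point is that the interceptor never touches the return value, so callers cannot tell the difference between the wrapped and unwrapped client.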
```ts
import { NirixaClient } from 'nirixa'
import OpenAI from 'openai'

const nirixa = new NirixaClient({ apiKey: 'nirixa-your-key' })
const openai = new OpenAI()

const ai = nirixa.wrap(openai, { feature: '/api/chat', user: userId })

// Use ai exactly like openai — tracking is automatic
const response = await ai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
})
```

Works with any provider:
```ts
import Anthropic from '@anthropic-ai/sdk'

const claude = nirixa.wrap(new Anthropic(), { feature: '/api/analyze' })

const response = await claude.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Summarize this...' }],
})
```

### 2. `track()` — Explicit per-call wrapping
```ts
const prompt = 'Summarize this document...'

const response = await nirixa.track({
  feature: '/api/summarize',
  user: 'user-123',
  prompt, // optional: improves hallucination scoring
  fn: () => openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: prompt }],
  }),
})
```

`model` and `provider` are auto-detected from the response — no need to pass them.
### 3. Auto-patch — Zero code changes
Patch provider SDKs globally at app startup. Every call is tracked without touching existing code.
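Under the hood, this style of global patching amounts to classic method wrapping on the SDK's prototype. A minimal sketch of the technique (illustrative only; `patchMethod` is a hypothetical helper, not part of nirixa's API):

```ts
// Illustrative sketch: replace a method on a provider's prototype so every
// instance is observed, while the original behavior and return value are
// preserved by delegating to the saved original.
function patchMethod(
  proto: any,
  name: string,
  onCall: (args: unknown[]) => void,
): void {
  const original = proto[name]
  proto[name] = function (this: unknown, ...args: unknown[]) {
    onCall(args)                      // record the call first
    return original.apply(this, args) // then delegate unchanged
  }
}
```

Because the patch lives on the prototype, instances created before or after the patch are all covered, which is what makes "zero code changes" possible.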
```ts
import { NirixaClient, patchOpenAI, patchAll } from 'nirixa'

const nirixa = new NirixaClient({ apiKey: 'nirixa-your-key' })

// Patch a specific provider
patchOpenAI(nirixa, '/api/chat')

// Or patch every installed provider at once
patchAll(nirixa)
// [nirixa] Patched 4 providers: OpenAI, Anthropic, Groq, Gemini
```

## Module-level API
Skip `new NirixaClient()` and use the module-level singleton:
```ts
import * as nirixa from 'nirixa'

nirixa.init({ apiKey: 'nirixa-your-key' })

const response = await nirixa.track({
  feature: '/api/chat',
  fn: () => openai.chat.completions.create({ ... }),
})

const ai = nirixa.wrap(openai, { feature: '/api/chat' })

await nirixa.flush() // drain all pending ingests before exit
```

## Supported Providers
| Provider | Auto-detected via | Patch function |
|---------------|---------------------------|------------------|
| OpenAI | `choices` + `usage` | `patchOpenAI` |
| Anthropic | `content` + `usage` | `patchAnthropic` |
| Groq | OpenAI-compatible shape | `patchGroq` |
| Google Gemini | `usageMetadata` | `patchGemini` |
| Mistral | OpenAI-compatible shape | `patchMistral` |
| Together AI | OpenAI-compatible shape | `patchTogether` |
| Ollama | `prompt_eval_count` | `patchOllama` |
| AWS Bedrock | `ResponseMetadata` | — |
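The "Auto-detected via" column amounts to duck-typing the response object by its distinctive fields. A minimal sketch of that idea (`detectProvider` is hypothetical, not nirixa's exact heuristics):

```ts
// Illustrative shape-based detection mirroring the table above: each
// provider family leaves a distinctive fingerprint on its response object.
function detectProvider(res: Record<string, unknown>): string {
  if ('choices' in res && 'usage' in res) return 'openai-compatible' // OpenAI, Groq, Mistral, Together
  if ('content' in res && 'usage' in res) return 'anthropic'
  if ('usageMetadata' in res) return 'gemini'
  if ('prompt_eval_count' in res) return 'ollama'
  if ('ResponseMetadata' in res) return 'bedrock'
  return 'unknown'
}
```

Order matters here: the OpenAI-compatible check must run before the Anthropic one, since both shapes carry a `usage` field.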
## Configuration
```ts
const nirixa = new NirixaClient({
  apiKey: 'nirixa-your-key',      // Required
  host: 'https://api.nirixa.in',  // Default
  scoreHallucinations: true,      // Hallucination risk scoring (LOW/MEDIUM/HIGH)
  asyncIngest: true,              // Non-blocking — zero added latency
  debug: false,                   // Log each tracked call to console
})
```

## What Gets Tracked
| Metric | Description |
|--------------------|------------------------------------------|
| Token cost | Per-call USD cost by feature and model |
| Latency | p50 / p95 / p99 response times |
| Hallucination risk | LOW / MEDIUM / HIGH heuristic scoring |
| Prompt drift | Output variance over time |
| Error rate | Failed calls by feature |
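Token cost is derived from the provider-reported usage counts. A sketch of the arithmetic, with made-up per-million-token prices (the rates and `example-model` name below are illustrative, not nirixa's pricing data):

```ts
// Illustrative cost arithmetic: USD cost = tokens / 1e6 * price per 1M
// tokens, with separate input (prompt) and output (completion) rates.
const PRICES_PER_1M: Record<string, { input: number; output: number }> = {
  'example-model': { input: 0.15, output: 0.6 }, // hypothetical rates
}

function callCostUSD(model: string, promptTokens: number, completionTokens: number): number {
  const p = PRICES_PER_1M[model]
  if (!p) return 0 // unknown model: no price data
  return (promptTokens / 1e6) * p.input + (completionTokens / 1e6) * p.output
}
```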
## `flush()` — Before process exit
In scripts or short-lived processes, call `flush()` to ensure all async ingests complete:
```ts
await nirixa.flush()
process.exit(0)
```

## Runtime Support
- Node.js 18+ (native `fetch`)
- Bun and Deno
- Edge runtimes (Vercel Edge, Cloudflare Workers)
- Browser (proxy the ingest endpoint)
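For browser use, proxying the ingest endpoint keeps the API key out of client-side code. A minimal sketch under assumptions: the `/ingest` path, header names, and `makeIngestProxy` helper are guesses for illustration, not documented nirixa endpoints:

```ts
// Illustrative server-side proxy: the browser posts its ingest payload to
// your own route, and the server attaches the secret key before forwarding.
type Fetch = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<unknown>

function makeIngestProxy(apiKey: string, doFetch: Fetch, host = 'https://api.nirixa.in') {
  return async (body: string) =>
    doFetch(`${host}/ingest`, { // assumed path, for illustration only
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        authorization: `Bearer ${apiKey}`, // key never reaches the browser
      },
      body,
    })
}
```

Injecting `doFetch` rather than calling the global `fetch` directly keeps the sketch testable and runtime-agnostic.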
## Links
- Dashboard: nirixa.in
- Python SDK: `pip install nirixa`
- Docs: nirixa.in/docs
- Email: [email protected]
