
llm-observer

v1.10.0


Privacy-first, local-only LLM cost tracker. Track OpenAI, Anthropic, Gemini costs without sending data to the cloud.


LLM Observer 🛡️

Privacy-first, local-only LLM cost tracking for developers.

Stop sending your prompt data to SaaS observability tools. LLM Observer runs entirely on your machine — it tracks every OpenAI, Anthropic, Gemini, Mistral, and Groq call, calculates exact costs, and visualizes everything in a real-time dashboard at localhost:4001.

Your API keys, prompts, and responses never leave your machine.


Quick start

```bash
npx llm-observer start
```

That's it. Proxy starts on port 4000, dashboard on port 4001.


How it works

Point your existing LLM code at the local proxy instead of the provider directly:

OpenAI (Node.js)

```js
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-actual-key',           // still goes here, stored locally
  baseURL: 'http://localhost:4000/v1/openai',
});
```

Anthropic (Node.js)

```js
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({
  apiKey: 'your-actual-key',
  baseURL: 'http://localhost:4000/v1/anthropic',
});
```

Google Gemini

```js
baseURL: 'http://localhost:4000/v1/google'
```

Mistral / Groq / Ollama (local)

```js
baseURL: 'http://localhost:4000/v1/mistral'
baseURL: 'http://localhost:4000/v1/groq'
baseURL: 'http://localhost:4000/v1/custom/http%3A%2F%2Flocalhost%3A11434'
```

Every request is intercepted, logged, costed, and shown in the dashboard — zero changes to your application logic.
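The path-to-provider routing implied by the baseURLs above can be sketched as a pure function. This is an illustrative sketch under assumed names, not the package's actual source; the upstream hostnames are the providers' public API endpoints.

```javascript
// Map a proxy path like /v1/openai/chat/completions to the upstream
// provider URL. The path layout mirrors the baseURL scheme shown above.
const UPSTREAMS = {
  openai: 'https://api.openai.com/v1',
  anthropic: 'https://api.anthropic.com/v1',
  google: 'https://generativelanguage.googleapis.com/v1beta',
  mistral: 'https://api.mistral.ai/v1',
  groq: 'https://api.groq.com/openai/v1',
};

function resolveUpstream(proxyPath) {
  const match = proxyPath.match(/^\/v1\/([^/]+)(\/.*)$/);
  if (!match) throw new Error(`Unroutable path: ${proxyPath}`);
  const [, provider, rest] = match;
  if (provider === 'custom') {
    // /v1/custom/<url-encoded-base>/... targets any OpenAI-compatible server
    const [, base, tail = ''] = rest.match(/^\/([^/]+)(\/.*)?$/);
    return decodeURIComponent(base) + tail;
  }
  if (!(provider in UPSTREAMS)) throw new Error(`Unknown provider: ${provider}`);
  return UPSTREAMS[provider] + rest;
}
```

The `custom` branch decodes the URL-encoded base (e.g. `http%3A%2F%2Flocalhost%3A11434` becomes `http://localhost:11434`), which is how one route can cover Ollama, LM Studio, or any other OpenAI-compatible endpoint.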


Features

🔒 100% Private

All data stored in a local SQLite database at ~/.llm-observer/data.db. No telemetry. No third-party servers. Your prompts and API keys never leave your machine.
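A plausible shape for that database is a single request-log table. The column names below are guesses for illustration only; the package's real schema may differ.

```sql
-- Hypothetical sketch of the request log table, not the actual schema.
CREATE TABLE requests (
  id                INTEGER PRIMARY KEY,
  project           TEXT NOT NULL,
  provider          TEXT NOT NULL,
  model             TEXT NOT NULL,
  prompt_tokens     INTEGER,
  completion_tokens INTEGER,
  cost_usd          REAL,
  latency_ms        INTEGER,
  created_at        TEXT DEFAULT CURRENT_TIMESTAMP
);
```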

📊 Real-time dashboard

Live cost counter, request log with filters, latency tracking, model breakdown charts, and cost trajectory over 7 days — all at http://localhost:4001.

🛡️ Budget guard

Set a daily budget per project. When spend hits the limit, the proxy automatically blocks new requests before you wake up to a $1,000 bill.

```bash
llm-observer budget set 5.00 --daily
```
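The guard's decision amounts to a comparison between today's spend and the configured limit. A minimal sketch, assuming illustrative names and shapes (not the package's real API):

```javascript
// Decide whether the proxy should forward or block a request, given
// today's accumulated spend and the project's daily budget (both USD).
function budgetGuard({ spentTodayUsd, dailyBudgetUsd, estimatedCostUsd = 0 }) {
  if (dailyBudgetUsd == null) return { allowed: true };   // no budget set
  const projected = spentTodayUsd + estimatedCostUsd;
  if (projected <= dailyBudgetUsd) return { allowed: true };
  return {
    allowed: false,
    reason: `Daily budget $${dailyBudgetUsd.toFixed(2)} exceeded ` +
            `(spent $${spentTodayUsd.toFixed(2)})`,
  };
}
```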

🚨 Anomaly detection

Automatic spike detection — if your spend velocity exceeds 5× your rolling average, an alert fires via webhook (Slack, Discord, or any HTTP endpoint).
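The 5× rule above reduces to a comparison against a rolling average. A sketch of that check (the window size and bookkeeping here are illustrative assumptions; only the 5× factor comes from the README):

```javascript
// Flag a spend spike when the current window's spend exceeds `factor`
// times the rolling average of the previous windows' spend.
function isSpendSpike(previousWindows, currentSpend, factor = 5) {
  if (previousWindows.length === 0) return false;   // no baseline yet
  const avg = previousWindows.reduce((a, b) => a + b, 0) / previousWindows.length;
  return avg > 0 && currentSpend > factor * avg;
}
```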

💡 Cost optimizer

Identifies duplicate prompts, suggests cheaper model alternatives, and calculates potential monthly savings from switching specific call patterns.
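Duplicate-prompt detection can be sketched as grouping requests by a normalized prompt key; the normalization rule and savings formula below are illustrative assumptions, not the package's actual algorithm:

```javascript
// Group identical prompts to surface duplicate-call waste. Savings
// estimate: all repeats beyond the first call could have been cached.
function findDuplicatePrompts(requests) {
  const groups = new Map();
  for (const { prompt, costUsd } of requests) {
    const key = prompt.trim().replace(/\s+/g, ' ');   // normalize whitespace
    const g = groups.get(key) ?? { count: 0, totalCostUsd: 0 };
    g.count += 1;
    g.totalCostUsd += costUsd;
    groups.set(key, g);
  }
  return [...groups.entries()]
    .filter(([, g]) => g.count > 1)
    .map(([prompt, g]) => ({
      prompt,
      count: g.count,
      potentialSavingsUsd: g.totalCostUsd * (g.count - 1) / g.count,
    }));
}
```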

🔌 6 providers supported

OpenAI · Anthropic · Google Gemini · Mistral · Groq · Custom/Local (Ollama, LM Studio, any OpenAI-compatible endpoint)

📦 80+ models priced

Full pricing database for GPT-4o, GPT-4o-mini, o1, o3, Claude Opus/Sonnet/Haiku, Gemini 2.5 Pro/Flash, Mistral, Groq, DeepSeek, Llama, Qwen, and more.
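Cost lookup from such a database is a simple per-token multiplication. The price figures below are placeholders for illustration only; the package ships its own pricing data.

```javascript
// Per-million-token prices in USD. These numbers are illustrative
// placeholders, not the package's actual pricing database.
const PRICES = {
  'gpt-4o-mini':       { inputPerM: 0.15, outputPerM: 0.60 },
  'claude-3-5-haiku':  { inputPerM: 0.80, outputPerM: 4.00 },
};

function requestCostUsd(model, promptTokens, completionTokens) {
  const p = PRICES[model];
  if (!p) throw new Error(`No pricing for model: ${model}`);
  return (promptTokens * p.inputPerM + completionTokens * p.outputPerM) / 1e6;
}
```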


CLI commands

```bash
llm-observer start              # Start proxy + dashboard
llm-observer stop               # Stop all services
llm-observer status             # Show current status and today's spend

llm-observer stats              # Cost breakdown by model (today/week/month)
llm-observer logs               # Live tail of requests
llm-observer logs --provider openai --min-cost 0.01

llm-observer projects list      # List all projects
llm-observer projects create    # Create a new project (interactive)

llm-observer budget set 10.00   # Set $10/day budget on default project
llm-observer config view        # View current configuration

llm-observer export --format csv --range 30d   # Export last 30 days
llm-observer export --format json --range all

llm-observer activate <key>     # Activate Pro license
llm-observer upgrade            # View Pro plans and pricing
```

Dashboard pages

| Page | What it shows |
|---|---|
| Overview | Today's spend vs budget, request count, avg latency, error rate, 7-day cost chart |
| Live Traffic | Every request in real-time via SSE — provider, model, tokens, cost, latency, status |
| Insights | Cost optimizer suggestions, duplicate prompt detection, model downgrade opportunities |
| Projects | Multi-project cost isolation — separate budgets per app or environment |
| Alerts | Webhook alert rules for budget thresholds and anomaly spikes |
| Settings | API key management, proxy config, license activation |


Pricing

| Plan | Price | Features |
|---|---|---|
| Free | $0 forever | 1 project · 7-day log retention · Budget guard · Anomaly alerts |
| Pro | $19/mo | Unlimited projects · 90-day retention · Cost optimizer · CSV/PDF export · Priority support |
| Pro (India) | ₹1,499/mo | Same as Pro, billed via Razorpay |
| Team | $49/seat/mo | Everything in Pro + encrypted team sync + shared dashboard |

```bash
llm-observer upgrade            # View plans
llm-observer upgrade --india    # Indian pricing via Razorpay
llm-observer activate <key>     # Activate after purchase
```

Why not Helicone, Langfuse, or LangSmith?

| | LLM Observer | Helicone | Langfuse | LangSmith |
|---|---|---|---|---|
| Data stays local | ✅ Always | ❌ Cloud | ❌ Cloud | ❌ Cloud |
| No account required | ✅ | ❌ | ❌ | ❌ |
| Works offline | ✅ | ❌ | ❌ | ❌ |
| Your prompts exposed | Never | To their servers | To their servers | To their servers |
| Free tier | Unlimited local | Limited | Limited | Limited |

If you're working on anything sensitive — client data, proprietary prompts, internal tooling — LLM Observer is local-only by design: no account, no hosted service in the loop, and your data genuinely never leaves your machine.


Requirements

  • Node.js 18+
  • macOS / Linux / Windows

Configuration

Config stored at ~/.llm-observer/config.json.

```bash
llm-observer config view          # See all settings
llm-observer config set webhook_url https://hooks.slack.com/...
llm-observer config set proxy_port 4000
llm-observer config set dashboard_port 4001
```

Or set via environment variables:

```bash
PROXY_PORT=4000 DASHBOARD_PORT=4001 npx llm-observer start
```

Data & privacy

  • All data stored locally at ~/.llm-observer/data.db (SQLite)
  • Free tier: 7-day automatic log retention
  • Pro tier: 90-day retention
  • To delete all data: rm ~/.llm-observer/data.db
  • Zero telemetry. Zero analytics. Zero outbound connections except to your configured LLM provider.

License

MIT © Ranjit Behera