LLMFlow

See what your LLM calls cost. One command. No signup.

LLMFlow is a local observability tool for LLM applications. Point your SDK at it and see your costs, tokens, and latency in real time.

npx llmflow

Dashboard: localhost:3000 · Proxy: localhost:8080

[Screenshot: LLMFlow Dashboard]


Quick Start

1. Start LLMFlow

# Option A: npx (recommended)
npx llmflow

# Option B: Clone and run
git clone https://github.com/HelgeSverre/llmflow.git
cd llmflow && npm install && npm start

# Option C: Docker
docker run -p 3000:3000 -p 8080:8080 helgesverre/llmflow

2. Point Your SDK

# Python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8080/v1")

// JavaScript
const client = new OpenAI({ baseURL: "http://localhost:8080/v1" });

// PHP
$client = OpenAI::factory()->withBaseUri('http://localhost:8080/v1')->make();

3. View Dashboard

Open localhost:3000 to see your traces, costs, and token usage.
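
For a quick sanity check, a request like the following should show up as a trace. This is a minimal Python sketch; it assumes OPENAI_API_KEY is set so the proxy can forward the call, and the model name is only an example.

from openai import OpenAI

# Point the client at the LLMFlow proxy instead of api.openai.com.
# OPENAI_API_KEY is read from the environment by the OpenAI SDK.
client = OpenAI(base_url="http://localhost:8080/v1")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response.choices[0].message.content)
# The request, its token counts, and its cost now appear at localhost:3000.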


Who Is This For?

  • Solo developers building with OpenAI, Anthropic, etc.
  • Hobbyists who want to see what their AI projects cost
  • Anyone who doesn't want to pay for or set up a SaaS observability tool

Features

| Feature         | Description                                                |
| --------------- | ---------------------------------------------------------- |
| Cost Tracking   | Real-time pricing for 2000+ models                         |
| Request Logging | See every request/response with latency                    |
| Multi-Provider  | OpenAI, Anthropic, Gemini, Ollama, Groq, Mistral, and more |
| OpenTelemetry   | Accept traces from LangChain, LlamaIndex, etc.             |
| Zero Config     | Just run it, point your SDK, done                          |
| Local Storage   | SQLite database, no external services                      |


Supported Providers

Use path prefixes or the X-LLMFlow-Provider header:

| Provider     | URL                                   |
| ------------ | ------------------------------------- |
| OpenAI       | http://localhost:8080/v1 (default)    |
| Anthropic    | http://localhost:8080/anthropic/v1    |
| Gemini       | http://localhost:8080/gemini/v1       |
| Ollama       | http://localhost:8080/ollama/v1       |
| Groq         | http://localhost:8080/groq/v1         |
| Mistral      | http://localhost:8080/mistral/v1      |
| Azure OpenAI | http://localhost:8080/azure/v1        |
| Cohere       | http://localhost:8080/cohere/v1       |
| Together     | http://localhost:8080/together/v1     |
| OpenRouter   | http://localhost:8080/openrouter/v1   |
| Perplexity   | http://localhost:8080/perplexity/v1   |
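
For example, a minimal Python sketch routing a request through the Groq prefix with the OpenAI-compatible client. Assumptions: the Groq route accepts OpenAI-style chat requests (as Groq's own API does), GROQ_API_KEY is set so the proxy can forward the call, and the model name is only illustrative.

from openai import OpenAI

# Path-prefix routing: point the OpenAI-compatible client at the Groq URL above.
# The api_key is a placeholder; the proxy is assumed to attach GROQ_API_KEY
# from its own environment (see Configuration below).
client = OpenAI(base_url="http://localhost:8080/groq/v1", api_key="llmflow")

reply = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # illustrative model name
    messages=[{"role": "user", "content": "ping"}],
)

# Header routing (assumed equivalent): keep the default /v1 URL and select the
# provider via default_headers={"X-LLMFlow-Provider": "groq"} on the client.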


OpenTelemetry Support

If you're using LangChain, LlamaIndex, or other instrumented frameworks:

# Python - point OTLP exporter to LLMFlow
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(endpoint="http://localhost:3000/v1/traces")

// JavaScript
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

new OTLPTraceExporter({ url: "http://localhost:3000/v1/traces" });
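
The exporter still needs to be attached to a tracer provider before spans are sent; here is a minimal Python sketch using the standard OpenTelemetry SDK (not specific to LLMFlow):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export all spans to LLMFlow's OTLP endpoint on the dashboard port.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:3000/v1/traces"))
)
trace.set_tracer_provider(provider)

# Instrumented frameworks (LangChain, LlamaIndex, etc.) that use the global
# tracer provider will now send their traces to LLMFlow.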

Configuration

| Variable       | Default    | Description            |
| -------------- | ---------- | ---------------------- |
| PROXY_PORT     | 8080       | Proxy port             |
| DASHBOARD_PORT | 3000       | Dashboard port         |
| DATA_DIR       | ~/.llmflow | Data directory         |
| MAX_TRACES     | 10000      | Max traces to retain   |
| VERBOSE        | 0          | Enable verbose logging |

Set provider API keys as environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) if you want the proxy to forward requests.
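
For example, to change the ports, enable verbose logging, and let the proxy forward OpenAI requests (a sketch assuming the variables are read from the environment at startup):

# Custom ports, verbose logging, and an API key for request forwarding.
PROXY_PORT=9090 DASHBOARD_PORT=4000 VERBOSE=1 OPENAI_API_KEY=sk-... npx llmflow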


Advanced Features

For advanced usage, see the docs/ folder in the repository.


License

MIT © Helge Sverre