
@visibe.ai/node

v0.1.48

AI Agent Observability — Track OpenAI, LangChain, LangGraph, Bedrock, Vercel AI, Anthropic

Visibe SDK for Node.js

Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using LangChain, LangGraph, Vercel AI, Anthropic, AWS Bedrock, or direct OpenAI calls.




📦 Getting Started

1. Create an account

Sign up at app.visibe.ai and create a project.

2. Get an API key

In your project, go to Settings → API Keys and generate a new key. It will look like sk_live_....

3. Install the SDK

npm install @visibe.ai/node

4. Set your API key

export VISIBE_API_KEY=sk_live_your_api_key_here

Or in a .env file:

VISIBE_API_KEY=sk_live_your_api_key_here

5. Instrument your app

import { init } from '@visibe.ai/node'

init()

That's it. Every OpenAI, Anthropic, LangChain, LangGraph, Vercel AI, and Bedrock client created after this call is automatically traced — no other code changes needed.


🧩 Integrations

| Framework | Auto (init()) | Manual (instrument()) |
|-----------|:-:|:-:|
| OpenAI | ✅ | ✅ |
| Anthropic | ✅ | ✅ |
| LangChain | ✅ | ✅ |
| LangGraph | ✅ | ✅ |
| Vercel AI | ✅ | — |
| AWS Bedrock | ✅ | ✅ |

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.

OpenAI

import { init } from '@visibe.ai/node'
import OpenAI from 'openai'

init()

const client = new OpenAI()
const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
})
// Automatically traced — cost, tokens, duration, and content captured.

Streaming is also supported:

const stream = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Count to 5' }],
  stream: true,
})
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}
// Token usage and cost captured when the stream is exhausted.

Anthropic

import { init } from '@visibe.ai/node'
import Anthropic from '@anthropic-ai/sdk'

init()

const client = new Anthropic()
const response = await client.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 100,
  messages: [{ role: 'user', content: 'Hello!' }],
})
// Automatically traced.

LangChain

import { init } from '@visibe.ai/node'

init()

// require() AFTER init() so the instrumentation is already active
const { ChatOpenAI }      = require('@langchain/openai')
const { PromptTemplate }  = require('@langchain/core/prompts')
const { StringOutputParser } = require('@langchain/core/output_parsers')
const { RunnableSequence }   = require('@langchain/core/runnables')

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate('Summarize: {text}'),
  new ChatOpenAI({ model: 'gpt-4o-mini' }),
  new StringOutputParser(),
])

const result = await chain.invoke({ text: 'AI observability matters.' })
// Full chain traced — LLM calls, token counts, and duration captured.

You can also use the LangChainCallback directly for explicit control:

import { Visibe } from '@visibe.ai/node'
import { LangChainCallback } from '@visibe.ai/node/integrations/langchain'
import { ChatOpenAI } from '@langchain/openai'
import { HumanMessage } from '@langchain/core/messages'
import { randomUUID } from 'node:crypto'

const visibe   = new Visibe({ apiKey: process.env.VISIBE_API_KEY })
const traceId  = randomUUID()
const callback = new LangChainCallback({ visibe, traceId, agentName: 'my-agent' })

const model = new ChatOpenAI({ model: 'gpt-4o-mini', callbacks: [callback] })
await model.invoke([new HumanMessage('Hello!')])

LangGraph

import { init } from '@visibe.ai/node'

init()  // must come BEFORE graph compilation

const { StateGraph, END } = require('@langchain/langgraph')
const { ChatOpenAI }      = require('@langchain/openai')
const { HumanMessage }    = require('@langchain/core/messages')

const model = new ChatOpenAI({ model: 'gpt-4o-mini' })

const graph = new StateGraph({
  channels: { messages: { value: (x, y) => x.concat(y), default: () => [] } },
})
  .addNode('research', async (state) => ({
    messages: [await model.invoke([new HumanMessage('Research this topic')])],
  }))
  .addNode('summarise', async (state) => ({
    messages: [await model.invoke([new HumanMessage('Summarise the research')])],
  }))
  .addEdge('__start__', 'research')
  .addEdge('research', 'summarise')
  .addEdge('summarise', END)
  .compile()

await graph.invoke({ messages: [] })
// Each node's LLM calls traced, total cost and token counts rolled up per graph run.

Vercel AI

import { init } from '@visibe.ai/node'

init()  // must come BEFORE require('ai')

// require() AFTER init() so patchVercelAI() has replaced the exports
const { generateText }  = require('ai')
const { openai }        = require('@ai-sdk/openai')

const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Write a haiku about observability.',
})
// Automatically traced — provider, model, tokens, and cost captured.

streamText and generateObject are also automatically patched.

AWS Bedrock

import { init } from '@visibe.ai/node'
import { BedrockRuntimeClient, ConverseCommand } from '@aws-sdk/client-bedrock-runtime'

init()

const client = new BedrockRuntimeClient({ region: 'us-east-1' })
const response = await client.send(new ConverseCommand({
  modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
  messages: [{ role: 'user', content: [{ text: 'Hello!' }] }],
}))
// Automatically traced. Works with all models available via Bedrock —
// Claude, Nova, Llama, Mistral, and more.

Supports ConverseCommand, ConverseStreamCommand, InvokeModelCommand, and InvokeModelWithResponseStreamCommand.


⚙️ Configuration

import { init } from '@visibe.ai/node'

init({
  apiKey:       'sk_live_abc123',          // or set VISIBE_API_KEY env var
  frameworks:   ['openai', 'langgraph'],   // limit to specific frameworks
  contentLimit: 500,                       // max chars for LLM content in traces
  debug:        true,                      // enable debug logging
})

Options

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| apiKey | string | Your Visibe API key | VISIBE_API_KEY env var |
| apiUrl | string | Override API endpoint | https://api.visibe.ai |
| frameworks | string[] | Limit auto-instrumentation to specific frameworks | All detected |
| contentLimit | number | Max chars for LLM/tool content in spans | 1000 |
| debug | boolean | Enable debug logging | false |
| sessionId | string | Tag all traces with a session ID | — |

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| VISIBE_API_KEY | Your API key (required) | — |
| VISIBE_API_URL | Override API endpoint | https://api.visibe.ai |
| VISIBE_CONTENT_LIMIT | Max chars for LLM/tool content in spans | 1000 |
| VISIBE_DEBUG | Enable debug logging (1 to enable) | 0 |


📊 What Gets Tracked

| Metric | Description |
|--------|-------------|
| Cost | Total spend + per-call cost breakdown using current model pricing |
| Tokens | Input/output tokens per LLM call |
| Duration | Total time + time per step |
| Tools | Which tools were used, duration, success/failure |
| Errors | When and where things failed, with error type, message, and HTTP status |
| Spans | Full execution timeline with LLM calls, tool calls, agent starts, and errors |
| Model | Which model was used for each call |
| Provider | Which provider served the request (openai, anthropic, amazon, etc.) |


📖 API Reference

init()

Auto-instruments all detected AI framework clients. Call this once at the top of your application, before creating any clients.

import { init } from '@visibe.ai/node'

init()

All OpenAI, Anthropic, Bedrock, LangChain, LangGraph, and Vercel AI clients created after init() are automatically traced. No other code changes required.

Important: For LangChain, LangGraph, and Vercel AI, use require() (not import) after init() so that the module hook can patch the exports before your code runs.

instrument() / uninstrument()

Manually instrument a specific client instance. Useful when you don't want global auto-instrumentation or need to control which clients are traced.

import { Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'

const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const client = new OpenAI()

visibe.instrument(client, { name: 'my-agent' })
// Each LLM call on this client now creates its own trace.

visibe.uninstrument(client)
// Removes the instrumentation — client returns to normal behavior.

Supported client types: OpenAI, Anthropic, BedrockRuntimeClient. If you pass an unsupported object, a warning is logged so you know tracing won't be captured.

track()

Groups multiple LLM calls into a single named trace. Wraps a function — every instrumented call made inside it is captured under one trace with combined cost and token totals.

import { Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'

const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const client = new OpenAI()

const result = await visibe.track(client, 'my-conversation', async () => {
  const first = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'What is AI?' }],
  })

  const second = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Tell me more' }],
  })

  return second
})
// Both calls appear as spans under one trace named "my-conversation".

The client is auto-instrumented for the duration of the callback if it wasn't already. Errors thrown inside the callback are captured as error spans and re-thrown.

runWithSession()

Like track(), but doesn't require a specific client. Groups all already-instrumented LLM calls made inside the callback into one trace.

import { init, Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'
import Anthropic from '@anthropic-ai/sdk'

init()

const visibe = new Visibe({ apiKey: 'sk_live_abc123' })

await visibe.runWithSession('research-task', async () => {
  // Any instrumented client used in here — OpenAI, Anthropic,
  // Bedrock, LangChain — is captured under one trace.
  const openai = new OpenAI()
  await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'What is AI observability?' }],
  })

  const anthropic = new Anthropic()
  await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 100,
    messages: [{ role: 'user', content: 'Summarize that.' }],
  })
})
// Both calls grouped under the "research-task" trace.

This is the cleanest API when init() has already been called and all clients are pre-instrumented.

middleware()

Express/Connect/Fastify-compatible middleware that automatically creates one trace per HTTP request. Every LLM call made during a request is captured under that request's trace.

import express from 'express'
import OpenAI from 'openai'
import { init, Visibe } from '@visibe.ai/node'

init()  // auto-instrument clients created below

const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const client = new OpenAI()
const app = express()

app.use(express.json())
app.use(visibe.middleware())

app.post('/chat', async (req, res) => {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: req.body.messages,
  })
  res.json(response)
})
// Each POST /chat request creates a trace named "POST /chat" with all LLM spans inside.

Custom trace naming:

app.use(visibe.middleware({
  name: (req) => `${req.method} ${req.url}`,
}))

// Or a fixed name:
app.use(visibe.middleware({ name: 'api-gateway' }))

Concurrent requests are fully isolated — each request gets its own trace via AsyncLocalStorage, regardless of how many are in flight.
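The isolation mechanism is Node's standard AsyncLocalStorage. This standalone sketch (plain Node, no Visibe APIs) shows why it works: two concurrent "requests" each read back their own context after an await, no matter how their timers interleave:

```javascript
import { AsyncLocalStorage } from 'node:async_hooks'

const als = new AsyncLocalStorage()

// Simulated request handler: the id stored at entry survives the await,
// even while the other "request" is in flight.
async function handle(traceId) {
  return als.run({ traceId }, async () => {
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 10))
    return als.getStore().traceId  // always this request's own id
  })
}

const results = await Promise.all([handle('req-1'), handle('req-2')])
console.log(results)  // → [ 'req-1', 'req-2' ]
```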

shutdown()

Flushes all buffered spans and stops the SDK. Call this before your process exits if you want to guarantee all data is sent.

import { shutdown } from '@visibe.ai/node'

await shutdown()

The SDK also registers SIGTERM and SIGINT handlers automatically, so for typical web servers you don't need to call this manually.


🌐 Express / Fastify Middleware

The middleware() function works with any framework that supports the (req, res, next) pattern:

Express:

import express from 'express'
app.use(visibe.middleware())

Fastify (with @fastify/middie):

import Fastify from 'fastify'
import middie from '@fastify/middie'

const app = Fastify()
await app.register(middie)
app.use(visibe.middleware())

Each request gets its own trace. The trace captures:

  • All LLM calls made during the request
  • HTTP status code (4xx/5xx responses marked as failed)
  • Response body for error responses (when captured)
  • Total cost, tokens, and duration

📦 ESM & CommonJS

The SDK ships with both CommonJS and ESM builds. It works out of the box in either environment.

// ESM
import { init, shutdown } from '@visibe.ai/node'

// CommonJS
const { init, shutdown } = require('@visibe.ai/node')

Note: Module-level auto-patching (where init() replaces the constructor so new clients are auto-instrumented) works in CommonJS. In ESM, module namespaces are sealed, so you'll need to call visibe.instrument(client) manually after creating each client.


🛡️ Safety Guarantees

The SDK is designed to never interfere with your application:

  • No crashes. Every SDK operation is wrapped in try/catch. If something goes wrong internally, your app continues running normally.
  • No blocking. API calls to the Visibe backend are fire-and-forget. They don't add latency to your LLM calls.
  • No data loss. Spans are buffered and sent in batches every 2 seconds. Transient network failures in middleware are retried once automatically.
  • No leaks. The internal timer is unref()'d so it won't prevent your process from exiting.
  • No API key, no problem. If no API key is set, the SDK initializes silently and does nothing — no errors, no warnings, no network calls.


📃 License

MIT — see LICENSE for details.