@visibe.ai/node — v0.1.48
AI Agent Observability — Track OpenAI, LangChain, LangGraph, Bedrock, Vercel AI, Anthropic
Visibe SDK for Node.js
Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using LangChain, LangGraph, Vercel AI, Anthropic, AWS Bedrock, or direct OpenAI calls.
Table of Contents
- Getting Started
- Integrations
- Configuration
- What Gets Tracked
- API Reference
- Express / Fastify Middleware
- ESM & CommonJS
- Resources
📦 Getting Started
1. Create an account
Sign up at app.visibe.ai and create a project.
2. Get an API key
In your project, go to Settings → API Keys and generate a new key. It will look like sk_live_....
3. Install the SDK
npm install @visibe.ai/node
4. Set your API key
export VISIBE_API_KEY=sk_live_your_api_key_here
Or in a .env file:
VISIBE_API_KEY=sk_live_your_api_key_here
5. Instrument your app
import { init } from '@visibe.ai/node'
init()
That's it. Every OpenAI, Anthropic, LangChain, LangGraph, Vercel AI, and Bedrock client created after this call is automatically traced — no other code changes needed.
🧩 Integrations
| Framework | Auto (init()) | Manual (instrument()) |
|-----------|:-:|:-:|
| OpenAI | ✅ | ✅ |
| Anthropic | ✅ | ✅ |
| LangChain | ✅ | ✅ |
| LangGraph | ✅ | ✅ |
| Vercel AI | ✅ | — |
| AWS Bedrock | ✅ | ✅ |
Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
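Because these providers expose OpenAI-compatible endpoints, they can be traced through the OpenAI integration by pointing the client at the provider's base URL. A minimal sketch — the Groq endpoint, model name, and GROQ_API_KEY variable below are illustrative; substitute your provider's values:

```javascript
import { init } from '@visibe.ai/node'
import OpenAI from 'openai'

init()

// An OpenAI client pointed at an OpenAI-compatible endpoint (Groq shown here).
const groq = new OpenAI({
  baseURL: 'https://api.groq.com/openai/v1',
  apiKey: process.env.GROQ_API_KEY,
})

const response = await groq.chat.completions.create({
  model: 'llama-3.1-8b-instant',
  messages: [{ role: 'user', content: 'Hello!' }],
})
// Traced like any other OpenAI client.
```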
OpenAI
import { init } from '@visibe.ai/node'
import OpenAI from 'openai'
init()
const client = new OpenAI()
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }],
})
// Automatically traced — cost, tokens, duration, and content captured.
Streaming is also supported:
const stream = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Count to 5' }],
stream: true,
})
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}
// Token usage and cost captured when the stream is exhausted.
Anthropic
import { init } from '@visibe.ai/node'
import Anthropic from '@anthropic-ai/sdk'
init()
const client = new Anthropic()
const response = await client.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 100,
messages: [{ role: 'user', content: 'Hello!' }],
})
// Automatically traced.
LangChain
import { init } from '@visibe.ai/node'
init()
// require() AFTER init() so the instrumentation is already active
const { ChatOpenAI } = require('@langchain/openai')
const { PromptTemplate } = require('@langchain/core/prompts')
const { StringOutputParser } = require('@langchain/core/output_parsers')
const { RunnableSequence } = require('@langchain/core/runnables')
const chain = RunnableSequence.from([
PromptTemplate.fromTemplate('Summarize: {text}'),
new ChatOpenAI({ model: 'gpt-4o-mini' }),
new StringOutputParser(),
])
const result = await chain.invoke({ text: 'AI observability matters.' })
// Full chain traced — LLM calls, token counts, and duration captured.
You can also use the LangChainCallback directly for explicit control:
import { Visibe } from '@visibe.ai/node'
import { LangChainCallback } from '@visibe.ai/node/integrations/langchain'
import { ChatOpenAI } from '@langchain/openai'
import { HumanMessage } from '@langchain/core/messages'
import { randomUUID } from 'node:crypto'
const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const traceId = randomUUID()
const callback = new LangChainCallback({ visibe, traceId, agentName: 'my-agent' })
const model = new ChatOpenAI({ model: 'gpt-4o-mini', callbacks: [callback] })
await model.invoke([new HumanMessage('Hello!')])
LangGraph
import { init } from '@visibe.ai/node'
init() // must come BEFORE graph compilation
const { StateGraph, END } = require('@langchain/langgraph')
const { ChatOpenAI } = require('@langchain/openai')
const { HumanMessage } = require('@langchain/core/messages')
const model = new ChatOpenAI({ model: 'gpt-4o-mini' })
const graph = new StateGraph({
channels: { messages: { value: (x, y) => x.concat(y), default: () => [] } },
})
.addNode('research', async (state) => ({
messages: [await model.invoke([new HumanMessage('Research this topic')])],
}))
.addNode('summarise', async (state) => ({
messages: [await model.invoke([new HumanMessage('Summarise the research')])],
}))
.addEdge('__start__', 'research')
.addEdge('research', 'summarise')
.addEdge('summarise', END)
.compile()
await graph.invoke({ messages: [] })
// Each node's LLM calls traced; total cost and token counts rolled up per graph run.
Vercel AI
import { init } from '@visibe.ai/node'
init() // must come BEFORE require('ai')
// require() AFTER init() so patchVercelAI() has replaced the exports
const { generateText } = require('ai')
const { openai } = require('@ai-sdk/openai')
const result = await generateText({
model: openai('gpt-4o-mini'),
prompt: 'Write a haiku about observability.',
})
// Automatically traced — provider, model, tokens, and cost captured.
streamText and generateObject are also automatically patched.
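A streaming sketch, assuming the AI SDK v4 shape of streamText (it returns immediately, and result.textStream is an async iterable of text deltas; older SDK versions require awaiting streamText instead):

```javascript
import { init } from '@visibe.ai/node'
init() // must come BEFORE require('ai')

const { streamText } = require('ai')
const { openai } = require('@ai-sdk/openai')

const result = streamText({
  model: openai('gpt-4o-mini'),
  prompt: 'Stream a haiku about observability.',
})
// Print text deltas as they arrive.
for await (const delta of result.textStream) {
  process.stdout.write(delta)
}
```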
AWS Bedrock
import { init } from '@visibe.ai/node'
import { BedrockRuntimeClient, ConverseCommand } from '@aws-sdk/client-bedrock-runtime'
init()
const client = new BedrockRuntimeClient({ region: 'us-east-1' })
const response = await client.send(new ConverseCommand({
modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
messages: [{ role: 'user', content: [{ text: 'Hello!' }] }],
}))
// Automatically traced. Works with all models available via Bedrock —
// Claude, Nova, Llama, Mistral, and more.
Supports ConverseCommand, ConverseStreamCommand, InvokeModelCommand, and InvokeModelWithResponseStreamCommand.
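For the streaming case, a sketch using ConverseStreamCommand from the AWS SDK v3, where the response exposes an async-iterable stream of events and text arrives in contentBlockDelta events:

```javascript
import { init } from '@visibe.ai/node'
import { BedrockRuntimeClient, ConverseStreamCommand } from '@aws-sdk/client-bedrock-runtime'

init()

const client = new BedrockRuntimeClient({ region: 'us-east-1' })
const response = await client.send(new ConverseStreamCommand({
  modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
  messages: [{ role: 'user', content: [{ text: 'Hello!' }] }],
}))

// Iterate the event stream; only contentBlockDelta events carry text.
for await (const event of response.stream) {
  const text = event.contentBlockDelta?.delta?.text
  if (text) process.stdout.write(text)
}
```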
⚙️ Configuration
import { init } from '@visibe.ai/node'
init({
apiKey: 'sk_live_abc123', // or set VISIBE_API_KEY env var
frameworks: ['openai', 'langgraph'], // limit to specific frameworks
contentLimit: 500, // max chars for LLM content in traces
debug: true, // enable debug logging
})
Options
| Option | Type | Description | Default |
|--------|------|-------------|---------|
| apiKey | string | Your Visibe API key | VISIBE_API_KEY env var |
| apiUrl | string | Override API endpoint | https://api.visibe.ai |
| frameworks | string[] | Limit auto-instrumentation to specific frameworks | All detected |
| contentLimit | number | Max chars for LLM/tool content in spans | 1000 |
| debug | boolean | Enable debug logging | false |
| sessionId | string | Tag all traces with a session ID | — |
Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| VISIBE_API_KEY | Your API key (required) | — |
| VISIBE_API_URL | Override API endpoint | https://api.visibe.ai |
| VISIBE_CONTENT_LIMIT | Max chars for LLM/tool content in spans | 1000 |
| VISIBE_DEBUG | Enable debug logging (1 to enable) | 0 |
📊 What Gets Tracked
| Metric | Description |
|--------|-------------|
| Cost | Total spend + per-call cost breakdown using current model pricing |
| Tokens | Input/output tokens per LLM call |
| Duration | Total time + time per step |
| Tools | Which tools were used, duration, success/failure |
| Errors | When and where things failed, with error type, message, and HTTP status |
| Spans | Full execution timeline with LLM calls, tool calls, agent starts, and errors |
| Model | Which model was used for each call |
| Provider | Which provider served the request (openai, anthropic, amazon, etc.) |
📖 API Reference
init()
Auto-instruments all detected AI framework clients. Call this once at the top of your application, before creating any clients.
import { init } from '@visibe.ai/node'
init()
All OpenAI, Anthropic, Bedrock, LangChain, LangGraph, and Vercel AI clients created after init() are automatically traced. No other code changes required.
Important: For LangChain, LangGraph, and Vercel AI, use require() (not import) after init() so that the module hook can patch the exports before your code runs.
instrument() / uninstrument()
Manually instrument a specific client instance. Useful when you don't want global auto-instrumentation or need to control which clients are traced.
import { Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'
const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const client = new OpenAI()
visibe.instrument(client, { name: 'my-agent' })
// Each LLM call on this client now creates its own trace.
visibe.uninstrument(client)
// Removes the instrumentation — client returns to normal behavior.
Supported client types: OpenAI, Anthropic, BedrockRuntimeClient. If you pass an unsupported object, a warning is logged so you know tracing won't be captured.
track()
Groups multiple LLM calls into a single named trace. Wraps a function — every instrumented call made inside it is captured under one trace with combined cost and token totals.
import { Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'
const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const client = new OpenAI()
const result = await visibe.track(client, 'my-conversation', async () => {
const first = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'What is AI?' }],
})
const second = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Tell me more' }],
})
return second
})
// Both calls appear as spans under one trace named "my-conversation".
The client is auto-instrumented for the duration of the callback if it wasn't already. Errors thrown inside the callback are captured as error spans and re-thrown.
runWithSession()
Like track(), but doesn't require a specific client. Groups all already-instrumented LLM calls made inside the callback into one trace.
import { init, Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'
import Anthropic from '@anthropic-ai/sdk'
init()
const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
await visibe.runWithSession('research-task', async () => {
// Any instrumented client used in here — OpenAI, Anthropic,
// Bedrock, LangChain — is captured under one trace.
const openai = new OpenAI()
await openai.chat.completions.create({ model: 'gpt-4o-mini', messages: [...] })
const anthropic = new Anthropic()
await anthropic.messages.create({ model: 'claude-3-5-sonnet-20241022', ... })
})
// Both calls grouped under the "research-task" trace.
This is the cleanest API when init() has already been called and all clients are pre-instrumented.
middleware()
Express/Connect/Fastify-compatible middleware that automatically creates one trace per HTTP request. Every LLM call made during a request is captured under that request's trace.
import express from 'express'
import OpenAI from 'openai'
import { Visibe } from '@visibe.ai/node'
const visibe = new Visibe({ apiKey: 'sk_live_abc123' })
const client = new OpenAI()
const app = express()
app.use(visibe.middleware())
app.post('/chat', async (req, res) => {
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: req.body.messages,
})
res.json(response)
})
// Each POST /chat request creates a trace named "POST /chat" with all LLM spans inside.
Custom trace naming:
app.use(visibe.middleware({
name: (req) => `${req.method} ${req.url}`,
}))
// Or a fixed name:
app.use(visibe.middleware({ name: 'api-gateway' }))
Concurrent requests are fully isolated — each request gets its own trace via AsyncLocalStorage, regardless of how many are in flight.
shutdown()
Flushes all buffered spans and stops the SDK. Call this before your process exits if you want to guarantee all data is sent.
import { shutdown } from '@visibe.ai/node'
await shutdown()
The SDK also registers SIGTERM and SIGINT handlers automatically, so for typical web servers you don't need to call this manually.
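For short-lived scripts and cron jobs that exit before the batch interval fires, a try/finally sketch using the SDK's own init() and shutdown() guarantees the buffered spans are flushed even when the job throws:

```javascript
import { init, shutdown } from '@visibe.ai/node'
import OpenAI from 'openai'

init()
const client = new OpenAI()

try {
  await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'One-off job' }],
  })
} finally {
  // Flush buffered spans before the process exits.
  await shutdown()
}
```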
🌐 Express / Fastify Middleware
The middleware() function works with any framework that supports the (req, res, next) pattern:
Express:
import express from 'express'
app.use(visibe.middleware())
Fastify (with @fastify/middie):
import Fastify from 'fastify'
import middie from '@fastify/middie'
const app = Fastify()
await app.register(middie)
app.use(visibe.middleware())
Each request gets its own trace. The trace captures:
- All LLM calls made during the request
- HTTP status code (4xx/5xx responses marked as failed)
- Response body for error responses (when captured)
- Total cost, tokens, and duration
📦 ESM & CommonJS
The SDK ships with both CommonJS and ESM builds. It works out of the box in either environment.
// ESM
import { init, shutdown } from '@visibe.ai/node'
// CommonJS
const { init, shutdown } = require('@visibe.ai/node')
Note: Module-level auto-patching (where init() replaces the constructor so new clients are auto-instrumented) works in CommonJS. In ESM, module namespaces are sealed, so you'll need to call visibe.instrument(client) manually after creating each client.
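In ESM, that manual pattern uses the same instrument() API described above — a minimal sketch:

```javascript
import { Visibe } from '@visibe.ai/node'
import OpenAI from 'openai'

const visibe = new Visibe({ apiKey: process.env.VISIBE_API_KEY })
const client = new OpenAI()

// Auto-patching isn't available in ESM, so instrument each client explicitly.
visibe.instrument(client, { name: 'my-agent' })
// Calls on this client are now traced.
```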
🛡️ Safety Guarantees
The SDK is designed to never interfere with your application:
- No crashes. Every SDK operation is wrapped in try/catch. If something goes wrong internally, your app continues running normally.
- No blocking. API calls to the Visibe backend are fire-and-forget. They don't add latency to your LLM calls.
- No data loss. Spans are buffered and sent in batches every 2 seconds. Transient network failures in middleware are retried once automatically.
- No leaks. The internal timer is unref()'d so it won't prevent your process from exiting.
- No API key, no problem. If no API key is set, the SDK initializes silently and does nothing — no errors, no warnings, no network calls.
🔗 Resources
- visibe.ai — Product website
- app.visibe.ai — Dashboard (sign up, manage API keys, view traces)
- npm Package — Latest version on npm
📃 License
MIT — see LICENSE for details.
