
brokle-langchain

v0.3.0

LangChain.js integration with automatic Brokle tracing

Brokle LangChain.js Integration

Automatic tracing for LangChain.js applications with comprehensive observability for chains, LLMs, tools, and agents.

Features

  • Full LangChain Coverage: LLMs, chains, tools, agents, retrievers
  • Automatic Tracing: Drop-in callback handler
  • GenAI Conventions (1.28+): Fully compliant with the OTEL GenAI semantic conventions
  • Context Support: User ID, session ID, tags, metadata
  • Error Tracking: Automatic error recording
  • Nested Spans: Proper parent-child relationships
  • TypeScript Native: Full type safety

Installation

npm install brokle brokle-langchain langchain @opentelemetry/api

Quick Start

import { ChatOpenAI } from 'langchain/chat_models/openai';
import { PromptTemplate } from 'langchain/prompts';
import { LLMChain } from 'langchain/chains';
import { BrokleLangChainCallback } from 'brokle-langchain';
import { getClient } from 'brokle';

// 1. Initialize Brokle
const brokleClient = getClient({
  apiKey: process.env.BROKLE_API_KEY,
});

// 2. Create LangChain callback
const callback = new BrokleLangChainCallback({
  userId: 'user-123',
  sessionId: 'session-456',
  tags: ['production'],
});

// 3. Use with any LangChain component
const model = new ChatOpenAI({ modelName: 'gpt-4' });
const prompt = PromptTemplate.fromTemplate('What is {topic}?');
const chain = new LLMChain({ llm: model, prompt });

// 4. Run with callbacks - automatic tracing!
const result = await chain.invoke(
  { topic: 'artificial intelligence' },
  { callbacks: [callback] }
);

// 5. Flush before exit
await callback.flush();

What Gets Traced

LLM Calls

Every LLM call is automatically traced with:

Request Attributes:

  • gen_ai.provider.name = Provider (openai, anthropic, etc.)
  • gen_ai.operation.name = "chat"
  • gen_ai.request.model = Model name
  • gen_ai.input.messages = Prompt(s) as JSON

Response Attributes:

  • gen_ai.output.messages = Completion(s) as JSON
  • gen_ai.usage.input_tokens = Prompt tokens
  • gen_ai.usage.output_tokens = Completion tokens
  • brokle.usage.total_tokens = Total tokens
  • gen_ai.response.model = Actual model used

Span Name: chat {model}
Span Type: generation
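
For illustration, the attribute set on a single traced chat call might look roughly like this. The keys are the ones listed above; all values are hypothetical:

// Illustrative snapshot of one LLM span's attributes — values are made up
const exampleLlmSpanAttributes = {
  'gen_ai.provider.name': 'openai',
  'gen_ai.operation.name': 'chat',
  'gen_ai.request.model': 'gpt-4',
  'gen_ai.input.messages': JSON.stringify([{ role: 'user', content: 'What is AI?' }]),
  'gen_ai.output.messages': JSON.stringify([{ role: 'assistant', content: 'AI is…' }]),
  'gen_ai.usage.input_tokens': 12,
  'gen_ai.usage.output_tokens': 87,
  'brokle.usage.total_tokens': 99,
  'gen_ai.response.model': 'gpt-4-0613',
};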

Chains

Chain execution is traced with:

Attributes:

  • chain.type = Chain type (e.g., "llm_chain")
  • chain.input = Input data (JSON)
  • chain.output = Output data (JSON)
  • brokle.span_type = "span"

Span Name: chain {type}

Tools

Tool calls are traced with:

Attributes:

  • tool.name = Tool name
  • tool.input = Tool input
  • tool.output = Tool output
  • brokle.span_type = "tool"

Span Name: tool {name}
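
As a sketch of how a custom tool picks up these attributes, here is a hypothetical DynamicTool run with the callback attached. The tool name and logic are invented for illustration, and the example assumes the DynamicTool helper from langchain/tools:

import { DynamicTool } from 'langchain/tools';
import { BrokleLangChainCallback } from 'brokle-langchain';

const callback = new BrokleLangChainCallback({ tags: ['tools'] });

// Hypothetical custom tool — any tool invoked with the callback is traced the same way
const weatherTool = new DynamicTool({
  name: 'get_weather',
  description: 'Returns a canned weather string for a city',
  func: async (city) => `Sunny in ${city}`,
});

// Expected span: "tool get_weather" with tool.input / tool.output set
const output = await weatherTool.invoke('Berlin', { callbacks: [callback] });

await callback.flush();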

Advanced Usage

With User/Session Context

const callback = new BrokleLangChainCallback({
  userId: 'user-789',
  sessionId: 'session-abc',
  tags: ['customer-support', 'premium'],
  metadata: {
    tenant: 'acme-corp',
    region: 'us-east-1',
  },
});

const result = await chain.invoke(
  { input: 'How do I reset my password?' },
  { callbacks: [callback] }
);

All traces will include these context attributes for filtering and analysis.

Override Context Per-Request

You can override context values per request using metadata:

const callback = new BrokleLangChainCallback({
  userId: 'default-user',
  sessionId: 'default-session',
});

const result = await chain.invoke(
  { input: 'Question' },
  {
    callbacks: [callback],
    metadata: {
      brokleUserId: 'override-user-123',
      brokleSessionId: 'override-session-456',
    },
  }
);

With Agents

import { initializeAgentExecutorWithOptions } from 'langchain/agents';
import { Calculator } from 'langchain/tools/calculator';

const callback = new BrokleLangChainCallback({
  userId: 'user-123',
  tags: ['agent', 'tools'],
});

const tools = [new Calculator()];
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: 'zero-shot-react-description',
});

const result = await executor.invoke(
  { input: 'What is 25 * 4?' },
  { callbacks: [callback] }
);

// Agent traces include:
// - Main agent span
// - LLM calls (reasoning)
// - Tool calls (calculator)
// - Final answer
await callback.flush();

With Retrievers

import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';

const callback = new BrokleLangChainCallback({
  tags: ['retrieval', 'rag'],
});

const vectorStore = await MemoryVectorStore.fromTexts(
  ['Text 1', 'Text 2'],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

const retriever = vectorStore.asRetriever();

const docs = await retriever.getRelevantDocuments('query', {
  callbacks: [callback],
});

// Retrieval traced automatically
await callback.flush();

With ConversationChain

import { BufferMemory } from 'langchain/memory';
import { ConversationChain } from 'langchain/chains';

const callback = new BrokleLangChainCallback({
  sessionId: 'conversation-123',
});

const memory = new BufferMemory();
const chain = new ConversationChain({ llm: model, memory });

// First message
await chain.invoke(
  { input: 'Hi, my name is Alice' },
  { callbacks: [callback] }
);

// Second message (with context)
await chain.invoke(
  { input: 'What is my name?' },
  { callbacks: [callback] }
);

// Both messages traced with same session
await callback.flush();

Configuration Options

interface BrokleLangChainCallbackConfig {
  /** User ID for filtering (optional) */
  userId?: string;

  /** Session ID for filtering (optional) */
  sessionId?: string;

  /** Tags for categorization (optional) */
  tags?: string[];

  /** Custom metadata (optional) */
  metadata?: Record<string, unknown>;

  /** Enable debug logging (optional) */
  debug?: boolean;
}
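
Putting the options together, a fully specified callback might look like this (all values are illustrative):

const callback = new BrokleLangChainCallback({
  userId: 'user-123',
  sessionId: 'session-456',
  tags: ['production', 'checkout-flow'],
  metadata: { tenant: 'acme-corp' },
  debug: true,
});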

Debug Logging

const callback = new BrokleLangChainCallback({
  debug: true, // Enable debug logs
});

// Logs will show:
// [Brokle LangChain] LLM started: {runId} ({model})
// [Brokle LangChain] LLM ended: {runId}
// [Brokle LangChain] Chain started: {runId} ({type})
// etc.

Lifecycle Management

Flush Before Exit

// Serverless/Lambda
export const handler = async (event) => {
  const callback = new BrokleLangChainCallback();

  const result = await chain.invoke({ input: event.query }, { callbacks: [callback] });

  await callback.flush(); // Important: flush before return
  return result;
};

Cleanup on Error

const callback = new BrokleLangChainCallback();

try {
  const result = await chain.invoke({ input }, { callbacks: [callback] });
  await callback.flush();
} catch (error) {
  await callback.cleanup(); // End any open spans
  await callback.flush();
  throw error;
}

Nested Span Example

LangChain operations create proper parent-child span relationships:

Trace: user-query
├── Span: chain llm_chain
│   ├── Span: chat gpt-4 (LLM call)
│   └── Span: chat gpt-4 (LLM call)
└── Span: tool calculator

This allows you to see the complete execution flow in the Brokle dashboard.

Integration with Brokle Client

You can combine LangChain callbacks with the Brokle client for custom context:

import { getClient, Attrs } from 'brokle';

const brokle = getClient();
const callback = new BrokleLangChainCallback();

await brokle.traced('user-question', async (span) => {
  // Set custom attributes on parent span
  span.setAttribute(Attrs.USER_ID, 'user-999');
  span.setAttribute(Attrs.TAGS, JSON.stringify(['important']));

  // Run LangChain - creates child spans
  const result = await chain.invoke({ input: 'Question' }, { callbacks: [callback] });

  return result;
});

await callback.flush();

How It Works

Callback Lifecycle

LangChain Run Starts
  ↓
handleLLMStart() → Create OTEL span (keep open)
  ↓
LLM API Call
  ↓
handleLLMEnd() → Extract attributes → End span
  ↓
Span exported to Brokle backend (batched, compressed)

Key Implementation Details

  1. Manual Span Control: Spans are started in handleLLMStart() and kept open until handleLLMEnd()
  2. Correct Signature: Uses extraParams object for tags/metadata (LangChain.js pattern)
  3. Context Merging: Merges config context with run-level context
  4. Error Handling: Properly records exceptions in spans
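
As a heavily simplified sketch of this pattern (not the actual implementation), a LangChain callback handler that opens an OTEL span in handleLLMStart() and closes it in handleLLMEnd() could look like the following. The import path and attribute handling here are assumptions for illustration only:

import { BaseCallbackHandler } from 'langchain/callbacks';
import { trace, Span } from '@opentelemetry/api';

// Heavily simplified sketch — the real handler also merges context, extracts
// token usage, links parent/child spans via parentRunId, and records errors.
class SketchTracingHandler extends BaseCallbackHandler {
  name = 'sketch_tracing_handler';
  private tracer = trace.getTracer('sketch');
  private spans = new Map<string, Span>();

  async handleLLMStart(llm: any, prompts: string[], runId: string) {
    // Open a span and keep it open until the matching end callback fires
    const span = this.tracer.startSpan('chat');
    span.setAttribute('gen_ai.input.messages', JSON.stringify(prompts));
    this.spans.set(runId, span);
  }

  async handleLLMEnd(output: any, runId: string) {
    // Attach response attributes, then end the span so it can be exported
    const span = this.spans.get(runId);
    if (!span) return;
    span.setAttribute('gen_ai.output.messages', JSON.stringify(output.generations ?? []));
    span.end();
    this.spans.delete(runId);
  }
}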

Supported LangChain Components

  • LLMs: OpenAI, Anthropic, Google, Cohere, etc.
  • Chat Models: ChatOpenAI, ChatAnthropic, etc.
  • Chains: LLMChain, ConversationChain, etc.
  • Agents: All agent types
  • Tools: Calculator, Search, Custom tools
  • Retrievers: Vector stores, Document loaders
  • Memory: Buffer, Summary, etc.

Requirements

  • Node.js >= 18.0.0
  • LangChain >= 0.1.0
  • Brokle SDK >= 0.1.0

Limitations

  • Streaming responses: Final attributes may not be available
  • Some custom chains may require manual instrumentation (see the sketch below)
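
For custom chains that don't surface callbacks cleanly, one option (a sketch reusing the brokle.traced API shown above; myCustomChain and input are placeholders) is to wrap the call in your own span:

import { getClient } from 'brokle';

const brokle = getClient();

// Hypothetical custom chain wrapped in a manual span
const result = await brokle.traced('my-custom-chain', async (span) => {
  span.setAttribute('chain.type', 'custom');
  const output = await myCustomChain.run(input); // placeholders for your own chain and input
  span.setAttribute('chain.output', JSON.stringify(output));
  return output;
});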

License

MIT
