# @handit.ai/handit-ai
High-performance execution tracing for Node.js applications with zero-config setup.
## Installation

```bash
npm install @handit.ai/handit-ai
```

## Quick Start

### Basic Function Tracing
```js
const handit = require('@handit.ai/handit-ai');

// Configure (optional)
handit.configure({
  HANDIT_API_KEY: 'your-api-key',
  HANDIT_ENDPOINT: 'https://your-handit-endpoint.com/api/ingest/events'
});

// Trace any function
const processData = handit.tracing({ agent: "data-processor" }, async (data) => {
  // Your business logic here
  await new Promise(resolve => setTimeout(resolve, 100));
  return { processed: true, count: data.length };
});

// Use within a session
async function main() {
  await handit.session({ tag: "my-app" }, async () => {
    const result = await processData([1, 2, 3, 4, 5]);
    console.log('Result:', result);
  });
}

main().catch(console.error);
```

### Express.js Integration (Recommended Style)
```js
const express = require('express');
const handit = require('@handit.ai/handit-ai');

const app = express();

// Simple session management - no wrapper functions needed
app.post('/api/process', async (req, res) => {
  const sessionId = handit.startTracing({
    agent: 'process_endpoint',
    attrs: { path: req.path, method: req.method }
  });
  try {
    // Your business logic here - all HTTP calls auto-traced
    const result = await processRequest(req.body);
    res.json(result);
  } finally {
    handit.endTracing(); // Simple and clean!
  }
});

// Alternative: decorator style (if preferred)
const handleUser = handit.tracing({ agent: "user-handler" }, async (req, res) => {
  const user = await getUserById(req.params.id);
  res.json(user);
});
app.get('/users/:id', handleUser);

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
```

## Features
- ✅ Zero-config setup - Auto-instrumentation on import
- ✅ Simple session management - `startTracing()`/`endTracing()` for clean APIs
- ✅ Function tracing - `tracing()` decorator for any function
- ✅ LangChain/LangGraph auto-instrumentation - Automatic tracing of LLM calls
- ✅ HTTP instrumentation - Automatic axios, fetch, Express tracking (see the sketch after this list)
- ✅ Full content capture - Complete prompts, responses, and HTTP bodies
- ✅ Session management - Contextual tracing with session IDs
- ✅ High performance - Native Rust core with minimal overhead
- ✅ TypeScript support - Full type definitions included
- ✅ Async/await support - Works with promises and async functions
- ✅ Streaming support - Handles streaming LLM responses
- ✅ Event export - JSONL file output and HTTP endpoint support
- ✅ Extended timeouts - 50-second timeout for AI model calls
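
To illustrate the HTTP instrumentation bullet: outbound calls made while a tracing session is active are recorded without any wrapper code. The snippet below is a minimal sketch, assuming Node.js 18+ for the global `fetch`; the URL is a placeholder:

```js
const handit = require('@handit.ai/handit-ai');

async function checkUpstream() {
  handit.startTracing({ agent: 'upstream-check' });
  try {
    // The auto-instrumentation records this request and its response
    // (method, URL, bodies, timing) as events in the active session.
    const res = await fetch('https://example.com/health'); // placeholder URL
    return res.status;
  } finally {
    handit.endTracing();
  }
}

checkUpstream().catch(console.error);
```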
## API Reference

### Core Functions
```js
// Simple session management (recommended for APIs)
const sessionId = handit.startTracing({ agent: "session-name", attrs: { custom: "data" } });
try {
  // Your traced code here
} finally {
  handit.endTracing();
}

// Alternative aliases
handit.begin({ agent: "session-name" }); // Same as startTracing
handit.finish(); // Same as endTracing

// Context manager style (for complex flows)
await handit.session({ tag: "session-name" }, async () => {
  // Your traced code here
});

// Trace a function
const tracedFn = handit.tracing({ agent: "agent-name" }, yourFunction);

// Configure Handit
handit.configure({
  HANDIT_API_KEY: 'your-key',
  HANDIT_ENDPOINT: 'https://api.handit.ai/ingest',
  HANDIT_CAPTURE_ONLY_CWD: false
});
```

## LangChain/LangGraph Integration
LangChain and LangGraph calls are automatically instrumented when used within an active tracing session:
```js
const handit = require('@handit.ai/handit-ai');
const { ChatOpenAI } = require('@langchain/openai');
const { ChatAnthropic } = require('@langchain/anthropic');

handit.configure({
  HANDIT_OUTPUT_FILE: './llm_traces.jsonl'
});

// Example 1: Simple LangChain usage
app.post('/chat', async (req, res) => {
  handit.startTracing({ agent: 'chat-endpoint' });
  try {
    const llm = new ChatOpenAI({ modelName: 'gpt-4' });
    const result = await llm.invoke(req.body.message);
    // ✨ LLM call automatically traced with full prompt and response!
    res.json({ response: result.content });
  } finally {
    handit.endTracing();
  }
});

// Example 2: LangGraph workflow
app.post('/agent', async (req, res) => {
  handit.startTracing({ agent: 'langgraph-agent' });
  try {
    const graph = createMyLangGraphAgent();
    const result = await graph.invoke(req.body);
    // ✨ All graph nodes and LLM calls automatically traced!
    res.json({ result });
  } finally {
    handit.endTracing();
  }
});

// Example 3: Streaming responses
app.post('/stream', async (req, res) => {
  handit.startTracing({ agent: 'streaming-chat' });
  try {
    const llm = new ChatOpenAI({ streaming: true });
    const stream = await llm.stream(req.body.message);
    for await (const chunk of stream) {
      res.write(chunk.content);
    }
    // ✨ Complete streamed response captured automatically!
    res.end();
  } finally {
    handit.endTracing();
  }
});
```

Supported LangChain/LangGraph components:
- ✅ `@langchain/openai` - ChatOpenAI (invoke & stream)
- ✅ `@langchain/anthropic` - ChatAnthropic (invoke & stream)
- ✅ `@langchain/langgraph` or `langgraph` - CompiledGraph (invoke & stream)
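
As a complement to the ChatOpenAI examples above, ChatAnthropic is traced the same way when used inside a session. A minimal sketch; the model name is a placeholder and `ANTHROPIC_API_KEY` is assumed to be set in the environment:

```js
const handit = require('@handit.ai/handit-ai');
const { ChatAnthropic } = require('@langchain/anthropic');

async function summarize(text) {
  handit.startTracing({ agent: 'anthropic-summarizer' });
  try {
    const llm = new ChatAnthropic({ model: 'claude-3-5-sonnet-latest' }); // placeholder model name
    const result = await llm.invoke(`Summarize in one sentence: ${text}`);
    // ✨ Prompt, model parameters, and response are captured automatically.
    return result.content;
  } finally {
    handit.endTracing();
  }
}
```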
What gets captured:
- Full input messages/prompts
- Model name and parameters
- Complete responses (including streaming)
- Execution timing and duration
- Errors and exceptions
## Configuration Options

- `HANDIT_API_KEY`: Your Handit API key
- `HANDIT_ENDPOINT`: Custom endpoint URL (default: Handit cloud)
- `HANDIT_OUTPUT_FILE`: Local JSONL file path (e.g., './traces.jsonl')
- `HANDIT_CAPTURE_ONLY_CWD`: Only trace files in the current working directory
- `HANDIT_EXCLUDE`: Regex pattern for modules to exclude
- `HANDIT_INCLUDE`: Regex pattern for modules to include
- `HANDIT_REDACT`: Regex pattern for sensitive data redaction
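
A configuration combining several of these options might look like the sketch below. The option names are the ones listed above; the regex strings and file path are illustrative values only (the options are described as regex patterns, so plain strings are assumed here):

```js
const handit = require('@handit.ai/handit-ai');

handit.configure({
  HANDIT_API_KEY: process.env.HANDIT_API_KEY,
  HANDIT_OUTPUT_FILE: './traces.jsonl',
  HANDIT_CAPTURE_ONLY_CWD: true,          // only trace files under the current working directory
  HANDIT_EXCLUDE: 'node_modules|dist',    // illustrative regex: modules to leave untraced
  HANDIT_REDACT: 'api[_-]?key|password'   // illustrative regex: redact matching sensitive values
});
```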
## Automatic API Export (Batched)
Events are automatically sent to Handit's API in batches by default:
```js
// Zero configuration - events automatically sent to Handit cloud!
const handit = require('@handit.ai/handit-ai');

// Just use startTracing/endTracing - events auto-sent!
handit.startTracing({ agent: 'my-agent' });
// ... your code ...
handit.endTracing();
```

Custom endpoint:
```js
handit.configure({
  HANDIT_ENDPOINT: 'https://your-api.com/api/ingest/events', // Custom endpoint
  HANDIT_API_KEY: 'your-api-key',
  HANDIT_OUTPUT_FILE: './local_backup.jsonl' // Optional: also save locally
});
```

Disable API export (local only):
```js
handit.configure({
  HANDIT_ENDPOINT: null, // Disable API export
  HANDIT_OUTPUT_FILE: './local_only.jsonl'
});
```

How it works:

- ✅ Default endpoint - Handit cloud (https://handit-api-oss-299768392189.us-central1.run.app/api/ingest/events)
- ✅ Automatic batching - 200 events or 1-second intervals
- ✅ Gzip compression - Efficient network transfer
- ✅ Bearer token auth - Secure API access (with `HANDIT_API_KEY`)
- ✅ Local backup - Also saved to file (if `HANDIT_OUTPUT_FILE` is set)
- ✅ Auto-flush on exit - No data loss
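
For a concrete picture of what each flush amounts to, the sketch below reproduces the mechanics described above (JSONL, gzip, bearer token) in plain Node.js. It is an illustration only, not the library's internal exporter; the `Content-Type` header and the shape of the event objects are assumptions, and global `fetch` requires Node.js 18+:

```js
const { gzipSync } = require('node:zlib');

// Illustrative sketch of one batched upload: serialize events as JSONL,
// gzip the payload, and POST it with a bearer token.
async function uploadBatch(events, endpoint, apiKey) {
  const jsonl = events.map((event) => JSON.stringify(event)).join('\n');
  const body = gzipSync(Buffer.from(jsonl));

  const res = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Encoding': 'gzip',
      'Content-Type': 'application/x-ndjson' // assumed content type for JSONL
    },
    body
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}
```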
Advanced Configuration (optional environment variables):
```js
process.env.HANDIT_FLUSH_EVERY_EVENTS = '500';    // Batch size (default: 200)
process.env.HANDIT_FLUSH_EVERY_MS = '2000';       // Time interval (default: 1000ms)
process.env.HANDIT_MAX_BUFFER_EVENTS = '20000';   // Max buffer (default: 10000)
process.env.HANDIT_MAX_BUFFER_BYTES = '16777216'; // Max bytes (default: 8MB)
process.env.HANDIT_HTTP_TIMEOUT = '60';           // HTTP timeout in seconds (default: 30)
```

Batch Behavior:
- Events are buffered in memory and sent in batches
- Automatic flush triggers:
  - Every 200 events (configurable)
  - Every 1 second (configurable)
  - When the buffer reaches 10,000 events or 8MB
  - On process exit (SIGINT, SIGTERM, exit)
- Each HTTP request contains ~512KB of gzipped JSONL
- Events are also written to a local file for durability (if configured)
## Event Types
The library captures several types of events:
- session_start: Session initialization
- call: Function call with arguments
- return: Function return with values and duration
- exception: Error/exception events
- http_request: Outbound HTTP requests
- http_response: HTTP responses with timing
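
To give a rough sense of the shape of these events, the sketch below shows what a `call`/`return` pair in the JSONL output might contain. The field names are assumptions made for illustration, not the exact Handit schema:

```js
// Illustrative only - field names are assumptions, not the exact Handit schema.
// Each line of the JSONL output is one JSON event object.
const exampleCall = {
  type: 'call',
  agent: 'data-processor',
  function: 'processData',   // assumed field name
  args: [[1, 2, 3, 4, 5]]
};

const exampleReturn = {
  type: 'return',
  agent: 'data-processor',
  function: 'processData',
  value: { processed: true, count: 5 },
  duration_ms: 102           // duration, per the event description above
};
```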
## Architecture
Built with a hybrid Rust + JavaScript architecture:
- Rust Core: High-performance event capture and processing
- JavaScript API: Familiar Node.js interface and auto-instrumentation
- Native Bindings: Zero-copy data transfer between JS and Rust
- JSONL Export: Structured event logging for analysis
## Examples

See the `/examples` directory for complete working examples:

- `basic_demo.js` - Simple function tracing
- `express_demo.js` - Express.js server with tracing
- `http_demo.js` - HTTP client instrumentation
- `langchain_langgraph_demo.js` - LangChain/LangGraph auto-instrumentation
- `openai_demo.js` - OpenAI API tracing
## Requirements
- Node.js >= 16
- Supported platforms: macOS, Linux, Windows
## License
Apache-2.0
## Support
- GitHub: handit-ai/handit-full-tracer
- Documentation: docs.handit.ai
