risicare
v0.3.0
AI agent observability and self-healing for Node.js and TypeScript.
Monitor your AI agents in production. Trace every LLM call, detect errors automatically, and get AI-generated fixes — with 3 lines of setup.
Quickstart
npm install risicare

import { init, agent, shutdown } from 'risicare';
import { patchOpenAI } from 'risicare/openai';
import OpenAI from 'openai';
// 1. Initialize
init({
apiKey: 'rsk-...',
endpoint: 'https://app.risicare.ai',
});
// 2. Patch your LLM client
const openai = patchOpenAI(new OpenAI());
// 3. Wrap your agent — all LLM calls inside are traced automatically
const myAgent = agent({ name: 'research-agent' }, async (query: string) => {
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: query }],
});
return response.choices[0].message.content;
});
// Run it — traces appear in your dashboard instantly
const result = await myAgent('What is quantum computing?');
await shutdown();

That's it. Your agent's LLM calls, latency, token usage, and costs now appear in the Risicare dashboard.
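Conceptually, agent() behaves like a higher-order function that opens a span around each invocation and records it when the call settles. A simplified model of that idea (illustrative only; the traceWrap helper and spans buffer below are hypothetical, not part of the risicare API):

```typescript
// Simplified model of a tracing wrapper: time each call and buffer a span.
// Hypothetical sketch, not risicare's real internals.
type Span = { name: string; durationMs: number; failed: boolean };

const spans: Span[] = [];

function traceWrap<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  return async (...args: A): Promise<R> => {
    const start = Date.now();
    let failed = false;
    try {
      return await fn(...args);
    } catch (err) {
      failed = true;
      throw err;
    } finally {
      // The real SDK exports spans out of band; here we just buffer them.
      spans.push({ name, durationMs: Date.now() - start, failed });
    }
  };
}
```

Wrapping is transparent to callers: the wrapped function keeps its signature and return value, which is why adding tracing requires no changes at call sites.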
Features
- 12 LLM providers — OpenAI, Anthropic, Google, Mistral, Groq, Cohere, Together, Ollama, HuggingFace, Cerebras, Bedrock, Vercel AI
- 4 framework integrations — LangChain, LangGraph, Instructor, LlamaIndex
- Self-healing — Automatic error diagnosis and AI-generated fix suggestions
- Evaluation scores — Rate agent quality with score() and 13 built-in scorers
- Streaming support — tracedStream() for async iterator tracing
- Context propagation — Automatic across async/await, Promise, setTimeout, EventEmitter
- Zero runtime dependencies — No bloat in your node_modules
- Dual CJS/ESM — Works with require() and import
- Full TypeScript — Strict types and IntelliSense out of the box
- Non-blocking — Async batch export with circuit breaker and retry
- Zero overhead when disabled — Frozen NOOP_SPAN singleton, no allocations
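Context propagation of this kind is typically built on Node's AsyncLocalStorage, which keeps a store attached to the async call chain across await boundaries, timers, and emitters. A minimal sketch of the mechanism (the withTrace and currentTraceId helpers are illustrative names, not risicare exports):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Each store holds the active trace id; AsyncLocalStorage keeps it bound to
// the async call chain, so no explicit argument-passing is needed.
const traceContext = new AsyncLocalStorage<{ traceId: string }>();

function withTrace<R>(traceId: string, fn: () => Promise<R>): Promise<R> {
  return traceContext.run({ traceId }, fn);
}

function currentTraceId(): string | undefined {
  // Returns undefined outside any withTrace() scope.
  return traceContext.getStore()?.traceId;
}
```

Because the store follows the async context rather than lexical scope, code deep inside a setTimeout callback or an awaited helper can still read the trace id without it being threaded through every function signature.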
LLM Providers
import { patchOpenAI } from 'risicare/openai';
import { patchAnthropic } from 'risicare/anthropic';
import { patchGoogle } from 'risicare/google';
// ... and 9 more
const openai = patchOpenAI(new OpenAI());
// Every call is now traced — model, tokens, latency, cost

All 12 providers:
openai · anthropic · google · mistral · groq · cohere · together · ollama · huggingface · cerebras · bedrock · vercel-ai
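Conceptually, a patch helper rebinds a client method so each call is measured before the original result is returned unchanged. A generic sketch of that pattern (the complete method and calls buffer are hypothetical stand-ins for a real provider API, not risicare's code):

```typescript
type CallRecord = { provider: string; durationMs: number };
const calls: CallRecord[] = [];

// Wrap a client's method so every call is timed; callers see the original
// return value unchanged. Hypothetical sketch of the patching pattern.
function patchClient<C extends { complete(prompt: string): Promise<string> }>(
  provider: string,
  client: C,
): C {
  const original = client.complete.bind(client);
  client.complete = async (prompt: string) => {
    const start = Date.now();
    try {
      return await original(prompt);
    } finally {
      calls.push({ provider, durationMs: Date.now() - start });
    }
  };
  return client;
}
```

Because the patched method preserves the original signature, existing call sites need no changes; this is why a single patchOpenAI(new OpenAI()) line is enough to trace every subsequent call.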
Framework Integrations
// LangChain
import { RisicareCallbackHandler } from 'risicare/langchain';
const handler = new RisicareCallbackHandler();
await chain.invoke(input, { callbacks: [handler] });
// LangGraph
import { instrumentLangGraph } from 'risicare/langgraph';
const tracedGraph = instrumentLangGraph(compiledGraph);
// Instructor
import { patchInstructor } from 'risicare/instructor';
const client = patchInstructor(instructor);
// LlamaIndex
import { RisicareLlamaIndexHandler } from 'risicare/llamaindex';

Core API
import {
init, shutdown, // Lifecycle
agent, session, // Identity & grouping
traceThink, traceDecide, traceAct, // Decision phases
reportError, score, // Self-healing & evaluation
tracedStream, // Streaming
} from 'risicare';
init({ apiKey, endpoint }) // Initialize SDK
agent({ name }, fn) // Wrap function with agent identity
session({ sessionId, userId }, fn) // Group traces into user sessions
traceThink('analyze', async () => {...}) // Tag reasoning phase
traceDecide('choose', async () => {...}) // Tag decision phase
traceAct('execute', async () => {...}) // Tag action phase
reportError(error) // Report caught errors for diagnosis
score(traceId, 'quality', 0.92) // Record evaluation score [0.0-1.0]
tracedStream(asyncIterable, 'stream') // Trace async iterators
await shutdown() // Flush pending spans and close

Self-Healing
When your agent fails, Risicare automatically:
- Classifies the error (154 codes across TOOL, MEMORY, REASONING, OUTPUT, etc.)
- Diagnoses the root cause using AI analysis
- Generates a fix you can review and apply
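Step 1 can be pictured as mapping an error onto one of the categories above. A toy sketch (the category names come from this README; the matching rules below are invented for illustration and are far cruder than 154 real codes):

```typescript
// Toy classifier: bucket an error into a coarse category by message text.
// The real pipeline uses 154 codes; these string checks are illustrative only.
type ErrorCategory = 'TOOL' | 'MEMORY' | 'REASONING' | 'OUTPUT' | 'UNKNOWN';

function classifyError(error: Error): ErrorCategory {
  const msg = error.message.toLowerCase();
  if (msg.includes('tool') || msg.includes('function call')) return 'TOOL';
  if (msg.includes('memory') || msg.includes('context window')) return 'MEMORY';
  if (msg.includes('loop') || msg.includes('contradiction')) return 'REASONING';
  if (msg.includes('parse') || msg.includes('schema')) return 'OUTPUT';
  return 'UNKNOWN';
}
```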
try {
await myAgent(input);
} catch (error) {
reportError(error); // Triggers automatic diagnosis pipeline
}

Decision Phases
Structure your traces to see how your agent thinks, decides, and acts:
const myAgent = agent({ name: 'planner', role: 'coordinator' }, async (input) => {
const analysis = await traceThink('analyze', async () => {
return await openai.chat.completions.create({ /* ... */ });
});
const decision = await traceDecide('choose-action', async () => {
return pickBestAction(analysis);
});
return await traceAct('execute', async () => {
return executeAction(decision);
});
});

Sessions
Group traces from the same user conversation:
const result = await session(
{ sessionId: 'sess-abc123', userId: 'user-456' },
() => myAgent(userMessage)
);

Requirements
- Node.js 18+
- TypeScript 5.0+ (optional, types included)
