
risicare

v0.3.0

Published

AI agent observability and self-healing for Node.js — trace LLM calls, detect errors, get AI-generated fixes

Readme

risicare

AI agent observability and self-healing for Node.js and TypeScript.

Monitor your AI agents in production. Trace every LLM call, detect errors automatically, and get AI-generated fixes — with 3 lines of setup.

Quickstart

npm install risicare

import { init, agent, shutdown } from 'risicare';
import { patchOpenAI } from 'risicare/openai';
import OpenAI from 'openai';

// 1. Initialize
init({
  apiKey: 'rsk-...',
  endpoint: 'https://app.risicare.ai',
});

// 2. Patch your LLM client
const openai = patchOpenAI(new OpenAI());

// 3. Wrap your agent — all LLM calls inside are traced automatically
const myAgent = agent({ name: 'research-agent' }, async (query: string) => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: query }],
  });
  return response.choices[0].message.content;
});

// Run it — traces appear in your dashboard instantly
const result = await myAgent('What is quantum computing?');
await shutdown();

That's it. Your agent's LLM calls, latency, token usage, and costs now appear in the Risicare dashboard.

Features

  • 12 LLM providers — OpenAI, Anthropic, Google, Mistral, Groq, Cohere, Together, Ollama, HuggingFace, Cerebras, Bedrock, Vercel AI
  • 4 framework integrations — LangChain, LangGraph, Instructor, LlamaIndex
  • Self-healing — Automatic error diagnosis and AI-generated fix suggestions
  • Evaluation scores — Rate agent quality with score() and 13 built-in scorers
  • Streaming support — tracedStream() for async iterator tracing
  • Context propagation — Automatic across async/await, Promise, setTimeout, EventEmitter
  • Zero runtime dependencies — No bloat in your node_modules
  • Dual CJS/ESM — Works with require() and import
  • Full TypeScript — Strict types and IntelliSense out of the box
  • Non-blocking — Async batch export with circuit breaker and retry
  • Zero overhead when disabled — Frozen NOOP_SPAN singleton, no allocations
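
The "zero overhead when disabled" bullet refers to the frozen no-op span technique. Here is an illustrative TypeScript sketch of the general pattern — the interface and names below are invented for illustration, not Risicare's actual internals:

```typescript
// Illustrative sketch of the no-op span pattern (hypothetical API, not Risicare's).
interface Span {
  setAttribute(key: string, value: unknown): void;
  end(): void;
}

// One frozen singleton: when tracing is disabled, every startSpan() call
// returns this same object, so no per-trace allocations ever happen.
const NOOP_SPAN: Span = Object.freeze({
  setAttribute(): void {},
  end(): void {},
});

function startSpan(enabled: boolean, name: string): Span {
  if (!enabled) return NOOP_SPAN; // same instance every time — zero allocations
  const attributes: Record<string, unknown> = { name };
  return {
    setAttribute(key: string, value: unknown): void {
      attributes[key] = value; // record attributes for export
    },
    end(): void {
      // a real implementation would enqueue the span for batch export here
    },
  };
}
```

Because calling code always talks to the `Span` interface, the enabled and disabled paths are indistinguishable to the caller, and the disabled path costs only a couple of cheap method dispatches.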

LLM Providers

import { patchOpenAI } from 'risicare/openai';
import { patchAnthropic } from 'risicare/anthropic';
import { patchGoogle } from 'risicare/google';
// ... and 9 more

const openai = patchOpenAI(new OpenAI());
// Every call is now traced — model, tokens, latency, cost

All 12 providers:

openai · anthropic · google · mistral · groq · cohere · together · ollama · huggingface · cerebras · bedrock · vercel-ai

Framework Integrations

// LangChain
import { RisicareCallbackHandler } from 'risicare/langchain';
const handler = new RisicareCallbackHandler();
await chain.invoke(input, { callbacks: [handler] });

// LangGraph
import { instrumentLangGraph } from 'risicare/langgraph';
const tracedGraph = instrumentLangGraph(compiledGraph);

// Instructor
import { patchInstructor } from 'risicare/instructor';
const client = patchInstructor(instructor);

// LlamaIndex
import { RisicareLlamaIndexHandler } from 'risicare/llamaindex';

Core API

import {
  init, shutdown,                         // Lifecycle
  agent, session,                         // Identity & grouping
  traceThink, traceDecide, traceAct,      // Decision phases
  reportError, score,                     // Self-healing & evaluation
  tracedStream,                           // Streaming
} from 'risicare';

init({ apiKey, endpoint })                // Initialize SDK
agent({ name }, fn)                       // Wrap function with agent identity
session({ sessionId, userId }, fn)        // Group traces into user sessions
traceThink('analyze', async () => {...})  // Tag reasoning phase
traceDecide('choose', async () => {...})  // Tag decision phase
traceAct('execute', async () => {...})    // Tag action phase
reportError(error)                        // Report caught errors for diagnosis
score(traceId, 'quality', 0.92)           // Record evaluation score [0.0-1.0]
tracedStream(asyncIterable, 'stream')     // Trace async iterators
await shutdown()                          // Flush pending spans and close

Self-Healing

When your agent fails, Risicare automatically:

  1. Classifies the error (154 codes across TOOL, MEMORY, REASONING, OUTPUT, etc.)
  2. Diagnoses the root cause using AI analysis
  3. Generates a fix you can review and apply

try {
  await myAgent(input);
} catch (error) {
  reportError(error); // Triggers automatic diagnosis pipeline
}
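
Classification is the first step of that pipeline: the error message is matched against a taxonomy of codes grouped by category. The sketch below shows the general idea with a rule-based matcher — the specific codes and patterns are invented for illustration; Risicare's real taxonomy has 154 codes:

```typescript
// Hypothetical classifier sketch — codes and patterns are invented examples,
// not Risicare's actual 154-code taxonomy.
type ErrorCategory = 'TOOL' | 'MEMORY' | 'REASONING' | 'OUTPUT';

const RULES: Array<{ pattern: RegExp; code: string; category: ErrorCategory }> = [
  { pattern: /tool .* not found/i, code: 'TOOL_001', category: 'TOOL' },
  { pattern: /context (length|window)/i, code: 'MEMORY_002', category: 'MEMORY' },
  { pattern: /invalid json/i, code: 'OUTPUT_003', category: 'OUTPUT' },
];

// Returns the first matching code, or null for unclassified errors,
// which would fall through to generic AI diagnosis.
function classify(error: Error): { code: string; category: ErrorCategory } | null {
  for (const rule of RULES) {
    if (rule.pattern.test(error.message)) {
      return { code: rule.code, category: rule.category };
    }
  }
  return null;
}
```

Once an error carries a stable code, downstream diagnosis can be specialized per category (e.g. TOOL errors point at integration config, MEMORY errors at context-window management).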

Decision Phases

Structure your traces to see how your agent thinks, decides, and acts:

const myAgent = agent({ name: 'planner', role: 'coordinator' }, async (input) => {
  const analysis = await traceThink('analyze', async () => {
    return await openai.chat.completions.create({ /* ... */ });
  });

  const decision = await traceDecide('choose-action', async () => {
    return pickBestAction(analysis);
  });

  return await traceAct('execute', async () => {
    return executeAction(decision);
  });
});

Sessions

Group traces from the same user conversation:

const result = await session(
  { sessionId: 'sess-abc123', userId: 'user-456' },
  () => myAgent(userMessage)
);

Requirements

  • Node.js 18+
  • TypeScript 5.0+ (optional, types included)

Documentation

License

MIT