
coolhand-node

v0.1.1

Monitor and log LLM API calls for analysis

Coolhand Node.js Monitor

Monitor and log LLM API calls from multiple providers (OpenAI, Anthropic, Google AI, Cohere, Hugging Face, and more) to the Coolhand analytics platform.

Installation

npm install coolhand-node

Getting Started

  1. Get API Key: Visit coolhandlabs.com and get an API key
  2. Install: npm install coolhand-node
  3. Initialize: Add require('coolhand-node/auto-monitor') to your main file
  4. Configure: Set COOLHAND_API_KEY in your environment variables
  5. Deploy: Your AI calls are now automatically monitored!

Quick Start

Option 1: Universal Global Monitoring (Recommended)

🔥 RECOMMENDED - Zero Configuration AI Monitoring

Note: Global monitoring works in Node.js server environments. For React frontend apps, see our React Integration Guide.

Set it and forget it! Monitor ALL AI API calls across your entire application with just one line of code, so you'll never be surprised by new LLM calls added to your production codebase.

// Add this ONE line at the top of your main application file
require('coolhand-node/auto-monitor');

// That's it! ALL AI API calls are now automatically monitored:
// ✅ OpenAI SDK calls
// ✅ LangChain operations
// ✅ Anthropic API calls
// ✅ Custom AI libraries
// ✅ Direct fetch/axios requests to AI APIs
// ✅ ANY library making AI API calls

// NO code changes needed in your existing services!

Environment Variables:

# .env
COOLHAND_API_KEY=your_api_key_here
COOLHAND_DEBUG=false  # Set to true for debug mode

Or manual initialization:

import { initializeGlobalMonitoring } from 'coolhand-node';

// Initialize once at application startup
initializeGlobalMonitoring({
  apiKey: 'your-api-key',
  debug: false
});

// Now ALL outbound AI API calls are automatically monitored

✨ Why Global Monitoring is Recommended:

  • 🚫 Zero refactoring - No code changes to existing services
  • 📊 Complete coverage - Monitors ALL AI libraries automatically
  • 🔒 Security built-in - Automatic credential sanitization
  • ⚡ Performance optimized - Negligible overhead
  • 🛡️ Future-proof - Automatically captures new AI calls added by your team

Option 2: Instance-Based Monitoring (Explicit Control)

For cases where you need explicit control over which AI calls are monitored:

const Coolhand = require('coolhand-node');

// Initialize the monitor
const monitor = new Coolhand({
    apiKey: 'your-api-key',
    debug: false  // Enable debug mode if needed
});

Feedback API

Collect feedback on LLM responses to improve model performance:

import { Coolhand } from 'coolhand-node';

const coolhand = new Coolhand({
  apiKey: 'your-api-key'
});

// Create feedback for an LLM response
const feedback = await coolhand.createFeedback({
  llm_request_log_id: 123,
  llm_provider_unique_id: 'req_xxxxxxx',
  client_unique_id: 'workorder-chat-456',
  creator_unique_id: 'user-789',
  original_output: 'Here is the original LLM response!',
  revised_output: 'Here is the human edit of the original LLM response.',
  explanation: 'Tone of the original response read like AI-generated open source README docs',
  like: true,
});

Field Guide: All fields are optional, but here's how to get the best results:

Matching Fields

  • llm_request_log_id 🎯 Exact Match - ID from the Coolhand API response when the original LLM request was logged. Provides exact matching.
  • llm_provider_unique_id 🎯 Exact Match - The x-request-id from the LLM API response (e.g., "req_xxxxxxx")
  • original_output 🔍 Fuzzy Match - The original LLM response text. Provides fuzzy matching but isn't 100% reliable.
  • client_unique_id 🔗 Your Internal Matcher - Connect to an identifier from your system for internal matching

Quality Data

  • revised_output ⭐ Best Signal - End user revision of the LLM response. The highest-value data for improving quality scores.
  • explanation 💬 Medium Signal - End user explanation of why the response was good or bad. Valuable qualitative data.
  • like 👍 Low Signal - Boolean like/dislike. Lower quality signal but easy for users to provide.
  • creator_unique_id 👤 User Tracking - Unique ID to match feedback to the end user who created it
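Putting the guide into practice, here is a minimal payload that uses only your own identifiers plus a like flag. The buildMinimalFeedback helper is hypothetical (not part of coolhand-node); only the field names come from the Feedback API above.

```javascript
// Hypothetical helper (not part of coolhand-node): assemble the smallest
// useful feedback payload when all you have is your own request ID and a
// thumbs-up/down from the end user.
function buildMinimalFeedback({ clientId, userId, like }) {
  return {
    client_unique_id: clientId,   // 🔗 your internal matcher
    creator_unique_id: userId,    // 👤 who gave the feedback
    like,                         // 👍 low signal, but cheap to collect
  };
}

const payload = buildMinimalFeedback({
  clientId: 'workorder-chat-456',
  userId: 'user-789',
  like: false,
});

// await coolhand.createFeedback(payload); // using the client from the example above
```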

Framework Integration

📚 Framework Integration Guide - Complete documentation for all supported frameworks

Supported Frameworks: Works with any Node.js framework (Express.js, NestJS, Fastify, Koa, AWS Lambda, Vercel Functions) and is extensively tested with the Next.js/T3 Stack.


Configuration Options

Global Monitoring Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| apiKey | string | required | Your Coolhand API key for authentication |
| silent | boolean | true | Whether to suppress console output |
| debug | boolean | false | Enable debug mode (API calls will be mocked) |
| patternsFile | string | undefined | Path to custom API patterns file |

Environment Variables

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| COOLHAND_API_KEY | string | required | Your Coolhand API key |
| COOLHAND_SILENT | 'true' \| 'false' | 'true' | Whether to suppress console output |
| COOLHAND_DEBUG | 'true' \| 'false' | 'false' | Enable debug mode |
| COOLHAND_PATTERNS_FILE | string | undefined | Path to custom API patterns file |

Instance-Based Monitoring Options

Same options as global monitoring, passed to the Coolhand constructor.
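For reference, all documented options spelled out as a single object (the values here are placeholders):

```javascript
// Placeholder values; every key mirrors the options table above.
const options = {
  apiKey: 'your-api-key',             // required
  silent: true,                       // suppress console output (the default)
  debug: false,                       // true mocks all calls to Coolhand
  patternsFile: './my-patterns.json', // optional custom provider patterns
};

// Works for either style:
// new Coolhand(options);
// initializeGlobalMonitoring(options);
```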

TypeScript Support

Full TypeScript support with exported types:

import { Coolhand, CoolhandOptions, CoolhandCallData, CoolhandStats } from 'coolhand-node';

const monitor = new Coolhand({
  apiKey: 'your-api-key',
  silent: true,
  debug: false
});

What Gets Logged

The monitor captures:

  • Request Data: Method, URL, headers, request body
  • Response Data: Status code, headers, response body
  • Metadata: Timestamp, protocol used
  • LLM-Specific: Model used, token counts, temperature settings

Headers containing API keys are automatically sanitized for security.
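To illustrate what sanitization does to a logged request, here is a rough sketch of the effect; this is an approximation for illustration, not coolhand-node's actual implementation, and the exact header list is an assumption.

```javascript
// Approximation of the redaction effect, not the library's internal code.
// The list of sensitive header names is an assumption for this sketch.
const SENSITIVE_HEADERS = ['authorization', 'api-key', 'x-api-key'];

function sanitizeHeaders(headers) {
  const clean = {};
  for (const [name, value] of Object.entries(headers)) {
    clean[name] = SENSITIVE_HEADERS.includes(name.toLowerCase())
      ? '[REDACTED]'
      : value;
  }
  return clean;
}

const logged = sanitizeHeaders({
  Authorization: 'Bearer sk-secret',
  'Content-Type': 'application/json',
});
// logged.Authorization is '[REDACTED]'; Content-Type passes through unchanged
```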

Supported Libraries

The monitor works with any Node.js library that makes HTTP(S) requests to LLM APIs, including:

  • OpenAI official SDK
  • Anthropic SDK
  • Google AI SDK
  • LangChain
  • Direct fetch() calls
  • https/http module usage
  • Any other HTTP client

Custom AI Providers

Add support for custom AI providers by creating a patterns file:

const monitor = new Coolhand({
    apiKey: 'your-api-key',
    patternsFile: './my-patterns.json'
});

Example patterns file (my-patterns.json):

{
  "patterns": [
    {
      "name": "My Custom AI",
      "domains": ["api.mycustomai.com"],
      "paths": ["/v1/generate", "/v1/chat"],
      "headers": {
        "authorization": "[REDACTED]",
        "api-key": "[REDACTED]"
      }
    }
  ]
}
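Assuming the intuitive matching semantics (the request's host is listed in domains and its path starts with an entry in paths), a hypothetical matcher for the patterns file above might look like this. This sketch illustrates the idea only; it is not the library's matcher.

```javascript
// Illustration of assumed matching semantics, not coolhand-node's code.
const patterns = [
  {
    name: 'My Custom AI',
    domains: ['api.mycustomai.com'],
    paths: ['/v1/generate', '/v1/chat'],
  },
];

function matchesPattern(url) {
  const { hostname, pathname } = new URL(url);
  return patterns.some(
    (p) =>
      p.domains.includes(hostname) &&
      p.paths.some((path) => pathname.startsWith(path))
  );
}

console.log(matchesPattern('https://api.mycustomai.com/v1/chat')); // true
console.log(matchesPattern('https://api.example.com/v1/chat'));    // false
```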

Monitoring Statistics

Track monitoring statistics in your application:

const { getGlobalStats } = require('coolhand-node');

setInterval(() => {
  const stats = getGlobalStats();
  console.log(`AI Calls: ${stats.interceptedCalls}, Total Requests: ${stats.totalRequests}`);
}, 60000);

Debug Mode

Enable debug mode for development and testing:

// Global monitoring with debug mode
require('coolhand-node/auto-monitor'); // Set COOLHAND_DEBUG=true in .env

// Or instance-based with debug mode
const monitor = new Coolhand({
  apiKey: 'your-api-key',
  debug: true
});

When debug mode is enabled:

  • API calls to Coolhand will be mocked
  • Debug messages will show what would have been sent
  • No data will be sent to Coolhand servers

Advanced Usage

Modular Architecture

Access individual services for advanced use cases:

import { PatternMatchingService, LoggingService } from 'coolhand-node';

// Use pattern matching independently
const patternService = new PatternMatchingService('./custom-patterns.json');
const match = patternService.matchesAPIPattern(requestOptions);

// Use logging service independently
const loggingService = new LoggingService({
  apiKey: 'your-key',
  silent: false,
  debug: false
});

API Key

🆓 Sign up for free at coolhandlabs.com to get your API key and start monitoring your LLM usage.

What you get:

  • Complete LLM request and response logging
  • Usage analytics and insights
  • Feedback collection and quality scoring
  • No credit card required to start

Error Handling

The monitor handles errors gracefully:

  • Failed API logging attempts are logged to console but don't interrupt your application
  • Invalid API keys will be reported but won't crash your app
  • Network issues are handled with appropriate error messages

Security

  • API keys in request headers are automatically redacted
  • No sensitive data is exposed in logs
  • Debug mode prevents data from being sent to external servers

Documentation

  • Framework Integration Guide - Complete setup for all frameworks. (Well, some are more complete than others.)
  • Global Monitoring Guide - Advanced global monitoring features. Even easier than asking your favorite LLM coding tool to do it for you.
  • React Integration Guide - Frontend integration patterns. We won't ask about how you are planning to keep your API keys secret.

Other Languages

Community