@helicone/async

A Node.js wrapper for logging LLM traces directly to Helicone, bypassing the proxy, with OpenLLMetry. This package enables you to monitor and analyze your OpenAI API usage without requiring a proxy server.

Features

  • Direct logging to Helicone without proxy
  • OpenLLMetry support for standardized LLM telemetry
  • Custom property tracking
  • Environment variable configuration
  • TypeScript support

Installation

Stable Version

npm install @helicone/async

Quick Start

  1. Create a Helicone account and get your API key from helicone.ai/developer

  2. Set up your environment variables (a .env file works too; see the note after the example):

export HELICONE_API_KEY=<your API key>
export OPENAI_API_KEY=<your OpenAI API key>

  3. Basic usage:
const { HeliconeAsyncOpenAI } = require("helicone");

const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
  },
});

const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello world" }],
});

console.log(chatCompletion.choices[0].message);
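
If you keep your keys in a .env file instead of exporting them in the shell, the widely used dotenv package can load them before the client is constructed. This is an optional convenience, not part of @helicone/async:

// Load HELICONE_API_KEY and OPENAI_API_KEY from a local .env file
// (requires `npm install dotenv`)
require("dotenv").config();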

Configuration Options

HeliconeMeta Options

The heliconeMeta object supports several configuration options:

interface HeliconeMeta {
  apiKey?: string; // Your Helicone API key
  custom_properties?: Record<string, any>; // Custom properties to track
  cache?: boolean; // Enable/disable caching
  retry?: boolean; // Enable/disable retries
  user_id?: string; // Track requests by user
}
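
The cache and retry flags above are declared in the interface but not exercised by any example in this Readme. A minimal sketch, assuming the flags pass through heliconeMeta exactly as declared:

const cachedOpenai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    cache: true, // enable caching, per the HeliconeMeta interface above
    retry: true, // enable retries, per the HeliconeMeta interface above
  },
});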

Example with Custom Properties

const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    custom_properties: {
      project: "my-project",
      environment: "production",
    },
    user_id: "user-123",
  },
});

Advanced Usage

With Async/Await

async function generateResponse() {
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" },
      ],
      max_tokens: 150,
    });

    return response.choices[0].message;
  } catch (error) {
    console.error("Error:", error);
  }
}
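
A call site for the sketch above; because the catch branch only logs, the returned promise resolves to undefined on failure:

generateResponse().then((message) => {
  if (message) {
    console.log(message.content);
  }
});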

Error Handling

try {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error.status) {
    // API error response; openai v4 errors expose status and message directly
    console.error(error.status);
    console.error(error.message);
  } else {
    // Network or configuration error with no HTTP response
    console.error(error.message);
  }
}

Best Practices

  1. Always store API keys in environment variables
  2. Implement proper error handling
  3. Use custom properties to track important metadata
  4. Set appropriate timeout values for your use case
  5. Consider implementing retry logic for production environments (see the sketch below)
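
For items 4 and 5, the underlying openai v4 client accepts timeout and maxRetries constructor options. Assuming HeliconeAsyncOpenAI forwards these options to that client (an assumption; this Readme does not confirm it), a production configuration might look like:

const productionOpenai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  timeout: 30 * 1000, // 30s per-request timeout (standard openai v4 option)
  maxRetries: 3, // automatic client-side retries (standard openai v4 option)
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    retry: true, // Helicone-side retry flag from the HeliconeMeta interface
  },
});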

Contributing

We welcome contributions! Please see our contributing guidelines for details.

License

Apache-2.0

Support