@charivo/llm-client-openai

OpenAI ChatGPT client for Charivo LLM system.

Features

  • 🤖 OpenAI Integration - Works with GPT-4, GPT-3.5, and other OpenAI models
  • 💬 Streaming Support - Optional streaming responses
  • 🎯 Type-Safe - Full TypeScript support
  • 🔧 Configurable - Customize model, temperature, and other parameters

Installation

pnpm add @charivo/llm-client-openai @charivo/core

Usage

Basic Setup

import { OpenAILLMClient } from "@charivo/llm-client-openai";

const client = new OpenAILLMClient({
  apiKey: "your-openai-api-key",
  model: "gpt-4"
});

const response = await client.call([
  { role: "user", content: "Hello!" }
]);

console.log(response); // "Hello! How can I help you today?"

With LLMManager (Recommended)

import { OpenAILLMClient } from "@charivo/llm-client-openai";
import { createLLMManager } from "@charivo/llm-core";

const client = new OpenAILLMClient({
  apiKey: "your-openai-api-key",
  model: "gpt-4-turbo-preview"
});

const llmManager = createLLMManager(client);

llmManager.setCharacter({
  id: "assistant",
  name: "Hiyori",
  personality: "Cheerful and helpful"
});

const response = await llmManager.generateResponse({
  id: "1",
  content: "Hello!",
  timestamp: new Date(),
  type: "user"
});

Custom Configuration

const client = new OpenAILLMClient({
  apiKey: "your-openai-api-key",
  model: "gpt-4",
  temperature: 0.7,      // Creativity (0.0 - 2.0)
  maxTokens: 1000,       // Max response length
  topP: 0.9,             // Nucleus sampling
  frequencyPenalty: 0.5, // Reduce repetition
  presencePenalty: 0.5   // Encourage new topics
});

API Reference

Constructor

new OpenAILLMClient(config: OpenAILLMClientConfig)

Config Options:

  • apiKey: string - Your OpenAI API key (required)
  • model?: string - Model name (default: "gpt-4")
  • temperature?: number - Sampling temperature 0-2 (default: 0.7)
  • maxTokens?: number - Max tokens in response (default: 1000)
  • topP?: number - Nucleus sampling 0-1 (default: 1.0)
  • frequencyPenalty?: number - Frequency penalty 0-2 (default: 0)
  • presencePenalty?: number - Presence penalty 0-2 (default: 0)

Methods

call(messages)

Send messages and get a response.

const response = await client.call([
  { role: "user", content: "Hi" },
  { role: "assistant", content: "Hello!" },
  { role: "user", content: "How are you?" }
]);

Note: This client wraps the OpenAI provider and is suitable for development/testing. For production, use @charivo/llm-client-remote with a server-side provider to keep API keys secure.

Supported Models

GPT-4 Models

  • gpt-4 - Most capable, best for complex tasks
  • gpt-4-turbo-preview - Faster, cheaper, 128K context
  • gpt-4-0125-preview - Latest GPT-4 Turbo
  • gpt-4-1106-preview - Previous GPT-4 Turbo

GPT-3.5 Models

  • gpt-3.5-turbo - Fast and cost-effective
  • gpt-3.5-turbo-16k - Extended context window

Configuration Guide

Temperature

Controls randomness (0.0 - 2.0):

  • 0.0-0.3: Focused, deterministic (good for facts)
  • 0.4-0.7: Balanced (good for conversation)
  • 0.8-1.0: Creative (good for storytelling)
  • 1.1-2.0: Very creative (experimental)

// Factual assistant
const factualClient = new OpenAILLMClient({
  apiKey: "...",
  temperature: 0.2
});

// Creative storyteller
const creativeClient = new OpenAILLMClient({
  apiKey: "...",
  temperature: 0.9
});

Max Tokens

Limits response length:

  • Short responses: 100-300
  • Normal conversation: 500-1000
  • Long-form content: 1500-2000

const client = new OpenAILLMClient({
  apiKey: "...",
  maxTokens: 500 // Concise responses
});

Penalties

Reduce repetition and encourage diversity:

const client = new OpenAILLMClient({
  apiKey: "...",
  frequencyPenalty: 0.5, // Reduce word repetition
  presencePenalty: 0.5   // Encourage new topics
});

Error Handling

try {
  const response = await client.call(messages);
} catch (error) {
  if (error.code === "insufficient_quota") {
    console.error("OpenAI quota exceeded");
  } else if (error.code === "invalid_api_key") {
    console.error("Invalid API key");
  } else {
    console.error("OpenAI error:", error);
  }
}
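
Transient failures such as rate limits are often worth retrying. Below is a minimal retry sketch with exponential backoff; it assumes only the `call(messages)` shape documented above (the `Message` and `LLMClient` types here are illustrative stand-ins, not exports of this package):

```typescript
type Message = { role: "user" | "assistant" | "system"; content: string };

interface LLMClient {
  call(messages: Message[]): Promise<string>;
}

// Retry transient failures with exponential backoff: baseDelayMs, 2x, 4x, ...
async function callWithRetry(
  client: LLMClient,
  messages: Message[],
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<string> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await client.call(messages);
    } catch (error) {
      lastError = error;
      // Wait before the next attempt; the final failure is rethrown below.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

In production you would typically retry only retryable error codes (e.g. rate limits) rather than every failure.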

Environment Variables

Use environment variables for API keys:

# .env
OPENAI_API_KEY=sk-...

const client = new OpenAILLMClient({
  apiKey: process.env.OPENAI_API_KEY!
});

Cost Optimization

  1. Use GPT-3.5 for simple tasks: 10x cheaper than GPT-4
  2. Limit maxTokens: Reduce response length
  3. Cache responses: Store common responses
  4. Use temperature wisely: Lower temperature = more deterministic = better caching

// Cost-effective setup
const client = new OpenAILLMClient({
  apiKey: "...",
  model: "gpt-3.5-turbo", // Cheaper
  maxTokens: 300,         // Shorter responses
  temperature: 0.3        // More cacheable
});
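
Point 3 above (cache responses) can be sketched as a thin wrapper keyed on the serialized message list. This is an illustrative in-memory cache, not part of the package API; it assumes only the `call(messages)` shape shown earlier:

```typescript
type Message = { role: "user" | "assistant" | "system"; content: string };

interface LLMClient {
  call(messages: Message[]): Promise<string>;
}

// Wraps any client and memoizes responses by message content.
class CachedLLMClient implements LLMClient {
  private cache = new Map<string, string>();

  constructor(private inner: LLMClient) {}

  async call(messages: Message[]): Promise<string> {
    const key = JSON.stringify(messages);
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit; // cache hit: no API call, no cost
    const response = await this.inner.call(messages);
    this.cache.set(key, response);
    return response;
  }
}
```

Lower temperatures make responses more deterministic, which is what makes this kind of caching pay off.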

License

MIT