

LLM Program (TypeScript)

llmprogram is a TypeScript package that provides a structured and powerful way to create and run programs that use Large Language Models (LLMs). It uses a YAML-based configuration to define the behavior of your LLM programs, making them easy to create, manage, and share.

This is the TypeScript equivalent of the Python llmprogram library.

Features

  • YAML-based Configuration: Define your LLM programs using simple and intuitive YAML files.
  • Input/Output Validation: Use JSON schemas to validate the inputs and outputs of your programs, ensuring data integrity.
  • Handlebars Templating: Use the power of Handlebars templates to create dynamic prompts for your LLMs.
  • Caching: Built-in support for Redis caching to save time and reduce costs.
  • Execution Logging: Automatically log program executions to a SQLite database for analysis and debugging.
  • Streaming: Support for streaming responses from the LLM.
  • Batch Processing: Process multiple inputs in parallel for improved performance.
  • CLI for Dataset Generation: A command-line interface to generate instruction datasets for LLM fine-tuning from your logged data.
  • Web Service: Expose your programs as REST API endpoints with automatic OpenAPI documentation.
  • Analytics: Comprehensive analytics tracking with DuckDB for token usage, LLM calls, program usage, and timing metrics.

Installation

npm install llmprogram

Usage

Program YAML File

Create a file named sentiment_analysis.yaml:

name: sentiment_analysis
description: Analyzes the sentiment of a given text.
version: 1.0.0

model:
  provider: openai
  name: gpt-4.1-mini
  temperature: 0.5
  max_tokens: 100
  response_format: json_object

system_prompt: |
  You are a sentiment analysis expert. Analyze the sentiment of the given text and return a JSON response with the following format:
  - sentiment (string): "positive", "negative", or "neutral"
  - score (number): A score from -1 (most negative) to 1 (most positive)

input_schema:
  type: object
  required:
    - text
  properties:
    text:
      type: string
      description: The text to analyze.

output_schema:
  type: object
  required:
    - sentiment
    - score
  properties:
    sentiment:
      type: string
      enum: ["positive", "negative", "neutral"]
    score:
      type: number
      minimum: -1
      maximum: 1

template: |
  Analyze the following text:
  {{text}}
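
The input_schema and output_schema above express JSON Schema constraints: the input must contain a string text field, and the output must be one of three sentiment labels with a score between -1 and 1. As a rough illustration of what those constraints mean (the library itself presumably delegates to a full JSON Schema validator), here is a hand-rolled TypeScript check of the same rules:

```typescript
// Hand-rolled checks mirroring the YAML schemas above. This is only an
// illustration of the constraints the schemas express, not the library's
// actual validation code.

interface SentimentOutput {
  sentiment: 'positive' | 'negative' | 'neutral';
  score: number;
}

// input_schema: object with a required string property "text"
function validateInput(input: unknown): input is { text: string } {
  return (
    typeof input === 'object' &&
    input !== null &&
    typeof (input as { text?: unknown }).text === 'string'
  );
}

// output_schema: sentiment restricted to an enum, score within [-1, 1]
function validateOutput(output: unknown): output is SentimentOutput {
  if (typeof output !== 'object' || output === null) return false;
  const o = output as { sentiment?: unknown; score?: unknown };
  return (
    (o.sentiment === 'positive' ||
      o.sentiment === 'negative' ||
      o.sentiment === 'neutral') &&
    typeof o.score === 'number' &&
    o.score >= -1 &&
    o.score <= 1
  );
}

console.log(validateInput({ text: 'I love this!' }));              // true
console.log(validateOutput({ sentiment: 'positive', score: 0.9 })); // true
console.log(validateOutput({ sentiment: 'great', score: 2 }));      // false
```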

Using the Library

import { LLMProgram } from 'llmprogram';

async function main() {
    const program = new LLMProgram('sentiment_analysis.yaml');
    const result = await program.call({ text: 'I love this new product! It is amazing.' });
    console.log(result);
}

main();
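
The batch-processing feature can also be approximated generically with a bounded worker pool over repeated calls. The README does not show the library's own batch API, so callProgram below is a hypothetical stub standing in for program.call; only the concurrency pattern is the point:

```typescript
// Generic bounded-concurrency batch pattern. `callProgram` is a stub
// standing in for program.call; swap in the real call when using the library.

type Inputs = { text: string };
type Result = { sentiment: string; score: number };

// Hypothetical stand-in for program.call
async function callProgram(inputs: Inputs): Promise<Result> {
  return {
    sentiment: inputs.text.includes('love') ? 'positive' : 'neutral',
    score: 0,
  };
}

async function runBatch(batch: Inputs[], concurrency = 4): Promise<Result[]> {
  const results: Result[] = new Array(batch.length);
  let next = 0;
  // Each worker pulls the next unclaimed index until the batch is drained.
  async function worker(): Promise<void> {
    while (next < batch.length) {
      const i = next++;
      results[i] = await callProgram(batch[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(concurrency, batch.length) },
    worker,
  );
  await Promise.all(workers);
  return results;
}

runBatch([{ text: 'I love this' }, { text: 'It is okay' }]).then((r) =>
  console.log(r),
);
```

A fixed pool keeps at most `concurrency` requests in flight, which matters when each call is a billed LLM request.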

Using the CLI

# Set your OpenAI API key
export OPENAI_API_KEY='your-api-key'

# Run with inputs from a JSON file
llmprogram run sentiment_analysis.yaml --inputs sentiment_inputs.json

# Run with inputs from command line
llmprogram run sentiment_analysis.yaml --input-json '{"text": "I love this product!"}'

CLI Commands

run

Run an LLM program with inputs supplied on the command line or from a file.

Usage:

# Run with inputs from a JSON file
llmprogram run program.yaml --inputs inputs.json

# Run with inputs from command line
llmprogram run program.yaml --input-json '{"text": "I love this product!"}'

# Save output to a file
llmprogram run program.yaml --inputs inputs.json --output result.json

generate-dataset

Generate an instruction dataset for LLM fine-tuning from a SQLite log file.

Usage:

llmprogram generate-dataset <database_path> <output_path>

analytics

Show analytics data collected from LLM program executions.

Usage:

# Show all analytics data
llmprogram analytics

# Show analytics for a specific program
llmprogram analytics --program sentiment_analysis

# Show analytics for a specific model
llmprogram analytics --model gpt-4

Web Service

The package includes a built-in web service that exposes your LLM programs as REST API endpoints.

Running the Web Service

# Run the web service with default settings
llmprogram-web

# Run the web service with custom directory
llmprogram-web --directory /path/to/your/programs

# Run the web service on a different host/port
llmprogram-web --host 0.0.0.0 --port 8080

API Endpoints

  • GET / - Root endpoint with API information
  • GET /programs - List all available programs
  • GET /programs/{program_name} - Get detailed information about a specific program
  • POST /programs/{program_name}/run - Run a specific program
  • GET /analytics/llm-calls - Get LLM call statistics
  • GET /analytics/program-usage - Get program usage statistics
  • GET /analytics/token-usage - Get token usage statistics
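
To sketch what a client of the run endpoint might look like, the snippet below posts input JSON to POST /programs/{program_name}/run using Node 18+'s global fetch. The exact request and response body shapes are assumptions based on the endpoint list above, and an in-process mock stands in for a running llmprogram-web instance:

```typescript
// Client sketch for POST /programs/{program_name}/run. The body shapes are
// assumptions; a tiny in-process mock stands in for llmprogram-web so the
// snippet is self-contained.
import http from 'node:http';
import type { AddressInfo } from 'node:net';

async function main(): Promise<{ sentiment: string; score: number }> {
  // Mock of the documented endpoint: accepts input JSON, returns output JSON.
  const server = http.createServer((req, res) => {
    if (req.method === 'POST' && req.url === '/programs/sentiment_analysis/run') {
      let body = '';
      req.on('data', (chunk) => (body += chunk));
      req.on('end', () => {
        JSON.parse(body); // assumed: program inputs are posted as a JSON object
        res.setHeader('Content-Type', 'application/json');
        res.end(JSON.stringify({ sentiment: 'positive', score: 0.9 }));
      });
    } else {
      res.statusCode = 404;
      res.end();
    }
  });
  await new Promise<void>((resolve) => server.listen(0, resolve));
  const { port } = server.address() as AddressInfo;

  const resp = await fetch(
    `http://127.0.0.1:${port}/programs/sentiment_analysis/run`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: 'I love this product!' }),
    },
  );
  const result = (await resp.json()) as { sentiment: string; score: number };
  server.close();
  return result;
}

main().then((r) => console.log(r));
```

Against a real instance, the URL would point at the host/port passed to llmprogram-web, and the response shape would follow the program's output_schema.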