
llm-manager

v1.5.5


LlmManager

LlmManager is a modular, extensible TypeScript library for orchestrating interactions with Large Language Models (LLMs) such as OpenAI's GPT. It provides a high-level, type-safe API for running prompts, managing conversation history, and supporting structured outputs with customizable templates.


Features

  • Pluggable Services: Easily swap out API clients, storage, and other logic.
  • Template-Driven: Define and use prompt templates for translation, summarization, Q&A, and more.
  • Instruction & Variable Templates: Use and extend instructionTemplates and variableTemplates for preferences, context, and more.
  • Conversation-Aware: Supports chat history and context management.
  • Type-Safe: Strong TypeScript types throughout.
  • Testable: Designed for both real API and mock/test environments.
  • Extensible: Add new prompt types, output structures, or integrations with minimal effort.
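As a minimal sketch of the pluggable-services idea (the `ApiClient` shape here is illustrative, not the library's actual type), any object matching the expected interface can be swapped in, such as a mock client for tests:

```typescript
// Illustrative sketch: a service is just an object matching an interface,
// so a mock API client can stand in for a real one during tests.
type ApiClient = {
  call: (params: { prompt: string }) => Promise<string>;
};

const mockApiClient: ApiClient = {
  // Returns a canned response instead of hitting a real LLM provider.
  call: async ({ prompt }) => `mock response for: ${prompt}`,
};

async function run(client: ApiClient, prompt: string): Promise<string> {
  return client.call({ prompt });
}

run(mockApiClient, 'Hello').then((r) => console.log(r));
// mock response for: Hello
```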

Installation

npm install llmmanager

Quick Start

import { createLlmManager } from 'llmmanager';
import { v4 as uuidV4 } from 'uuid';

// Define your services (API client, conversation store, etc.)
const services = {
  options: {
    get: async () => ({
      useChatHistory: true,
      useApiKey: true,
      activeModel: 'gpt-3.5-turbo',
      relatedQuestions: false,
    }),
  },
  conversation: {
    create: async ({ templateId, userInput }) => ({
      id: uuidV4(),
      templateId,
      userInput,
      messages: [],
      inProgress: null,
    }),
    get: async (id) => { /* ... */ },
    save: async (id, conversation) => { /* ... */ },
  },
  apiClient: {
    call: async (params) => {
      // Call OpenAI or your LLM provider here
    },
  },
};

// Only 'services' is required. Templates are built-in by default.
const llmManager = createLlmManager({ services });

const result = await llmManager.run({
  templateId: 'translate',
  userInput: 'Hello world!',
  variables: { language: { value: 'fr' } },
});

console.log(result.content); // "Bonjour le monde!"

Templates & Presets

LlmManager comes with built-in templates and presets for common tasks:

  • instructionTemplates: For user/system preferences and context, e.g.:
import { instructionTemplates } from 'llmmanager/presets/instructions';

console.log(instructionTemplates.userLanguage);
// {
//   blueprint: 'User prefers to receive responses in {{language}}',
//   variables: { language: { type: 'language', title: 'Language', value: 'en' } }
// }
  • variableTemplates: For tone, content type, word length, etc.:
import * as variableTemplates from 'llmmanager/presets/variables';

console.log(variableTemplates.textTone.optimistic);
// { label: 'Optimistic', emoji: '🌞', value: 'optimistic' }
  • templates: Main LLM prompt templates (Q&A, translation, summarization, etc.)

You can add your own templates or modify existing ones in src/presets/templates.js, src/presets/instructions.js, and src/presets/variables.js.
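To illustrate how a `blueprint` with `{{placeholder}}` variables could be rendered (a sketch of the concept, not the library's internal implementation), a minimal interpolation function might look like:

```typescript
// Sketch only: renders a blueprint string like
// 'User prefers to receive responses in {{language}}'
// against a variables map shaped like the presets above.
type Variable = { type: string; title: string; value: string };

function renderBlueprint(
  blueprint: string,
  variables: Record<string, Variable>,
): string {
  // Replace each {{name}} with the variable's value; leave unknown
  // placeholders untouched.
  return blueprint.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name].value : match,
  );
}

const rendered = renderBlueprint(
  'User prefers to receive responses in {{language}}',
  { language: { type: 'language', title: 'Language', value: 'fr' } },
);
console.log(rendered); // User prefers to receive responses in fr
```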


Customizing Instructions & Variables

You can extend or override the built-in instructionTemplates and variableTemplates by passing them as optional parameters:

import { createLlmManager } from 'llmmanager';
import { instructionTemplates as defaultInstructions } from 'llmmanager/presets/instructions';
import * as defaultVariables from 'llmmanager/presets/variables';

const customInstructions = {
  ...defaultInstructions,
  customPreference: {
    blueprint: 'Custom: {{custom}}',
    variables: { custom: { type: 'custom', title: 'Custom', value: 'myValue' } },
  },
};

const customVariables = {
  ...defaultVariables,
  customType: {
    myOption: { label: 'My Option', value: 'myValue' },
  },
};

const llmManager = createLlmManager({
  services,
  variableTemplates: customVariables, // optional
  instructionTemplates: customInstructions, // optional
});

Conversation History

Enable or disable chat history per session:

const services = {
  options: {
    get: async () => ({ useChatHistory: true /* ...other options */ }),
  },
  // ...
};

When enabled, LlmManager will pass previous messages to the LLM for context-aware responses.
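The behavior can be sketched as follows (assumed shape, not the library's exact internals): with history enabled, prior messages are prepended to the request so the model sees the whole exchange.

```typescript
// Sketch of context-aware requests: prior messages are included only
// when chat history is enabled.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

function buildRequestMessages(
  history: Message[],
  userInput: string,
  useChatHistory: boolean,
): Message[] {
  const current: Message = { role: 'user', content: userInput };
  return useChatHistory ? [...history, current] : [current];
}

const history: Message[] = [
  { role: 'user', content: 'Translate "hello" to French.' },
  { role: 'assistant', content: 'Bonjour' },
];

console.log(buildRequestMessages(history, 'And to Spanish?', true).length); // 3
```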


Testing

LlmManager is designed for robust testing:

  • Unit & Integration Tests: Located in __tests__
  • Mocked API: Easily swap between real and mock API calls for fast, cost-effective testing
  • Example Scripts: See examples/ for real and mock usage

Run all tests:

npm test

Run chat history test:

npm test __tests__/chat-history.test.ts

Scripts

Common scripts in package.json:

  • npm run build — Build the project
  • npm test — Run all tests
  • npm run test:unit — Run unit tests
  • npm run test:integration — Run integration tests
  • npm run test:real-api — Run integration tests against the real OpenAI API
  • npm run example — Run the main example
  • npm run example:chat-history — Run the chat history example

Extending

You can add new templates, structures, or swap out services for your own storage, API, or business logic. All core logic is modular and type-safe.
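As one sketch of the extension idea (this template and its field names are hypothetical, shaped after the `instructionTemplates` preset shown earlier, not an actual built-in), a custom prompt template might look like:

```typescript
// Hypothetical custom template following the blueprint-plus-variables
// shape used by the presets above.
const sentimentTemplate = {
  blueprint:
    'Classify the sentiment of the following text as {{labels}}: {{text}}',
  variables: {
    labels: {
      type: 'labels',
      title: 'Labels',
      value: 'positive, negative, or neutral',
    },
    text: { type: 'text', title: 'Text', value: '' },
  },
};

console.log(Object.keys(sentimentTemplate.variables)); // [ 'labels', 'text' ]
```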


License

ISC


Contributing

Pull requests and issues are welcome! Please open an issue to discuss your idea or bug before submitting a PR.




LlmManager — Orchestrate LLMs with confidence, structure, and testability.