# LlmManager
LlmManager is a modular, extensible TypeScript library for orchestrating interactions with Large Language Models (LLMs) such as OpenAI's GPT. It provides a high-level, type-safe API for running prompts, managing conversation history, and supporting structured outputs with customizable templates.
## Features
- Pluggable Services: Easily swap out API clients, storage, and other logic.
- Template-Driven: Define and use prompt templates for translation, summarization, Q&A, and more.
- Instruction & Variable Templates: Use and extend instructionTemplates and variableTemplates for preferences, context, and more.
- Conversation-Aware: Supports chat history and context management.
- Type-Safe: Strong TypeScript types throughout.
- Testable: Designed for both real API and mock/test environments.
- Extensible: Add new prompt types, output structures, or integrations with minimal effort.
## Installation

```bash
npm install llmmanager
```

## Quick Start
```ts
import { createLlmManager } from 'llmmanager';
import { v4 as uuidV4 } from 'uuid';

// Define your services (API client, conversation store, etc.)
const services = {
  options: {
    get: async () => ({
      useChatHistory: true,
      useApiKey: true,
      activeModel: 'gpt-3.5-turbo',
      relatedQuestions: false,
    }),
  },
  conversation: {
    create: async ({ templateId, userInput }) => ({
      id: uuidV4(),
      templateId,
      userInput,
      messages: [],
      inProgress: null,
    }),
    get: async (id) => { /* ... */ },
    save: async (id, conversation) => { /* ... */ },
  },
  apiClient: {
    call: async (params) => {
      // Call OpenAI or your LLM provider here
    },
  },
};
```
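One way to fill in the `apiClient.call` stub is to hit the OpenAI chat completions HTTP endpoint directly. This is a minimal sketch, not part of LlmManager's documented API: the shape of `params` (a `model` plus an OpenAI-style `messages` array) and the returned string are assumptions you should adapt to your own service contract.

```ts
// Assumed shape of the parameters LlmManager hands to apiClient.call —
// adapt this to whatever your services contract actually provides.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

interface CallParams {
  model: string;
  messages: ChatMessage[];
}

// Build the JSON body for POST https://api.openai.com/v1/chat/completions.
function buildChatRequestBody(params: CallParams): string {
  return JSON.stringify({ model: params.model, messages: params.messages });
}

const apiClient = {
  call: async (params: CallParams) => {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: buildChatRequestBody(params),
    });
    const data = await res.json();
    // Return just the assistant text; adapt to whatever your setup expects.
    return data.choices[0].message.content as string;
  },
};
```

Keeping the request-body construction in its own function makes it easy to unit-test the client without making network calls.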
```ts
// Only 'services' is required. Templates are built-in by default.
const llmManager = createLlmManager({ services });

const result = await llmManager.run({
  templateId: 'translate',
  userInput: 'Hello world!',
  variables: { language: { value: 'fr' } },
});

console.log(result.content); // "Bonjour le monde!"
```

## Templates & Presets
LlmManager comes with built-in templates and presets for common tasks:

- instructionTemplates: For user/system preferences and context, e.g.:

  ```ts
  import { instructionTemplates } from 'llmmanager/presets/instructions';

  console.log(instructionTemplates.userLanguage);
  // {
  //   blueprint: 'User prefers to receive responses in {{language}}',
  //   variables: { language: { type: 'language', title: 'Language', value: 'en' } }
  // }
  ```

- variableTemplates: For tone, content type, word length, etc.:

  ```ts
  import * as variableTemplates from 'llmmanager/presets/variables';

  console.log(variableTemplates.textTone.optimistic);
  // { label: 'Optimistic', emoji: '🌞', value: 'optimistic' }
  ```

- templates: Main LLM prompt templates (Q&A, translation, summarization, etc.)

You can add your own templates or modify existing ones in `src/presets/templates.js`, `src/presets/instructions.js`, and `src/presets/variables.js`.
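A preset option can be fed into a `run()` call as a variable value, following the `variables: { name: { value } }` shape from the Quick Start. In this sketch the `tone` variable name and `summarize` template id are illustrative assumptions, not part of the documented preset list:

```ts
// variableTemplates.textTone.optimistic from the presets shown above —
// inlined here so the sketch is self-contained.
const optimistic = { label: 'Optimistic', emoji: '🌞', value: 'optimistic' };

// Hypothetical run() parameters reusing the preset's value field.
// 'summarize' and 'tone' are assumed names for illustration only.
const runParams = {
  templateId: 'summarize',
  userInput: 'Quarterly numbers are up 12%.',
  variables: { tone: { value: optimistic.value } },
};
```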
## Customizing Instructions & Variables
You can extend or override the built-in instructionTemplates and variableTemplates by passing them as optional parameters:
```ts
import { createLlmManager } from 'llmmanager';
import { instructionTemplates as defaultInstructions } from 'llmmanager/presets/instructions';
import * as defaultVariables from 'llmmanager/presets/variables';

const customInstructions = {
  ...defaultInstructions,
  customPreference: {
    blueprint: 'Custom: {{custom}}',
    variables: { custom: { type: 'custom', title: 'Custom', value: 'myValue' } },
  },
};

const customVariables = {
  ...defaultVariables,
  customType: {
    myOption: { label: 'My Option', value: 'myValue' },
  },
};

const llmManager = createLlmManager({
  services,
  variableTemplates: customVariables, // optional
  instructionTemplates: customInstructions, // optional
});
```

## Conversation History
Enable or disable chat history per session:
```ts
const services = {
  options: {
    get: async () => ({ useChatHistory: true, /* ... */ }),
  },
  // ...
};
```

When enabled, LlmManager will pass previous messages to the LLM for context-aware responses.
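To make the idea concrete, here is a sketch of how prior conversation messages can be threaded into the next LLM call. The `role`/`content` message shape is an assumption modeled on the OpenAI chat format, and `buildContextMessages` is an illustrative helper, not a library export:

```ts
// Assumed OpenAI-style message shape for illustration.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Replay the most recent history before the new user input, capping the
// replayed window so long conversations stay within the model's context.
function buildContextMessages(
  history: Message[],
  userInput: string,
  maxHistory = 10, // illustrative cap, tune for your model
): Message[] {
  return [...history.slice(-maxHistory), { role: 'user', content: userInput }];
}
```

With `useChatHistory: false`, the equivalent call would simply send the new user input on its own.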
## Testing
LlmManager is designed for robust testing:
- Unit & Integration Tests: Located in `__tests__`
- Mocked API: Easily swap between real and mock API calls for fast, cost-effective testing
- Example Scripts: See `examples/` for real and mock usage
Run all tests:

```bash
npm test
```

Run the chat history test:

```bash
npm test __tests__/chat-history.test.ts
```

## Scripts
Common scripts in `package.json`:

- `npm run build`: Build the project
- `npm test`: Run all tests
- `npm run test:unit`: Run unit tests
- `npm run test:integration`: Run integration tests
- `npm run test:real-api`: Run integration tests against the real OpenAI API
- `npm run example`: Run the main example
- `npm run example:chat-history`: Run the chat history example
## Extending
You can add new templates and structures, or swap out services for your own storage, API, or business logic. All core logic is modular and type-safe.
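As a starting point, a custom template can follow the `blueprint`/`variables` shape shown for the built-in instruction presets. This is a sketch under that assumption; the `render` helper and the template itself are illustrative, not library exports:

```ts
// A custom template in the blueprint/variables shape used by the presets.
// Both the template and its variable names are illustrative.
const bulletPointsTemplate = {
  blueprint: 'Rewrite the following as {{count}} bullet points: {{input}}',
  variables: {
    count: { type: 'number', title: 'Bullet count', value: '3' },
    input: { type: 'text', title: 'Input', value: '' },
  },
};

// Substitute {{name}} placeholders with override values, falling back to
// each variable's default value. Not a library export — a plain sketch of
// how such a blueprint could be expanded into a prompt string.
function render(
  template: typeof bulletPointsTemplate,
  overrides: Record<string, string> = {},
): string {
  return template.blueprint.replace(/\{\{(\w+)\}\}/g, (_match, name) =>
    overrides[name] ??
    template.variables[name as keyof typeof template.variables]?.value ??
    '',
  );
}
```

Keeping templates as plain data like this is what lets them be merged, overridden, and passed into `createLlmManager` as shown in the customization section.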
## License
ISC
## Contributing
Pull requests and issues are welcome! Please open an issue to discuss your idea or bug before submitting a PR.
## Acknowledgements
LlmManager — Orchestrate LLMs with confidence, structure, and testability.
