# @dharmax/llm-utils

A generic code-to-LLM bridge with routing, prompt management, and session context.
@dharmax/llm-utils is a small TypeScript toolkit for building LLM-powered command-line and service workflows with a stable set of primitives:
- `Asker` for request execution
- `CompletionEngine` for provider adapter dispatch
- `ModelRouter` for lightweight model selection
- `PromptEngine` for multi-part prompt templates
- `LLMSession` for short conversational state and usage tracking
- `ProviderDiscovery` for normalizing configured providers
It is designed to be usable by both humans and AI agents:
- Humans get a compact library with explicit types and small building blocks.
- Agents get predictable entry points, deterministic template loading, and a simple adapter registration model.
## What This Package Is
This package is not an all-in-one framework. It is a thin orchestration layer that helps you:
- represent providers and models consistently
- route tasks to a suitable model
- render prompt templates from reusable files or sources
- execute requests through pluggable adapters
- keep lightweight session history and metrics
## What This Package Is Not
- It is not a full agent runtime.
- It is not a workflow database.
- It does not manage prompt storage for you.
- It does not auto-install provider SDKs.
- It does not currently include a deep integration test suite against live provider APIs.
## Installation

```bash
npm install @dharmax/llm-utils
```

## Core Concepts
### Providers and Adapters
A provider is a logical backend such as OpenAI, Anthropic, Google, Ollama, or a custom internal service.
`CompletionEngine` dispatches generation calls to registered adapters. Built-in adapters are registered for `openai`, `anthropic`, `google`, and `ollama`.
You can also register your own adapter:
```ts
import { CompletionEngine } from '@dharmax/llm-utils';

// A trivial adapter that echoes the prompt back; handy for tests.
CompletionEngine.registerAdapter({
  id: 'mock',
  async generate(options) {
    return {
      text: `Echo: ${options.prompt}`,
      ok: true,
      model: {
        providerId: 'mock',
        modelId: options.modelId
      }
    };
  }
});
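The following sketch shows one way to exercise a registered adapter end to end, using the provider-state constructor introduced in the next section. The `mock` provider entry and the assumption that routing dispatches to the adapter whose id matches a model's `providerId` are illustrative, not a documented contract:

```ts
import { Asker } from '@dharmax/llm-utils';

// Illustrative only: assumes the router picks the sole available model
// and dispatches to the adapter registered above under the same id.
const asker = new Asker({
  providerState: {
    providers: {
      mock: {
        id: 'mock',
        available: true,
        models: [{ id: 'echo-1', providerId: 'mock' }]
      }
    },
    routingPolicy: {},
    knowledge: {}
  }
});

const result = await asker.ask('ping', 'testing');
console.log(result.text); // "Echo: ping" if the mock model is selected
```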
### Provider State

`Asker` can be constructed from normalized provider state:

```ts
import { Asker } from '@dharmax/llm-utils';

const asker = new Asker({
  providerState: {
    providers: {
      openai: {
        id: 'openai',
        available: true,
        apiKey: process.env.OPENAI_API_KEY,
        models: [
          {
            id: 'gpt-4o-mini',
            providerId: 'openai',
            quality: 'high',
            capabilities: {
              logic: 0.8,
              strategy: 0.7,
              prose: 0.8,
              data: 0.7
            }
          }
        ]
      }
    },
    routingPolicy: {},
    knowledge: {}
  }
});
```

### Prompt Templates
`PromptEngine` loads two parts per prompt name:

- `<name>.system`
- `<name>.prompt`
Each part can optionally start with JSON frontmatter:
```
--- json
{"taskType":"code-generation","format":"json"}
---
```

The prompt body supports `{{ variable }}` placeholders.
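As a concrete illustration, the `draft-response` prompt used later in this README could be authored as a pair of sources like these (the file contents below are invented for illustration and do not ship with the package):

`draft-response.system`:

```
--- json
{"taskType":"architecture"}
---
You are a senior engineer. Answer precisely and concisely.
```

`draft-response.prompt`:

```
{{ inputText }}

Keep the answer focused on structure and trade-offs.
```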
## Usage

### 1. Direct Request Execution
```ts
// `asker` is the instance constructed under "Provider State" above.
const result = await asker.ask('Summarize this file', 'summarization', {
  system: 'Return a concise answer.'
});

if (!result.ok) {
  throw new Error(result.error);
}
console.log(result.text);
```
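Taken together, the examples in this README suggest a result shape roughly like the sketch below. Treat it as inferred documentation; the authoritative definitions are the exported types in `types.mjs`:

```ts
// Inferred from the examples in this README, not copied from types.mjs.
interface GenerationResultSketch {
  ok: boolean;          // false when the request failed
  text?: string;        // generated text on success
  error?: string;       // error message when ok is false
  model?: {             // which provider/model produced the answer
    providerId: string;
    modelId: string;
  };
}
```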
### 2. Template-Driven Execution

The recommended path uses a pluggable context-manager package: @dharmax/llm-utils owns the protocol and prompt integration, while a separate package owns retrieval, budgeting, and compression policy.
```ts
import { Asker, PromptEngine } from '@dharmax/llm-utils';
import { HeuristicContextManager } from '@dharmax/context-manager';

const promptEngine = new PromptEngine(templateSource);
const contextManager = new HeuristicContextManager(contextStore, {
  defaultMaxTokens: 500
});

const asker = new Asker(
  providerConfigs,
  taskTypes,
  contextManager,
  promptEngine
);
```

For local development in this repo, the sibling package lives at `../context-manager`.
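Because @dharmax/llm-utils owns only the protocol, you can plug in your own context manager. The single `getContext` method below is an assumed shape for illustration; the actual contract is defined by the exported `PromptContextManager`, `ContextRequest`, and `ContextResult` types:

```ts
import type { ContextRequest, ContextResult } from '@dharmax/llm-utils';

// A minimal sketch of a custom context manager meant to satisfy the
// PromptContextManager protocol. The method name and return fields are
// assumptions; consult the exported types for the real contract.
class StaticContextManager {
  async getContext(_request: ContextRequest): Promise<ContextResult> {
    return { text: 'Project: demo. Stack: TypeScript.' } as ContextResult;
  }
}
```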
The legacy constructor remains supported for template workflows that still use the block-oriented `ContextManager`:
```ts
import {
  Asker,
  ContextManager,
  PromptEngine
} from '@dharmax/llm-utils';

const promptEngine = new PromptEngine(templateSource);
const contextManager = new ContextManager(contextStore);

const asker = new Asker(
  providerConfigs,
  taskTypes,
  contextManager,
  promptEngine
);

await asker.refreshMapping(availableModels);

const result = await asker.prompt('draft-response', {}, {
  inputText: 'Explain the architecture',
  taskType: 'architecture'
});
```

### 3. Session-Based Execution
`LLMSession` adds short history retention, compressed managed context, and usage metrics:
```ts
import { LLMSession } from '@dharmax/llm-utils';

const session = new LLMSession(asker, { project: 'demo' });

const result = await session.prompt('reply', { inputText: 'What changed?' });
console.log(result.text);
console.log(session.getContext());
```

### 4. Discovery
`ProviderDiscovery` normalizes configured providers and probes Ollama:
```ts
import { ProviderDiscovery } from '@dharmax/llm-utils';

const providerState = await ProviderDiscovery.discover(
  {
    providers: {
      ollama: { host: '127.0.0.1:11434' },
      openai: { apiKey: process.env.OPENAI_API_KEY }
    }
  },
  {
    models: {
      openai: [{ id: 'gpt-4o-mini', providerId: 'openai' }]
    }
  }
);
```

## Public API
Main exports:
- `Asker`
- `CompletionEngine`
- `ContextResult`, `ContextRequest`, and `PromptContextManager`
- `LegacyContextManagerAdapter`
- `ContextCompressor`
- `ContextManager`
- `LLMSession`
- `MetricsEngine`
- `ModelRouter`
- `PromptEngine`
- `ProviderDiscovery`
- built-in adapters such as `OpenAIAdapter` and `OllamaProvider`
- all core TypeScript types from `types.mjs`
## Development
Build:

```bash
npm run build
```

Test:

```bash
npm test
```

The test suite exercises the built package in `dist/`, not only the source files. That helps catch packaging and export regressions.
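A dist-level check can be as small as importing the built entry point and asserting on the public surface. The file path and entry-point name below are assumptions about the build layout:

```ts
// test/smoke.test.mts — the dist path below is an assumed build layout.
import assert from 'node:assert';
import { test } from 'node:test';

test('built package exposes the core surface', async () => {
  // Importing from dist/ rather than src/ is what catches packaging bugs.
  const pkg = await import('../dist/index.mjs');
  assert.equal(typeof pkg.Asker, 'function');
  assert.equal(typeof pkg.CompletionEngine.registerAdapter, 'function');
});
```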
## Current Limitations
- The built-in `ContextManager` remains a legacy block-oriented API for compatibility.
- The default external context package is still heuristic rather than semantic.
- Prompt template manifests are JSON-frontmatter only.
- The package currently uses minimal fake-free unit coverage rather than live provider integration tests.
- `LLMSession` keeps a short in-memory history and is not persistence-backed.
## Repository Hygiene

Local workflow and editor artifacts are intentionally ignored:

- `.ai-workflow/`
- `.codex/`
- `.idea/`
- `epics.md`
- `kanban.md`
## License
MIT
