@smartledger/lumen-llm v1.0.2
@lumenchat/llm
Universal LLM provider abstraction for LumenChat with structured JSON responses and cryptographic signatures.
Features
- Provider Abstraction: Unified interface for OpenAI, Anthropic, Llama, etc.
- Structured JSON: Schema-validated JSON responses
- Cryptographic Signing: Sign responses with agent keys
- Extensible: Easy to add new providers
- Type Safety: Clear interfaces and error handling
Installation
npm install @lumenchat/llm @lumenchat/signatures

Usage
Basic Structured JSON Generation
import { generateStructuredJSON, OpenAIProvider } from '@lumenchat/llm';
const schema = {
type: 'object',
properties: {
answer: { type: 'string' },
confidence: { type: 'number' }
},
required: ['answer', 'confidence']
};
const messages = [
{ role: 'user', content: 'What is 2+2?' }
];
const provider = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY });
const response = await generateStructuredJSON(messages, schema, {}, provider);
// {
// answer: 'The answer is 4',
// confidence: 1.0,
// _llm: {
// provider: 'openai',
// model: 'gpt-4o-mini',
// elapsed: 1234,
// temperature: 0.4
// }
// }

Signed Structured Responses
import { signedStructuredResponse } from '@lumenchat/llm';
import { createAgentKeys } from '@lumenchat/signatures';
const agentKeys = createAgentKeys('MathAgent');
const response = await signedStructuredResponse(
messages,
schema,
agentKeys,
{ temperature: 0.3 }
);
// {
// answer: 'The answer is 4',
// confidence: 1.0,
// _llm: { ... },
// _signature: {
// signature: '3044022...',
// publicKey: '02abc...',
// address: '1XYZ...',
// timestamp: '2025-11-20T...',
// algorithm: 'BSV-ECDSA-DER',
// responseHash: 'a1b2c3...',
// agentIdentity: 'MathAgent'
// }
// }

Custom Provider
import { BaseLLMProvider } from '@lumenchat/llm';
class CustomProvider extends BaseLLMProvider {
async complete(messages, options = {}) {
// Your implementation
const response = await yourAPI.generate(messages);
return this.normalizeResponse(response);
}
normalizeResponse(rawResponse) {
return {
success: true,
content: rawResponse.data,
rawContent: JSON.stringify(rawResponse),
provider: 'custom',
model: 'your-model',
elapsed: 0
};
}
getName() {
return 'custom';
}
async isAvailable() {
return true;
}
getCapabilities() {
return {
structuredOutput: true,
streaming: false,
contextWindow: 8000,
maxTokens: 2048
};
}
}

Provider Factory
import { createProvider } from '@lumenchat/llm';
const provider = createProvider('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o-mini',
temperature: 0.3
});
const response = await provider.complete(messages);

API Reference
generateStructuredJSON(messages, schema, options, provider)
Generate schema-validated JSON response.
Parameters:
- messages (Array): Chat messages with role and content
- schema (Object): JSON Schema for response validation
- options (Object): Generation options (temperature, model, etc.)
- provider (BaseLLMProvider): LLM provider instance (default: OpenAIProvider)
Returns: Promise<Object> - Response with _llm metadata
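Even with schema-validated output, callers may want a defensive check before trusting the parsed object. Below is a minimal sketch of such a check; checkAgainstSchema is a hypothetical helper, not part of this package, and it only covers the required list and primitive type keywords used in the examples above.

```javascript
// Minimal defensive check for the flat object schemas in this README.
// Supports only `required` and primitive `type` keywords; it is NOT a
// full JSON Schema validator (use a library like ajv for that).
function checkAgainstSchema(obj, schema) {
  for (const key of schema.required ?? []) {
    if (!(key in obj)) {
      return { ok: false, error: `missing required field: ${key}` };
    }
  }
  for (const [key, rule] of Object.entries(schema.properties ?? {})) {
    // `typeof` matches JSON Schema's 'string' and 'number' primitives,
    // which is all the examples here need.
    if (key in obj && typeof obj[key] !== rule.type) {
      return { ok: false, error: `field ${key} is not a ${rule.type}` };
    }
  }
  return { ok: true };
}
```

Extra fields such as _llm pass through untouched, since only the schema's declared properties are inspected.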
signedStructuredResponse(messages, schema, agentKey, options, provider)
Generate signed structured JSON response.
Parameters:
- messages (Array): Chat messages
- schema (Object): JSON Schema
- agentKey (Object): Agent keys from @lumenchat/signatures
- options (Object): Generation options
- provider (BaseLLMProvider): LLM provider instance
Returns: Promise<Object> - Response with _llm and _signature
createProvider(type, config)
Factory function to create provider instances.
Parameters:
- type (string): Provider type ('openai', 'anthropic', 'llama')
- config (Object): Provider configuration
Returns: BaseLLMProvider instance
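The factory is a thin registry lookup. The sketch below illustrates the pattern, not the package's actual internals; EchoProvider and the registry contents are hypothetical.

```javascript
// Illustrative provider-factory pattern: map a type string to a class,
// instantiate it with the given config, and fail loudly on unknown types.
class EchoProvider {
  constructor(config = {}) { this.config = config; }
  getName() { return 'echo'; }
}

const registry = { echo: EchoProvider };

function createProviderSketch(type, config = {}) {
  const Provider = registry[type];
  if (!Provider) throw new Error(`Unknown provider type: ${type}`);
  return new Provider(config);
}
```

Keeping the registry as plain data is what makes the package "easy to extend": registering a new provider is one entry, with no changes to calling code.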
BaseLLMProvider
Abstract base class for LLM providers.
Methods:
- async complete(messages, options) - Generate completion
- normalizeResponse(rawResponse) - Normalize to standard format
- getName() - Get provider name
- async isAvailable() - Check availability
- getCapabilities() - Get provider capabilities
OpenAIProvider
OpenAI implementation of BaseLLMProvider.
Constructor Options:
- apiKey (string): OpenAI API key (or use OPENAI_API_KEY env)
- model (string): Model name (default: 'gpt-4o-mini')
- temperature (number): Default temperature (default: 0.4)
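The defaults above can be sketched as a plain config-resolution step; this is an illustration using the values documented in this README, and the real constructor may resolve options differently.

```javascript
// Resolve OpenAIProvider-style options: explicit config wins, then the
// environment, then the documented defaults.
function resolveOpenAIConfig(config = {}, env = process.env) {
  return {
    apiKey: config.apiKey ?? env.OPENAI_API_KEY,
    model: config.model ?? 'gpt-4o-mini',
    temperature: config.temperature ?? 0.4,
  };
}
```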
Response Format
All structured responses include:
{
// Schema fields...
field1: value1,
field2: value2,
// LLM metadata
_llm: {
provider: 'openai',
model: 'gpt-4o-mini',
elapsed: 1234, // milliseconds
temperature: 0.4
},
// Signature (if using signedStructuredResponse)
_signature: {
signature: '3044022...',
publicKey: '02abc...',
address: '1XYZ...',
timestamp: '2025-11-20T...',
algorithm: 'BSV-ECDSA-DER',
responseHash: 'a1b2c3...',
agentIdentity: 'AgentName'
}
}

Environment Variables

OPENAI_API_KEY: OpenAI API key
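For local development the key can be exported in the shell before running Node (the value below is a placeholder, not a real key format guarantee):

```shell
export OPENAI_API_KEY="sk-your-key-here"
```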
License
PROPRIETARY - Copyright © 2025 Gregory J. Ward and SmartLedger.Technology
