# @contentgrowth/llm-service
Unified LLM Service for Content Growth applications. This package provides a standardized interface for interacting with various LLM providers (OpenAI, Gemini) and supports "Bring Your Own Key" (BYOK) functionality via pluggable configuration.
## Installation

```bash
npm install @contentgrowth/llm-service
```

## Usage
### Basic Usage
The service requires an environment object (usually from Cloudflare Workers) to access bindings.
```js
import { LLMService } from '@contentgrowth/llm-service';

// In your Worker
export default {
  async fetch(request, env, ctx) {
    const llmService = new LLMService(env);

    // Chat
    const response = await llmService.chat('Hello, how are you?', 'tenant-id');
    console.log(response.text);

    // Chat Completion (with system prompt)
    const result = await llmService.chatCompletion(
      [{ role: 'user', content: 'Write a poem' }],
      'tenant-id',
      'You are a poetic assistant'
    );
    console.log(result.content);

    // A Worker fetch handler must return a Response
    return new Response(result.content);
  }
};
```

## Configuration & BYOK
The service uses a `ConfigManager` to determine which LLM provider and API key to use for a given tenant.
### Default Behavior (Cloudflare KV + Durable Objects)
By default, the service expects the `env` object passed to the constructor to contain:

- `TENANT_LLM_CONFIG`: a KV Namespace binding.
- `TENANT_DO`: a Durable Object Namespace binding.
It uses these to fetch tenant-specific configurations.
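For illustration, a tenant record in the KV namespace could be seeded as in the sketch below. The `tenant:<id>` key format and the model name are assumptions of this sketch; the value's fields mirror the config object that `getConfig()` returns in the next section.

```js
// Hypothetical: writing a tenant config into the KV binding.
// Key format and values are illustrative, not prescribed by the package.
await env.TENANT_LLM_CONFIG.put(
  'tenant:acme-co',
  JSON.stringify({
    provider: 'gemini',
    apiKey: 'your-gemini-key',
    models: { default: 'gemini-1.5-pro' },
    capabilities: { chat: true, image: false }
  })
);
```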
### Custom Configuration (Pluggable Providers)
If your project stores tenant keys differently (e.g., in a SQL database, environment variables, or a different service), you can implement a custom ConfigProvider.
```js
import { LLMService, ConfigManager, BaseConfigProvider } from '@contentgrowth/llm-service';

// 1. Define your custom provider
class MyDatabaseConfigProvider extends BaseConfigProvider {
  async getConfig(tenantId, env) {
    // Fetch config from your database or other source.
    // You can use 'env' here if you need access to bindings.
    const apiKey = await getApiKeyFromDB(tenantId);
    return {
      provider: 'openai', // or 'gemini'
      apiKey: apiKey,
      models: {
        default: 'gpt-4o',
        // ... optional overrides
      },
      // Optional capabilities
      capabilities: { chat: true, image: true }
    };
  }
}

// 2. Register the provider at application startup
ConfigManager.setConfigProvider(new MyDatabaseConfigProvider());

// 3. Use LLMService as normal - it will now use your provider
const service = new LLMService(env);
```
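As a lighter-weight variant of the same pattern, a provider can resolve keys from Worker environment variables instead of a database — a minimal sketch, assuming `OPENAI_API_KEY` is configured as a secret on the Worker:

```js
// Sketch: a ConfigProvider backed by environment variables.
// Every tenant shares one key in this simplified setup.
class EnvVarConfigProvider extends BaseConfigProvider {
  async getConfig(tenantId, env) {
    return {
      provider: 'openai',
      apiKey: env.OPENAI_API_KEY, // assumed Worker secret
      models: { default: 'gpt-4o' }
    };
  }
}

ConfigManager.setConfigProvider(new EnvVarConfigProvider());
```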
## JSON Mode & Structured Outputs

The service supports native JSON mode for OpenAI and Gemini, guaranteeing valid JSON responses without escaping issues.
### Basic JSON Mode
```js
const response = await llmService.chatCompletion(
  messages,
  tenantId,
  'You are a helpful assistant. Always respond in JSON.',
  { responseFormat: 'json' } // ← Enable JSON mode
);

// Response includes auto-parsed JSON
console.log(response.parsedContent); // Already parsed object
console.log(response.content);       // Raw JSON string
```

### JSON Mode with Schema Validation (Structured Outputs)
Define a schema to guarantee the response structure:
```js
const schema = {
  type: 'object',
  properties: {
    answer: { type: 'string' },
    confidence: { type: 'number' },
    sources: {
      type: 'array',
      items: { type: 'string' },
      nullable: true
    }
  },
  required: ['answer', 'confidence']
};

const response = await llmService.chatCompletion(
  messages,
  tenantId,
  systemPrompt,
  {
    responseFormat: 'json_schema',
    responseSchema: schema,
    schemaName: 'question_answer'
  }
);

// Guaranteed to match schema
const { answer, confidence, sources } = response.parsedContent;
```

### Convenience Method
For JSON-only responses, use `chatCompletionJson()` to get parsed objects directly:
```js
// Returns parsed object directly (not a response wrapper)
const data = await llmService.chatCompletionJson(
  messages,
  tenantId,
  systemPrompt,
  schema // optional
);

console.log(data.answer);     // Direct access to fields
console.log(data.confidence); // No .parsedContent needed
```

### Flexible Call Signatures
The `chatCompletion()` method intelligently detects whether you're passing tools, options, or both:
```js
// All these work!
await llmService.chatCompletion(messages, tenant, prompt);
await llmService.chatCompletion(messages, tenant, prompt, tools);
await llmService.chatCompletion(messages, tenant, prompt, { responseFormat: 'json' });
await llmService.chatCompletion(messages, tenant, prompt, tools, { responseFormat: 'json' });
```

### Supported Options
- `responseFormat`: `'text'` (default), `'json'`, or `'json_schema'`
- `responseSchema`: JSON schema object (required for `json_schema` mode)
- `schemaName`: Name for the schema (optional, for `json_schema` mode)
- `strictSchema`: Enforce strict validation (default: `true`)
- `autoParse`: Auto-parse JSON responses (default: `true`)
- `temperature`: Override temperature
- `maxTokens`: Override max tokens
- `tier`: Model tier (`'default'`, `'fast'`, `'smart'`)
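For instance, the tuning options above can be combined in a single call — a sketch with illustrative values:

```js
// Sketch: combining several documented options in one call.
// The specific values are illustrative, not recommendations.
const response = await llmService.chatCompletion(
  messages,
  tenantId,
  systemPrompt,
  {
    responseFormat: 'json',
    temperature: 0.2, // lower temperature for more deterministic output
    maxTokens: 512,   // cap the response length
    tier: 'fast'      // route to the tenant's fast-tier model
  }
);
```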
## Testing

### Running JSON Mode Tests
1. Create a `.env` file (copy from `.env.example`):

   ```bash
   cp .env.example .env
   ```

2. Add your API keys to `.env`:

   ```bash
   LLM_PROVIDER=openai # or gemini
   OPENAI_API_KEY=sk-your-key-here
   GEMINI_API_KEY=your-gemini-key-here
   ```

3. Run tests:

   ```bash
   npm run test:json     # Run comprehensive test suite
   npm run examples:json # Run interactive examples
   ```
See TESTING.md for detailed testing documentation.
## Publishing
To publish this package to NPM:
1. **Update Version**: Update the `version` in `package.json`.
2. **Login to NPM**:

   ```bash
   npm login
   ```

3. **Publish**:

   ```bash
   # For public access
   npm publish --access public
   ```
## Development

### Directory Structure
- `src/llm-service.js`: Main service class.
- `src/llm/config-manager.js`: Configuration resolution logic.
- `src/llm/config-provider.js`: Abstract provider interfaces.
- `src/llm/providers/`: Individual LLM provider implementations.
### Testing
Run the local test script to verify imports and configuration:
```bash
node test-custom-config.js
```