@contentgrowth/llm-service

Unified LLM Service for Content Growth applications. This package provides a standardized interface for interacting with various LLM providers (OpenAI, Gemini) and supports "Bring Your Own Key" (BYOK) functionality via pluggable configuration.

Installation

npm install @contentgrowth/llm-service

Usage

Basic Usage

The service requires an environment object (typically the env passed to a Cloudflare Worker) to access bindings.

import { LLMService } from '@contentgrowth/llm-service';

// In your Worker
export default {
  async fetch(request, env, ctx) {
    const llmService = new LLMService(env);

    // Chat
    const response = await llmService.chat('Hello, how are you?', 'tenant-id');
    console.log(response.text);

    // Chat Completion (with system prompt)
    const result = await llmService.chatCompletion(
      [{ role: 'user', content: 'Write a poem' }],
      'tenant-id',
      'You are a poetic assistant'
    );
    console.log(result.content);

    // A Worker fetch handler must return a Response
    return new Response(result.content);
  }
};

Configuration & BYOK

The service uses a ConfigManager to determine which LLM provider and API key to use for a given tenant.

Default Behavior (Cloudflare KV + Durable Objects)

By default, the service expects the env object passed to the constructor to contain:

  • TENANT_LLM_CONFIG: A KV Namespace binding.
  • TENANT_DO: A Durable Object Namespace binding.

It uses these to fetch tenant-specific configurations.
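
For orientation, here is a minimal sketch of how those two bindings might be declared in wrangler.toml; the id and class_name values are placeholders for your own project, not something this package provides:

# wrangler.toml (sketch; id and class_name are placeholders)
kv_namespaces = [
  { binding = "TENANT_LLM_CONFIG", id = "<your-kv-namespace-id>" }
]

[durable_objects]
bindings = [
  { name = "TENANT_DO", class_name = "TenantDO" }
]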

Custom Configuration (Pluggable Providers)

If your project stores tenant keys differently (e.g., in a SQL database, environment variables, or a different service), you can implement a custom ConfigProvider.

import { LLMService, ConfigManager, BaseConfigProvider } from '@contentgrowth/llm-service';

// 1. Define your custom provider
class MyDatabaseConfigProvider extends BaseConfigProvider {
  async getConfig(tenantId, env) {
    // Fetch config from your database or other source
    // You can use 'env' here if you need access to bindings
    const apiKey = await getApiKeyFromDB(tenantId);
    
    return {
      provider: 'openai', // or 'gemini'
      apiKey: apiKey,
      models: { 
        default: 'gpt-4o',
        // ... optional overrides
      },
      // Optional capabilities
      capabilities: { chat: true, image: true }
    };
  }
}

// 2. Register the provider at application startup
ConfigManager.setConfigProvider(new MyDatabaseConfigProvider());

// 3. Use LLMService as normal - it will now use your provider
const service = new LLMService(env);

JSON Mode & Structured Outputs

The service supports native JSON mode for OpenAI and Gemini, guaranteeing valid JSON responses without escaping issues.

Basic JSON Mode

const response = await llmService.chatCompletion(
  messages,
  tenantId,
  'You are a helpful assistant. Always respond in JSON.',
  { responseFormat: 'json' }  // ← Enable JSON mode
);

// Response includes auto-parsed JSON
console.log(response.parsedContent); // Already parsed object
console.log(response.content);       // Raw JSON string

JSON Mode with Schema Validation (Structured Outputs)

Define a schema to guarantee the response structure:

const schema = {
  type: 'object',
  properties: {
    answer: { type: 'string' },
    confidence: { type: 'number' },
    sources: { 
      type: 'array', 
      items: { type: 'string' },
      nullable: true 
    }
  },
  required: ['answer', 'confidence']
};

const response = await llmService.chatCompletion(
  messages,
  tenantId,
  systemPrompt,
  {
    responseFormat: 'json_schema',
    responseSchema: schema,
    schemaName: 'question_answer'
  }
);

// Guaranteed to match schema
const { answer, confidence, sources } = response.parsedContent;

Convenience Method

For JSON-only responses, use chatCompletionJson() to get parsed objects directly:

// Returns parsed object directly (not response wrapper)
const data = await llmService.chatCompletionJson(
  messages,
  tenantId,
  systemPrompt,
  schema  // optional
);

console.log(data.answer);      // Direct access to fields
console.log(data.confidence);  // No .parsedContent needed

Flexible Call Signatures

The chatCompletion() method intelligently detects whether you're passing tools, options, or both:

// All of these work (llmService is an LLMService instance):
await llmService.chatCompletion(messages, tenant, prompt);
await llmService.chatCompletion(messages, tenant, prompt, tools);
await llmService.chatCompletion(messages, tenant, prompt, { responseFormat: 'json' });
await llmService.chatCompletion(messages, tenant, prompt, tools, { responseFormat: 'json' });
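
For context, a minimal sketch of what a tools array might look like, assuming the service accepts OpenAI-style function-tool definitions (the get_weather tool is purely illustrative, not part of this package):

// Hypothetical tool definition in OpenAI function-calling format
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather', // illustrative name
      description: 'Look up the current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string' }
        },
        required: ['city']
      }
    }
  }
];

await llmService.chatCompletion(messages, tenant, prompt, tools);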

Supported Options

  • responseFormat: 'text' (default), 'json', or 'json_schema'
  • responseSchema: JSON schema object (required for json_schema mode)
  • schemaName: Name for the schema (optional, for json_schema mode)
  • strictSchema: Enforce strict validation (default: true)
  • autoParse: Auto-parse JSON responses (default: true)
  • temperature: Override temperature
  • maxTokens: Override max tokens
  • tier: Model tier ('default', 'fast', 'smart')
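
Putting a few of these together, a sketch of a combined options object (the values are illustrative):

// Combining several supported options
const response = await llmService.chatCompletion(messages, tenantId, systemPrompt, {
  responseFormat: 'json',
  temperature: 0.2,  // override temperature
  maxTokens: 512,    // override max tokens
  tier: 'fast'       // use the 'fast' model tier
});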

Testing

Running JSON Mode Tests

  1. Create a .env file (copy from .env.example):

    cp .env.example .env

  2. Add your API keys to .env:

    LLM_PROVIDER=openai  # or gemini
    OPENAI_API_KEY=sk-your-key-here
    GEMINI_API_KEY=your-gemini-key-here

  3. Run tests:

    npm run test:json      # Run comprehensive test suite
    npm run examples:json  # Run interactive examples

See TESTING.md for detailed testing documentation.

Publishing

To publish this package to npm:

  1. Update the version: Bump the version field in package.json.

  2. Log in to npm:

    npm login

  3. Publish:

    # For public access
    npm publish --access public

Development

Directory Structure

  • src/llm-service.js: Main service class.
  • src/llm/config-manager.js: Configuration resolution logic.
  • src/llm/config-provider.js: Abstract provider interfaces.
  • src/llm/providers/: Individual LLM provider implementations.

Testing

Run the local test script to verify imports and configuration:

node test-custom-config.js