@prism-lang/llm

LLM provider integrations for the Prism programming language. Supports multiple providers with automatic fallback and confidence extraction.

📚 Full Documentation | 🤖 LLM Guide | 🔧 API Reference

Installation

npm install @prism-lang/llm

Features

  • Multiple Providers: Claude (Anthropic), Gemini (Google), Mock provider
  • Automatic Fallback: Configurable provider priority
  • Mock Provider: For testing without API calls
  • Confidence Integration: Works seamlessly with Prism's confidence system
  • Environment Configuration: Automatic setup from environment variables

Quick Start

import { LLMConfigManager, LLMProviderRegistry, LLMRequest } from '@prism-lang/llm';

// Automatic setup from environment
const providers = LLMConfigManager.createFromEnvironment();
const registry = new LLMProviderRegistry();

// Register providers
for (const [name, provider] of Object.entries(providers)) {
  registry.register(name, provider);
}
registry.setDefault(LLMConfigManager.getDefaultProvider());

// Make a request
const request = new LLMRequest('What is the weather like?');
const response = await registry.complete(request);

console.log(response.content);     // "I cannot check current weather..."
console.log(response.confidence);  // 0.95

Environment Configuration

Set your API keys in environment variables:

export CLAUDE_API_KEY=your-claude-key      # or ANTHROPIC_API_KEY
export GEMINI_API_KEY=your-gemini-key      # or GOOGLE_API_KEY

The library automatically detects available providers and sets the default according to the following priority (a quick way to verify which provider was picked is sketched after this list):

  1. Claude (if CLAUDE_API_KEY or ANTHROPIC_API_KEY is set)
  2. Gemini (if GEMINI_API_KEY or GOOGLE_API_KEY is set)
  3. Mock (always available as fallback)
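
As a minimal sketch of this detection, using only the LLMConfigManager and LLMProviderRegistry calls shown in the Quick Start above (assuming getDefaultProvider() returns the detected provider's registry name):

import { LLMConfigManager, LLMProviderRegistry } from '@prism-lang/llm';

// With no API keys set, the mock provider is the detected default.
const providers = LLMConfigManager.createFromEnvironment();
const registry = new LLMProviderRegistry();

for (const [name, provider] of Object.entries(providers)) {
  registry.register(name, provider);
}

const defaultName = LLMConfigManager.getDefaultProvider();
registry.setDefault(defaultName);

console.log('Detected default provider:', defaultName); // "claude", "gemini", or "mock"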

Usage in Prism

When used within Prism code, the LLM integration is automatic:

// Uses the default provider
response = llm("Analyze this code for security issues")

// Specify a provider
response = llm("Translate to Spanish", { 
  model: "gemini",
  temperature: 0.3 
})

// Confidence is automatically attached
conf = <~ response
console.log("Confidence: " + conf)

Providers

Claude (Anthropic)

import { ClaudeProvider, LLMRequest } from '@prism-lang/llm';

const claude = new ClaudeProvider(apiKey);
const request = new LLMRequest('Your prompt here');
const response = await claude.complete(request);

Gemini (Google)

import { GeminiProvider, LLMRequest } from '@prism-lang/llm';

const gemini = new GeminiProvider(apiKey);
const request = new LLMRequest('Your prompt here');
const response = await gemini.complete(request);

Mock Provider

import { MockLLMProvider } from '@prism-lang/llm';

const mock = new MockLLMProvider();
mock.setMockResponse('Test response', 0.85);
mock.setLatency(100); // Simulate network delay
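
Assuming the mock implements the same complete() method as the other providers (it is registered into the same registry in the Testing section below), exercising it looks like:

import { MockLLMProvider, LLMRequest } from '@prism-lang/llm';

const mock = new MockLLMProvider();
mock.setMockResponse('Test response', 0.85);
mock.setLatency(100); // Simulate network delay

// complete() resolves after the simulated delay with the canned response
const response = await mock.complete(new LLMRequest('Any prompt'));
console.log(response.content);     // "Test response"
console.log(response.confidence);  // 0.85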

API Reference

LLMRequest

new LLMRequest(prompt: string, options?: {
  temperature?: number;
  maxTokens?: number;
  timeout?: number;
  structuredOutput?: boolean;
  includeReasoning?: boolean;
})
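
For example, a request tuned for short, deterministic output (option names taken directly from the signature above; the timeout unit of milliseconds is an assumption):

import { LLMRequest } from '@prism-lang/llm';

const request = new LLMRequest('Summarize this changelog in one sentence.', {
  temperature: 0,          // deterministic output
  maxTokens: 128,          // cap the completion length
  timeout: 10_000,         // milliseconds (assumed unit)
  structuredOutput: true,
  includeReasoning: true,
});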

LLMResponse

interface LLMResponse {
  content: string;
  confidence: number;
  tokensUsed: number;
  model: string;
  metadata?: {
    reasoning?: string;
    usage?: object;
  };
}
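
A typical consumer reads these fields directly. A sketch, assuming the LLMResponse interface is exported and that metadata.reasoning is populated when includeReasoning was requested:

import { LLMProviderRegistry, LLMRequest, LLMResponse } from '@prism-lang/llm';

// Continuing from a registry configured as in the Quick Start
async function ask(registry: LLMProviderRegistry, prompt: string): Promise<LLMResponse> {
  const response = await registry.complete(new LLMRequest(prompt));

  console.log(`${response.model} answered using ${response.tokensUsed} tokens`);
  if (response.confidence < 0.5) {
    console.warn('Low-confidence answer:', response.content);
  }
  if (response.metadata?.reasoning) {
    console.log('Reasoning:', response.metadata.reasoning);
  }
  return response;
}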

Provider Registry

const registry = new LLMProviderRegistry();
registry.register('claude', claudeProvider);
registry.setDefault('claude');

// Use a specific provider
const claudeResponse = await registry.complete(request, 'claude');

// Use the default provider
const defaultResponse = await registry.complete(request);

Integration with @prism-lang/confidence

The LLM package integrates seamlessly with confidence extraction:

import { GeminiProvider, LLMRequest } from '@prism-lang/llm';
import { ConfidenceExtractor } from '@prism-lang/confidence';

const provider = new GeminiProvider(apiKey);
const extractor = new ConfidenceExtractor();

// Get unstructured response and extract confidence
const request = new LLMRequest('What will happen to crypto prices?', {
  structuredOutput: false
});
const response = await provider.complete(request);

// Extract confidence from the response text
const extracted = await extractor.fromResponseAnalysis(response.content);
console.log('Extracted confidence:', extracted.value);
console.log('Hedging indicators:', extracted.metadata?.hedgingIndicators);

Testing

The mock provider is perfect for testing:

import { MockLLMProvider, LLMProviderRegistry } from '@prism-lang/llm';

const registry = new LLMProviderRegistry();

describe('My LLM feature', () => {
  it('should handle responses', async () => {
    const mock = new MockLLMProvider();
    mock.setMockResponse('Expected response', 0.9);

    registry.register('mock', mock);
    registry.setDefault('mock');

    // Your test code here
  });
});

Related Packages

  • @prism-lang/confidence: Confidence extraction utilities, used in the integration example above

License

MIT