@renderify/llm
LLM provider implementations and registry for Renderify.
@renderify/llm provides built-in interpreters for OpenAI, Anthropic, Google, Ollama, and LM Studio, plus a provider registry API for custom providers.
Install
pnpm add @renderify/llm
# or
npm i @renderify/llm
Built-in Providers
- OpenAILLMInterpreter (provider: "openai")
- AnthropicLLMInterpreter (provider: "anthropic")
- GoogleLLMInterpreter (provider: "google")
- OllamaLLMInterpreter (provider: "ollama")
- LMStudioLLMInterpreter (provider: "lmstudio")
Factory API
- createLLMInterpreter({ provider, providerOptions })
- LLMProviderRegistry
- createDefaultLLMProviderRegistry()
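These entry points compose. A minimal sketch, assuming createDefaultLLMProviderRegistry() returns a registry pre-loaded with the built-in providers and that a registry can be passed alongside providerOptions (the registry option itself is shown in the Custom Provider section below):
import { createDefaultLLMProviderRegistry, createLLMInterpreter } from "@renderify/llm";
// Assumption: the default registry already contains the built-in providers.
const registry = createDefaultLLMProviderRegistry();
// Resolve a built-in provider through an explicit registry.
const llm = createLLMInterpreter({
  provider: "openai",
  providerOptions: {
    apiKey: process.env.RENDERIFY_LLM_API_KEY,
    model: "gpt-4o-mini",
  },
  registry,
});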
Quick Start
import { createLLMInterpreter } from "@renderify/llm";
const llm = createLLMInterpreter({
  provider: "openai",
  providerOptions: {
    apiKey: process.env.RENDERIFY_LLM_API_KEY,
    model: "gpt-4o-mini",
  },
});
const response = await llm.generateResponse({
  prompt: "return a simple RuntimePlan JSON",
  context: {},
});
console.log(response.text);
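Since the prompt asks for RuntimePlan JSON, a caller will usually parse response.text. A small sketch, relying only on the { text, tokensUsed } response shape shown above; model output is not guaranteed to be valid JSON:
let plan: unknown;
try {
  // response.text is expected to contain a JSON document.
  plan = JSON.parse(response.text);
} catch {
  // Fall back gracefully when the model returns non-JSON text.
  plan = null;
}
Local Models Example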
import { createLLMInterpreter } from "@renderify/llm";
const ollama = createLLMInterpreter({
  provider: "ollama",
  providerOptions: {
    baseUrl: "http://127.0.0.1:11434",
    model: "qwen2.5-coder:7b",
  },
});
const lmstudio = createLLMInterpreter({
  provider: "lmstudio",
  providerOptions: {
    baseUrl: "http://127.0.0.1:1234/v1",
    model: "qwen2.5-coder-7b-instruct",
  },
});
Reliability Controls
Each provider supports request reliability options through providerOptions.reliability:
const llm = createLLMInterpreter({
  provider: "openai",
  providerOptions: {
    apiKey: process.env.RENDERIFY_LLM_API_KEY,
    reliability: {
      maxRetries: 2,
      retryBaseDelayMs: 250,
      retryMaxDelayMs: 2000,
      retryJitterMs: 50,
      retryStatusCodes: [408, 429, 500, 502, 503, 504],
      retryOnNetworkError: true,
      circuitBreakerFailureThreshold: 5,
      circuitBreakerCooldownMs: 15000,
    },
  },
});
Defaults include bounded retry/backoff and circuit breaking for repeated upstream failures.
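For intuition, the retry options above suggest a standard exponential backoff with jitter, capped at retryMaxDelayMs. The sketch below illustrates how such options typically combine; it is an illustration, not the package's actual internals:
function retryDelayMs(
  attempt: number,
  opts: { retryBaseDelayMs: number; retryMaxDelayMs: number; retryJitterMs: number },
): number {
  // Exponential growth: base, 2*base, 4*base, ... capped at the max delay.
  const backoff = Math.min(opts.retryMaxDelayMs, opts.retryBaseDelayMs * 2 ** attempt);
  // Random jitter spreads out retries from concurrent clients.
  return backoff + Math.random() * opts.retryJitterMs;
}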
Custom Provider
import { LLMProviderRegistry, createLLMInterpreter } from "@renderify/llm";
const registry = new LLMProviderRegistry();
registry.register({
  name: "my-provider",
  create: () => {
    const templates = new Map();
    return {
      configure() {},
      async generateResponse() {
        return { text: "{}", tokensUsed: 0 };
      },
      setPromptTemplate(name, content) {
        templates.set(name, content);
      },
      getPromptTemplate(name) {
        return templates.get(name);
      },
    };
  },
});
const llm = createLLMInterpreter({ provider: "my-provider", registry });
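The registered provider then behaves like a built-in one; reusing the generateResponse call shape from Quick Start:
const res = await llm.generateResponse({ prompt: "ping", context: {} });
console.log(res.text); // "{}" from the stub above
Notes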
- Provider implementations follow the LLMInterpreter interface from @renderify/core.
- Streaming support is available through generateResponseStream() when provided by the selected interpreter (see the sketch below).
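A streaming sketch, assuming generateResponseStream() takes the same request shape as generateResponse() and yields chunks with a text field; check the LLMInterpreter types in @renderify/core for the authoritative shape:
import { createLLMInterpreter } from "@renderify/llm";
const llm = createLLMInterpreter({
  provider: "openai",
  providerOptions: {
    apiKey: process.env.RENDERIFY_LLM_API_KEY,
    model: "gpt-4o-mini",
  },
});
// Feature-detect: not every interpreter implements streaming.
if (typeof llm.generateResponseStream === "function") {
  // Assumption: the stream is an async iterable of incremental text chunks.
  for await (const chunk of llm.generateResponseStream({
    prompt: "return a simple RuntimePlan JSON",
    context: {},
  })) {
    process.stdout.write(chunk.text);
  }
}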
