# 190proof
A unified interface for interacting with multiple AI providers including OpenAI, Anthropic, Google, Groq, and AWS Bedrock. This package provides a consistent API for making requests to different LLM providers while handling retries, streaming, and multimodal inputs.
## Features
A fully local, unified interface across multiple AI providers that includes:
- 🛠️ Consistent function/tool calling across all providers
- 💬 Consistent message alternation & system instructions
- 🖼️ Image format & size normalization
- 🔄 Automatic retries with configurable attempts
- 📡 Streaming by default
- ☁️ Cloud service providers supported (Azure, AWS Bedrock)
## Installation

```bash
npm install 190proof
```

## Usage
### Basic Example

```typescript
import { callWithRetries } from "190proof";
import { GPTModel, GenericPayload } from "190proof/interfaces";

const payload: GenericPayload = {
  model: GPTModel.GPT4O_MINI,
  messages: [
    {
      role: "user",
      content: "Tell me a joke.",
    },
  ],
};

const response = await callWithRetries("my-request-id", payload);
console.log(response.content);
```

### Using Different Providers
```typescript
import { callWithRetries } from "190proof";
import {
  ClaudeModel,
  GeminiModel,
  GroqModel,
  GenericPayload,
} from "190proof/interfaces";

// Anthropic
const claudePayload: GenericPayload = {
  model: ClaudeModel.SONNET_4,
  messages: [{ role: "user", content: "Hello!" }],
};

// Google
const geminiPayload: GenericPayload = {
  model: GeminiModel.GEMINI_2_0_FLASH,
  messages: [{ role: "user", content: "Hello!" }],
};

// Groq
const groqPayload: GenericPayload = {
  model: GroqModel.LLAMA_3_70B_8192,
  messages: [{ role: "user", content: "Hello!" }],
};

const response = await callWithRetries("request-id", claudePayload);
```

### With Function Calling
```typescript
const payload: GenericPayload = {
  model: GPTModel.GPT4O,
  messages: [
    {
      role: "user",
      content: "What is the capital of France?",
    },
  ],
  functions: [
    {
      name: "get_country_capital",
      description: "Get the capital of a given country",
      parameters: {
        type: "object",
        properties: {
          country_name: {
            type: "string",
            description: "The name of the country",
          },
        },
        required: ["country_name"],
      },
    },
  ],
};

const response = await callWithRetries("function-call-example", payload);
// response.function_call contains { name: string, arguments: Record<string, any> }
```
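`function_call` is `null` when the model answers in plain text instead (see `ParsedResponseMessage` in the API reference below), so a typical caller branches on it. The dispatch below is only an illustrative sketch:

```typescript
if (response.function_call) {
  // The model chose to call the declared function.
  const { name, arguments: args } = response.function_call;
  if (name === "get_country_capital") {
    console.log(`Model asked for the capital of: ${args.country_name}`);
  }
} else {
  // The model answered directly with text.
  console.log(response.content);
}
```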
### With Images

```typescript
const payload: GenericPayload = {
  model: ClaudeModel.SONNET_4,
  messages: [
    {
      role: "user",
      content: "What's in this image?",
      files: [
        {
          mimeType: "image/jpeg",
          url: "https://example.com/image.jpg",
        },
      ],
    },
  ],
};

const response = await callWithRetries("image-example", payload);
```

### With System Messages
```typescript
const payload: GenericPayload = {
  model: GeminiModel.GEMINI_2_0_FLASH,
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant that speaks in a friendly tone.",
    },
    {
      role: "user",
      content: "Tell me about yourself.",
    },
  ],
};

const response = await callWithRetries("system-message-example", payload);
```

## Supported Models
### OpenAI Models

- `gpt-3.5-turbo-0613`
- `gpt-3.5-turbo-16k-0613`
- `gpt-3.5-turbo-0125`
- `gpt-4-1106-preview`
- `gpt-4-0125-preview`
- `gpt-4-turbo-2024-04-09`
- `gpt-4o`
- `gpt-4o-mini`
- `o1-preview`
- `o1-mini`
- `o3-mini`
- `gpt-4.1`
- `gpt-4.1-mini`
- `gpt-4.1-nano`
- `gpt-5`
- `gpt-5-mini`

### Anthropic Models

- `claude-3-haiku-20240307`
- `claude-3-sonnet-20240229`
- `claude-3-opus-20240229`
- `claude-3-5-haiku-20241022`
- `claude-3-5-sonnet-20241022`
- `claude-sonnet-4-20250514`
- `claude-opus-4-20250514`
- `claude-opus-4-1`
- `claude-haiku-4-5`
- `claude-sonnet-4-5`
- `claude-opus-4-5`

### Google Models

- `gemini-1.5-pro-latest`
- `gemini-exp-1206`
- `gemini-2.0-flash`
- `gemini-2.0-flash-exp-image-generation`
- `gemini-2.0-flash-thinking-exp`
- `gemini-2.0-flash-thinking-exp-01-21`
- `gemini-2.5-flash-preview-04-17`
- `gemini-3-flash-preview`

### Groq Models

- `llama3-70b-8192`
- `deepseek-r1-distill-llama-70b`
## Environment Variables
Set the following environment variables for the providers you want to use:
```bash
# OpenAI
OPENAI_API_KEY=your-openai-api-key

# Anthropic
ANTHROPIC_API_KEY=your-anthropic-api-key

# Google
GEMINI_API_KEY=your-gemini-api-key

# Groq
GROQ_API_KEY=your-groq-api-key

# AWS Bedrock (for Anthropic via Bedrock)
AWS_ACCESS_KEY_ID=your-aws-access-key
AWS_SECRET_ACCESS_KEY=your-aws-secret-key
```

## API Reference
### `callWithRetries(identifier, payload, config?, retries?, chunkTimeoutMs?)`
Main function to make requests to any supported AI provider.
#### Parameters

- `identifier`: `string | string[]` - Unique identifier for the request (used for logging)
- `payload`: `GenericPayload` - Request payload containing model, messages, and optional functions
- `config`: `OpenAIConfig | AnthropicAIConfig` - Optional configuration for the specific provider
- `retries`: `number` - Number of retry attempts (default: 5)
- `chunkTimeoutMs`: `number` - Timeout for streaming chunks in ms (default: 15000)
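For example, a call that passes explicit values for the optional arguments (the numbers are only illustrative, and `payload` is any `GenericPayload` like the ones above; passing `undefined` for `config` is assumed to fall back to the environment-variable defaults):

```typescript
const response = await callWithRetries(
  "custom-retry-example", // identifier
  payload,                // GenericPayload
  undefined,              // config: keep provider defaults from environment variables
  2,                      // retries (default: 5)
  5000                    // chunkTimeoutMs (default: 15000)
);
```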
#### Returns

`Promise<ParsedResponseMessage>`:
```typescript
interface ParsedResponseMessage {
  role: "assistant";
  content: string | null;
  function_call: FunctionCall | null;
  files: File[]; // For models that return files (e.g., image generation)
}
```

## Configuration Options
### OpenAI Config

```typescript
interface OpenAIConfig {
  service: "azure" | "openai";
  apiKey: string;
  baseUrl: string;
  orgId?: string;
  modelConfigMap?: Record<
    GPTModel,
    {
      resource: string;
      deployment: string;
      apiVersion: string;
      apiKey: string;
      endpoint?: string;
    }
  >;
}
```
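The config object is passed as the third argument to `callWithRetries`. A minimal sketch for the plain OpenAI service (the `baseUrl` value here is an assumption; point it at a proxy or Azure-compatible endpoint if you use one):

```typescript
const openAIConfig: OpenAIConfig = {
  service: "openai",
  apiKey: process.env.OPENAI_API_KEY!,   // or a key you manage yourself
  baseUrl: "https://api.openai.com/v1",  // assumed default endpoint
};

const response = await callWithRetries("configured-request", payload, openAIConfig);
```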
### Anthropic Config

```typescript
interface AnthropicAIConfig {
  service: "anthropic" | "bedrock";
}
```
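To route Anthropic models through AWS Bedrock instead of the Anthropic API, only the service switch appears to be needed, with credentials taken from the `AWS_*` environment variables above; a sketch:

```typescript
// Assumption: AWS credentials are read from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY.
const bedrockConfig: AnthropicAIConfig = { service: "bedrock" };

const response = await callWithRetries("bedrock-example", claudePayload, bedrockConfig);
```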
## License

ISC
