llm-unify
v1.2.4
LlmUnify is a library that abstracts connections to major LLM model providers, simplifying their invocation and interoperability.
LlmUnify
LlmUnify is a TypeScript library designed to simplify and standardize interactions with multiple Large Language Model (LLM) providers. By offering a unified interface, it enables seamless integration and lets you switch between providers or models without modifying your code. The library abstracts the complexity of invoking LLMs, supports streaming responses, and can be configured via environment variables or method arguments.
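The unified-interface idea can be illustrated with a small sketch. This is not the library's actual internals, just the general pattern: every provider connector implements the same contract, so calling code depends only on the interface and providers become interchangeable.

```typescript
// Illustrative sketch of the unified-interface pattern (not llm-unify's internals).
interface GenerateOptions {
  prompt: string;
  temperature?: number;
}

interface GenerateResult {
  generated_text: string;
}

// Every provider connector implements the same contract...
interface Connector {
  generate(model: string, options: GenerateOptions): Promise<GenerateResult>;
}

// ...so one provider can be swapped for another without touching calling code.
class EchoConnector implements Connector {
  async generate(model: string, options: GenerateOptions): Promise<GenerateResult> {
    return { generated_text: `[${model}] ${options.prompt}` };
  }
}

const registry: Record<string, Connector> = { echo: new EchoConnector() };

async function generateWith(
  provider: string,
  model: string,
  options: GenerateOptions,
): Promise<GenerateResult> {
  const connector = registry[provider];
  if (!connector) throw new Error(`Unknown provider: ${provider}`);
  return connector.generate(model, options);
}
```

Adding a new provider then means registering one more `Connector` implementation, with no change to the call sites.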
Supported Providers
- Ollama (via direct API calls)
- IBM WatsonX (via `@ibm-cloud/watsonx-ai` and `ibm-cloud-sdk-core`)
- AWS Bedrock (via `@aws-sdk/client-bedrock-runtime`)
Future versions will include support for additional providers.
Installation
To install the library from npm, run:

```shell
npm install llm-unify
```

Quickstart
Configuration and Authentication
LlmUnify retrieves provider-specific credentials and configuration from environment variables, with the option to override them using method arguments.
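The precedence rule (an explicit argument overrides the corresponding environment variable) can be sketched as follows. The helper name is illustrative, not part of the library's API:

```typescript
// Illustrative: resolve a setting from an explicit argument,
// falling back to an environment variable.
function resolveSetting(explicit: string | undefined, envVar: string): string {
  const value = explicit ?? process.env[envVar];
  if (value === undefined) {
    throw new Error(`Missing configuration: pass it explicitly or set ${envVar}`);
  }
  return value;
}

// The explicit argument wins over the environment:
process.env.LLM_UNIFY_OLLAMA_HOST = "http://env-host:11434";
const host = resolveSetting("http://arg-host:11434", "LLM_UNIFY_OLLAMA_HOST");
// host === "http://arg-host:11434"
```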
Example `.env` configuration:

```shell
LLM_UNIFY_OLLAMA_HOST=your_ollama_host
LLM_UNIFY_WATSONX_HOST=your_watsonx_endpoint
LLM_UNIFY_WATSONX_API_KEY=your_watsonx_apikey
LLM_UNIFY_WATSONX_PROJECT_ID=your_watsonx_projectid
```

Minimal Example
```typescript
import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

// Load the .env variables, including LLM_UNIFY_OLLAMA_HOST
dotenv.config()

async function generate() {
  // Define options for text generation
  const options = new LlmOptions({
    temperature: 0.7,
    prompt: "Write a motivational poem:",
  })

  // Generate a response, specifying provider and model separated by ":".
  // To call WatsonX instead, use "watsonx:" followed by a WatsonX model name.
  const result = await LlmUnify.generate(
    "ollama:llama3.1",
    options,
  )

  console.log(result.generated_text)
}

generate()
```

Reusing Connectors
For repeated calls to the same provider, you can use a reusable connector:
```typescript
import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

// Load the .env variables, including LLM_UNIFY_OLLAMA_HOST
dotenv.config()

async function generateStream() {
  // Create a connector for a specific provider ("watsonx" to call WatsonX models)
  const connector = LlmUnify.getConnector("ollama")

  // Define options for text generation
  const options = new LlmOptions({ prompt: "List three ways to stay productive:" })

  // Generate a response in streaming mode; replace "llama3.1" with the desired model name
  for await (const response of connector.generateStream("llama3.1", options)) {
    console.log(response.generated_text)
  }
}

generateStream()
```
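The `provider:model` identifier passed to `LlmUnify.generate` can be understood as a provider prefix and a model name joined by the first `:`. The helper below is illustrative only (the library handles this parsing internally), and the WatsonX model name shown is just an example:

```typescript
// Illustrative parser for "provider:model" identifiers such as "ollama:llama3.1".
function parseModelId(id: string): { provider: string; model: string } {
  const separator = id.indexOf(":");
  if (separator === -1) {
    throw new Error(`Expected "provider:model", got "${id}"`);
  }
  return {
    provider: id.slice(0, separator),
    // Keep everything after the first ":" so model names may themselves contain colons.
    model: id.slice(separator + 1),
  };
}

console.log(parseModelId("ollama:llama3.1"));
// → provider "ollama", model "llama3.1"
console.log(parseModelId("watsonx:ibm/granite-13b-chat-v2"));
```

Switching providers is therefore a matter of changing the prefix (and supplying that provider's credentials), while the rest of the call stays the same.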
