
llm-unify v1.2.4

LlmUnify is a library that abstracts connections to major LLM model providers, simplifying their invocation and interoperability.


LlmUnify

LlmUnify is a TypeScript library designed to simplify and standardize interactions with multiple Large Language Model (LLM) providers. By offering a unified interface, it lets you integrate once and switch between providers or models without modifying your code. The library abstracts the complexity of invoking LLMs, supports streaming responses, and can be configured via environment variables or method arguments.

Supported Providers

  • Ollama (via direct API calls)
  • IBM WatsonX (via @ibm-cloud/watsonx-ai and ibm-cloud-sdk-core)
  • AWS Bedrock (via @aws-sdk/client-bedrock-runtime)

Future versions will include support for additional providers.
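
Each provider is addressed with a "provider:model" string when invoking the library, as shown in the Quickstart below. A minimal sketch of that naming convention follows; the WatsonX and Bedrock model ids are placeholders, and the "bedrock" prefix is an assumption inferred from the pattern used by the other providers:

// Provider and model are combined as "provider:model". Only "ollama" and "watsonx"
// appear in the examples below; the "bedrock" prefix is assumed here.
const ollamaTarget = "ollama:llama3.1"        // Ollama model "llama3.1"
const watsonxTarget = "watsonx:<model-id>"    // IBM WatsonX, placeholder model id
const bedrockTarget = "bedrock:<model-id>"    // AWS Bedrock, assumed prefix, placeholder model id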

Installation

Install the library from npm:

npm install llm-unify

Quickstart

Configuration and Authentication

LlmUnify retrieves provider-specific credentials and configuration from environment variables, with the option to override them using method arguments.

Example .env configuration:

LLM_UNIFY_OLLAMA_HOST=your_ollama_host
LLM_UNIFY_WATSONX_HOST=your_watsonx_endpoint
LLM_UNIFY_WATSONX_API_KEY=your_watsonx_apikey
LLM_UNIFY_WATSONX_PROJECT_ID=your_watsonx_projectid

Minimal Example

import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

// Load the .env file where LLM_UNIFY_OLLAMA_HOST is configured
dotenv.config()

async function generate() {
    // Define options for text generation
    const options = new LlmOptions({
        temperature: 0.7,
        prompt: "Write a motivational poem:",
    })

    // Generate a response, specifying the provider and model as "provider:model".
    // To call WatsonX instead, change "ollama" to "watsonx" and use a WatsonX model name.
    const result = await LlmUnify.generate(
        "ollama:llama3.1",
        options,
    )
    console.log(result.generated_text)
}

generate()
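
Because the call shape is identical across providers, switching to WatsonX only changes the "provider:model" string. A minimal sketch, assuming the same LlmUnify.generate API as above; "<watsonx-model-id>" is a placeholder for a model available in your WatsonX project:

import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

// Load the .env file where the LLM_UNIFY_WATSONX_* variables are configured
dotenv.config()

async function generateWithWatsonx() {
    const options = new LlmOptions({ prompt: "Write a motivational poem:" })
    // Identical call to the Ollama example; only the provider and model change.
    const result = await LlmUnify.generate("watsonx:<watsonx-model-id>", options)
    console.log(result.generated_text)
}

generateWithWatsonx()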

Reusing Connectors

For repeated calls to the same provider, you can use a reusable connector:

import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

// Load the .env file where LLM_UNIFY_OLLAMA_HOST is configured
dotenv.config()

async function generateStream() {
    // Create a connector for a specific provider.
    // To call WatsonX models instead, change "ollama" to "watsonx".
    const connector = LlmUnify.getConnector("ollama")

    // Define options for text generation
    const options = new LlmOptions({ prompt: "List three ways to stay productive:" })

    // Generate a response in streaming mode, replacing "llama3.1" with the model you want to use
    for await (const response of connector.generateStream("llama3.1", options)) {
        console.log(response.generated_text)
    }
}

generateStream()
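
Provider calls go over the network, so application code will usually want to handle failures. A minimal sketch that wraps the generate call from the Quickstart in a try/catch; the shape of the thrown error is provider-specific and not documented here, so it is handled generically:

import { LlmOptions, LlmUnify } from 'llm-unify'
import * as dotenv from 'dotenv'

dotenv.config()

async function generateSafely() {
    const options = new LlmOptions({ prompt: "List three ways to stay productive:" })
    try {
        const result = await LlmUnify.generate("ollama:llama3.1", options)
        console.log(result.generated_text)
    } catch (err) {
        // Log and rethrow; the error type depends on the underlying provider SDK.
        console.error("LLM call failed:", err)
        throw err
    }
}

generateSafely()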