
@orchard9ai/comm9-api-client v1.0.1

@orchard9ai/comm9-api-client

TypeScript client for comm9 LLM routing service with full OpenAI compatibility and provider routing.

Features

  • 🔄 Provider Routing - Route requests to specific LLM providers (Ollama, vLLM)
  • 🚀 OpenAI Compatible - Drop-in replacement for OpenAI API clients
  • 📡 Streaming Support - Real-time chat completions with Server-Sent Events
  • 🔧 TypeScript First - Fully typed with auto-generated types from OpenAPI spec
  • ⚛️ React Query Ready - Built-in hooks for React applications
  • 🌐 Universal - Works in Node.js and browser environments

Installation

npm install @orchard9ai/comm9-api-client

For React applications:

npm install @orchard9ai/comm9-api-client @tanstack/react-query

Quick Start

Configuration

import { configure } from '@orchard9ai/comm9-api-client';

// Configure the client
configure({
  baseURL: 'https://your-comm9-instance.com',
  auth: { type: 'bearer', token: 'your-jwt-token' },
  defaultProvider: 'ollama',
});

Basic Usage

import { createChatCompletion, listModels } from '@orchard9ai/comm9-api-client';

// Chat completion
const response = await createChatCompletion({
  model: 'llama3.2',
  provider: 'ollama', // comm9 extension
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  max_tokens: 100,
});

// List available models
const models = await listModels();
console.log(models.data); // Array of available models

React Usage

import { useCreateChatCompletion, useListModels } from '@orchard9ai/comm9-api-client';

function ChatComponent() {
  const { mutate: sendMessage, data, isLoading } = useCreateChatCompletion();
  const { data: models } = useListModels();

  const handleSend = () => {
    sendMessage({
      data: {
        model: 'llama3.2',
        provider: 'ollama',
        messages: [{ role: 'user', content: 'Hello!' }],
      }
    });
  };

  return (
    <div>
      <button onClick={handleSend} disabled={isLoading}>
        Send Message
      </button>
      {data && <p>{data.choices[0].message.content}</p>}
    </div>
  );
}

Streaming

import { createStreamingClient } from '@orchard9ai/comm9-api-client';

const streamingClient = createStreamingClient();

// Stream chat completion
const stream = streamingClient.chatCompletionStream({
  model: 'llama3.2',
  provider: 'ollama',
  messages: [{ role: 'user', content: 'Tell me a story' }],
});

for await (const chunk of stream) {
  console.log(chunk.choices[0].delta.content);
}
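If you want the full response text rather than incremental chunks, the stream above can be collected into a single string. A minimal sketch, assuming each chunk has the shape shown in the loop above; `collectStream` is a hypothetical helper, not part of the client:

```typescript
// Hypothetical helper (not part of the client): accumulate streamed
// delta content into one string. Chunk shape mirrors the example above.
interface StreamChunk {
  choices: { delta: { content?: string } }[];
}

async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    // The final chunk of an SSE stream may carry no content.
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

With the stream from the example above, this would read as `const story = await collectStream(stream);`.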

API Reference

Chat Completions

await createChatCompletion({
  model: 'llama3.2',
  provider: 'ollama', // optional: 'ollama' | 'vllm'
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
  max_tokens: 100,
  temperature: 0.7,
  stream: false, // set to true for streaming
});

Models

// List all models
const allModels = await listModels();

// List models from specific provider
const ollamaModels = await listModels({ provider: 'ollama' });

Embeddings

const embeddings = await createEmbedding({
  model: 'nomic-embed-text',
  provider: 'ollama',
  input: 'Text to embed',
});

Provider Routing

comm9 extends the OpenAI API with a provider field to route requests to specific LLM providers:

// Route to Ollama
await createChatCompletion({
  model: 'llama3.2',
  provider: 'ollama',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Route to vLLM
await createChatCompletion({
  model: 'microsoft/Phi-4-mini-reasoning',
  provider: 'vllm',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Auto-route (uses first healthy provider)
await createChatCompletion({
  model: 'llama3.2',
  // provider omitted - comm9 will auto-route
  messages: [{ role: 'user', content: 'Hello' }],
});

Authentication

JWT Tokens

import { setDefaultAuth } from '@orchard9ai/comm9-api-client';

setDefaultAuth({ type: 'bearer', token: 'your-jwt-token' });

API Keys

setDefaultAuth({ type: 'apikey', token: 'your-api-key' });

Environment Variables

The client automatically uses these environment variables:

COMM9_API_URL=https://your-comm9-instance.com
COMM9_JWT_TOKEN=your-jwt-token
COMM9_API_KEY=your-api-key
COMM9_DEFAULT_PROVIDER=ollama
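If you prefer explicit configuration, the same values can be assembled into configure() options by hand. A sketch under assumptions: `configFromEnv` is a hypothetical helper, and the JWT-over-API-key precedence shown is an assumption, not documented client behavior:

```typescript
// Hypothetical helper: derive configure() options from the documented
// environment variables. JWT-over-API-key precedence is an assumption.
interface Comm9Env {
  COMM9_API_URL?: string;
  COMM9_JWT_TOKEN?: string;
  COMM9_API_KEY?: string;
  COMM9_DEFAULT_PROVIDER?: string;
}

function configFromEnv(env: Comm9Env) {
  return {
    baseURL: env.COMM9_API_URL,
    auth: env.COMM9_JWT_TOKEN
      ? { type: 'bearer' as const, token: env.COMM9_JWT_TOKEN }
      : env.COMM9_API_KEY
        ? { type: 'apikey' as const, token: env.COMM9_API_KEY }
        : undefined,
    defaultProvider: env.COMM9_DEFAULT_PROVIDER,
  };
}

// e.g. configure(configFromEnv(process.env)) -- though the client
// already reads these variables automatically.
```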

Error Handling

The client provides enhanced error handling with comm9-specific error types:

try {
  await createChatCompletion({ /* ... */ });
} catch (error) {
  if (error.name === 'Comm9APIError') {
    console.log(`Error type: ${error.type}`);
    console.log(`Message: ${error.message}`);
    console.log(`Param: ${error.param}`);
    console.log(`Status: ${error.status}`);
  }
}
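One way this error handling could pair with provider routing is a retry against the other provider. A hedged sketch: `withProviderFallback` is not part of the client, and in practice you would likely retry only on specific error types or statuses rather than on any failure:

```typescript
// Hypothetical wrapper (not part of the client): try the preferred
// provider first, and on failure retry once against a fallback.
type Provider = 'ollama' | 'vllm';

async function withProviderFallback<T>(
  request: (provider: Provider) => Promise<T>,
  primary: Provider = 'ollama',
  fallback: Provider = 'vllm',
): Promise<T> {
  try {
    return await request(primary);
  } catch {
    // A real app might inspect the error first, e.g. only retry
    // when error.name === 'Comm9APIError' with a retryable status.
    return await request(fallback);
  }
}

// Usage sketch: withProviderFallback((provider) =>
//   createChatCompletion({ model: 'llama3.2', provider, messages: [...] }));
```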

Type Safety

All request and response types are auto-generated from the OpenAI OpenAPI specification, extended with comm9-specific fields:

import type {
  CreateChatCompletionRequest,
  CreateChatCompletionResponse,
  ChatMessage,
  Model,
} from '@orchard9ai/comm9-api-client';

License

MIT

Support