
@tanvoid0/bot-client

v1.3.3

Published

A powerful, configurable npm package for easily switching between different AI models and providers. Zero-config setup for local providers (Ollama, LM Studio), with complete API integration for cloud providers (OpenAI, Anthropic, Gemini).

Readme

Bot Client


Multi-provider AI client: OpenAI, Anthropic, Gemini, Ollama, LM Studio. Zero-config for local; API keys for cloud. Includes Ollama API + CLI (pull, list, rm, show, ps, run) and an npx CLI for models and API keys.


Install

npm install @tanvoid0/bot-client

Quick start

import { aiFactory } from '@tanvoid0/bot-client';

const text = await aiFactory.generate('Say hello in one sentence.', { maxTokens: 100 });
console.log(text);

With Ollama running locally, this works without API keys. For cloud providers, set env vars (see Environment).


npx CLI

Manage Ollama models and API keys from the terminal:

npx @tanvoid0/bot-client help
npx @tanvoid0/bot-client ollama list
npx @tanvoid0/bot-client ollama pull llama3.1:8b
npx @tanvoid0/bot-client keys list
npx @tanvoid0/bot-client keys set BOT_CLIENT_OPENAI_KEY sk-...

| Command | Description |
|---------|-------------|
| ollama list / ollama ls | List models |
| ollama pull &lt;model&gt; | Pull a model |
| ollama rm &lt;model&gt; | Remove a model |
| ollama show &lt;model&gt; | Show model info |
| ollama ps | List running models |
| ollama run &lt;model&gt; [prompt] | Run a model (optional prompt) |

Uses the local Ollama API when the server is up; falls back to the ollama CLI.
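The API-first-with-CLI-fallback behavior can be sketched as follows; the helper names here are illustrative, not the package's actual internals:

```typescript
// Sketch of an API-first call that falls back to a CLI invocation when the
// HTTP API is unreachable. Both runners are hypothetical stand-ins.
type Runner = () => Promise<string>;

async function apiFirst(api: Runner, cli: Runner): Promise<string> {
  try {
    // Try the local HTTP API first (e.g. http://localhost:11434).
    return await api();
  } catch {
    // Server down or unreachable: fall back to shelling out to the CLI.
    return await cli();
  }
}
```

The same shape applies to every Ollama subcommand: the HTTP call is preferred because it avoids spawning a process, and the CLI path only runs when the request rejects.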

The keys commands read and write .env in the current directory.

| Command | Description |
|---------|-------------|
| keys list / keys ls | List known API keys (masked) |
| keys get &lt;key&gt; [--show] | Get a value (masked unless --show) |
| keys set &lt;key&gt; &lt;value&gt; | Set a key in .env |

Known keys: BOT_CLIENT_PROVIDER, BOT_CLIENT_OPENAI_KEY, BOT_CLIENT_ANTHROPIC_KEY, BOT_CLIENT_GEMINI_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY.
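A plausible resolution order, preferring the BOT_CLIENT_* names over the legacy ones, could look like the sketch below; the exact precedence is an assumption based on the names listed above, not confirmed package behavior:

```typescript
// Map each cloud provider to [preferred name, legacy name].
const KEY_NAMES: Record<string, [string, string]> = {
  openai: ["BOT_CLIENT_OPENAI_KEY", "OPENAI_API_KEY"],
  anthropic: ["BOT_CLIENT_ANTHROPIC_KEY", "ANTHROPIC_API_KEY"],
  gemini: ["BOT_CLIENT_GEMINI_KEY", "GEMINI_API_KEY"],
};

// Return the first key that is set; local providers need no key at all.
function resolveApiKey(
  provider: string,
  env: Record<string, string | undefined>
): string | undefined {
  const names = KEY_NAMES[provider];
  if (!names) return undefined; // ollama / lmstudio: no key required
  const [preferred, legacy] = names;
  return env[preferred] ?? env[legacy];
}
```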


Environment

# Provider (optional): ollama | openai | anthropic | gemini | lmstudio
export BOT_CLIENT_PROVIDER=ollama

# Keys (recommended names)
export BOT_CLIENT_OPENAI_KEY="sk-..."
export BOT_CLIENT_ANTHROPIC_KEY="sk-ant-..."
export BOT_CLIENT_GEMINI_KEY="..."

# Legacy names (still supported)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."

Local providers (Ollama, LM Studio) need no keys; ensure the app is running on its default port.


API (library)

  • generate(prompt, options?) → Promise&lt;string&gt;
  • process(request) → Promise&lt;AIResponse&gt;
  • getAvailableProviders() → string[]
  • getProvider(id) → AIProvider | null
  • getAllProviders() → AIProvider[]
  • getAllSupportedModels() → string[] (all models across providers)
  • getProviderForModel(modelId) → AIProvider | null
  • testProviders() → Promise&lt;Record&lt;string, boolean&gt;&gt; (connection status per provider)
  • ready() → Promise&lt;void&gt; (resolves when init is complete)

Create a factory with default provider, fallback, order, logger, or custom providers:

import { AIFactory } from '@tanvoid0/bot-client';

const factory = new AIFactory({
  defaultProvider: 'ollama',
  fallbackProvider: 'openai',
  providerOrder: ['ollama', 'lmstudio', 'openai'],
  logger: { info: console.log, warn: console.warn, error: console.error },
  retries: 1
});
await factory.ready();
const text = await factory.generate('Hello');

Use only specific providers (e.g. custom or pre-configured):

import { AIFactory, OllamaProvider, OpenAIProvider } from '@tanvoid0/bot-client';

const factory = new AIFactory({
  providers: [
    new OllamaProvider({ baseURL: 'http://localhost:11434' }),
    new OpenAIProvider({ apiKey: process.env.MY_KEY })
  ],
  defaultProvider: 'ollama'
});

Use the Ollama provider for API-first operations (fallback to ollama CLI when server is down):

import { aiFactory, OllamaProvider } from '@tanvoid0/bot-client';

const ollama = aiFactory.getProvider('ollama') as OllamaProvider | null;
if (ollama) {
  const list = await ollama.list();   // list models
  await ollama.pull('llama3.1:8b');   // pull model
  const info = await ollama.show('llama3.1:8b');
  const out = await ollama.run('llama3.1:8b', 'Hello');
}

Or instantiate with custom base URL / CLI path:

const provider = new OllamaProvider({
  baseURL: 'http://localhost:11434',
  ollamaExecutablePath: 'ollama',
  preferCLI: false  // true = always use CLI
});
await provider.pull('gemma3');

The low-level CLI helpers are also exported:

import { runOllamaCLI, isOllamaCLIAvailable } from '@tanvoid0/bot-client';

const ok = await isOllamaCLIAvailable();
const result = await runOllamaCLI('pull', ['llama3.1:8b'], { onStderr: (c) => process.stderr.write(c) });
// result: { ok, code, stdout, stderr }

Types

  • AIRequest: prompt, modelId?, temperature?, maxTokens?, systemPrompt?, history?, metadata?, usageContext?
  • AIResponse: success, data?, error?, modelUsed?, providerId?, processingTime?, confidence?, tokensUsed?, cost?
  • AIFactoryConfig: defaultProvider?, fallbackProvider?, providerOrder?, logger?, providers?, retries?
  • Logger: optional debug, info, warn, error (all (message, ...args) => void)
  • AIError: message, provider, statusCode?, details?, code? (e.g. NO_API_KEY, RATE_LIMIT)
  • AIProvider: interface for custom providers; implement providerId, providerName, supportedModels, process, isModelSupported, testConnection, discoverModels
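A minimal custom provider might look like this sketch; the interface shapes below are inferred from the lists above and may not match the package's actual type definitions:

```typescript
// Assumed shapes, reconstructed from the README's type summary.
interface AIRequest { prompt: string; modelId?: string; maxTokens?: number; }
interface AIResponse {
  success: boolean;
  data?: string;
  error?: string;
  modelUsed?: string;
  providerId?: string;
}

// A toy provider that just echoes the prompt back.
class EchoProvider {
  providerId = "echo";
  providerName = "Echo (example)";
  supportedModels = ["echo-1"];

  isModelSupported(modelId: string): boolean {
    return this.supportedModels.includes(modelId);
  }

  async testConnection(): Promise<boolean> {
    return true; // nothing external to reach
  }

  async discoverModels(): Promise<string[]> {
    return this.supportedModels;
  }

  async process(request: AIRequest): Promise<AIResponse> {
    return {
      success: true,
      data: `echo: ${request.prompt}`,
      modelUsed: request.modelId ?? "echo-1",
      providerId: this.providerId,
    };
  }
}
```

An instance like this could then be passed in the providers array of the AIFactory config shown earlier.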

Providers

| Provider | Type | Notes |
|----------|------|-------|
| Ollama | Local | API + CLI; list/pull/rm/show/ps/run; tested |
| LM Studio | Local | localhost:1234; tested |
| OpenAI | Cloud | API key required |
| Anthropic | Cloud | API key required |
| Gemini | Cloud | API key required; tested |

The factory initializes all providers and keeps those that pass the connection test; use getProvider('ollama') (and so on) to select a specific one.
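The keep-what-passes initialization pattern described here can be sketched as follows (the function and interface names are illustrative, not the package's internals):

```typescript
// Minimal shape: anything with an id and a connection test.
interface Testable {
  providerId: string;
  testConnection(): Promise<boolean>;
}

// Test all providers concurrently; treat a rejected test as a failure
// rather than letting one bad provider abort initialization.
async function keepHealthy<T extends Testable>(providers: T[]): Promise<T[]> {
  const results = await Promise.all(
    providers.map(async (p) => ({
      p,
      ok: await p.testConnection().catch(() => false),
    }))
  );
  return results.filter((r) => r.ok).map((r) => r.p);
}
```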


Troubleshooting

  • Ollama: curl http://localhost:11434/api/tags or ollama list; start with ollama serve if needed.
  • LM Studio: Ensure a model is loaded and the server is on (e.g. localhost:1234).
  • Cloud: Ensure the right env key is set (BOT_CLIENT_OPENAI_KEY, etc.).

Failures during initialization are normal: the client keeps only the providers whose connection test succeeds. A missing API key or a stopped local server shows up as a failed test for that provider. To call a specific provider directly:

const provider = aiFactory.getProvider('ollama');
if (provider) {
  const res = await provider.process({ prompt: 'Hello', modelId: 'llama3.1:8b' });
}

Development

npm install && npm run build && npm test
npm run cli -- help

See examples/ for more usage.


License

MIT