

@openvole/paw-brain

Unified Brain Paw for OpenVole — a single paw that supports multiple LLM providers.

Supported Providers

| Provider | BRAIN_PROVIDER | API Key Env | Model Env | Default Model |
|----------|----------------|-------------|-----------|---------------|
| Anthropic | anthropic | ANTHROPIC_API_KEY | ANTHROPIC_MODEL | claude-sonnet-4-20250514 |
| OpenAI | openai | OPENAI_API_KEY | OPENAI_MODEL | gpt-4o |
| Google Gemini | gemini | GEMINI_API_KEY | GEMINI_MODEL | gemini-2.5-flash |
| xAI | xai | XAI_API_KEY | XAI_MODEL | grok-3 |
| Ollama | ollama | — | OLLAMA_MODEL | qwen3:latest |

Configuration

Option 1: Generic env vars

BRAIN_PROVIDER=gemini
BRAIN_API_KEY=your-api-key
BRAIN_MODEL=gemini-2.5-flash

Option 2: Provider-specific env vars

GEMINI_API_KEY=your-api-key
GEMINI_MODEL=gemini-2.5-flash

Provider-specific vars take precedence over generic BRAIN_* vars.
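
A minimal sketch of that precedence for a single provider, assuming the lookup is simply "provider-specific first, then generic"; the helper name resolveEnv is illustrative and not part of the package's API:

// Illustrative only: provider-specific env vars win over generic BRAIN_* vars.
function resolveEnv(provider: string): { apiKey?: string; model?: string } {
  const prefix = provider.toUpperCase(); // "gemini" -> "GEMINI"
  return {
    apiKey: process.env[`${prefix}_API_KEY`] ?? process.env.BRAIN_API_KEY,
    model: process.env[`${prefix}_MODEL`] ?? process.env.BRAIN_MODEL,
  };
}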

Option 3: Auto-detect

If BRAIN_PROVIDER is not set, paw-brain auto-detects the provider from available API keys in this order: Anthropic, OpenAI, Gemini, xAI, Ollama.
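
A rough sketch of that detection order; the exact check paw-brain performs is not documented here, so treat the Ollama host check in particular as an assumption:

// Illustrative only: pick the first provider whose key (or host) is present.
const detectionOrder: Array<[string, string | undefined]> = [
  ["anthropic", process.env.ANTHROPIC_API_KEY],
  ["openai", process.env.OPENAI_API_KEY],
  ["gemini", process.env.GEMINI_API_KEY],
  ["xai", process.env.XAI_API_KEY],
  ["ollama", process.env.OLLAMA_HOST], // Ollama needs no API key; host presence is assumed here
];

const provider =
  process.env.BRAIN_PROVIDER ??
  detectionOrder.find(([, key]) => Boolean(key))?.[0];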

vole.config.json

{
  "brain": "@openvole/paw-brain",
  "paws": [
    {
      "name": "@openvole/paw-brain",
      "allow": {
        "network": ["*"],
        "env": ["BRAIN_PROVIDER", "BRAIN_API_KEY", "BRAIN_MODEL", "BRAIN_BASE_URL",
                "ANTHROPIC_API_KEY", "ANTHROPIC_MODEL",
                "OPENAI_API_KEY", "OPENAI_MODEL",
                "GEMINI_API_KEY", "GEMINI_MODEL",
                "XAI_API_KEY", "XAI_MODEL",
                "OLLAMA_HOST", "OLLAMA_MODEL"]
      }
    }
  ]
}

Switching providers

Just change BRAIN_PROVIDER and the corresponding API key — no config file changes needed:

# Switch from Gemini to Claude
BRAIN_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

Fallback provider

If the primary provider errors (rate limit, timeout, outage), paw-brain can automatically retry with a fallback:

BRAIN_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

BRAIN_FALLBACK=openai
OPENAI_API_KEY=sk-...
BRAIN_FALLBACK_MODEL=gpt-4o          # optional

The fallback is only used when the primary throws an error — not for empty responses or tool narration.
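
A simplified sketch of that behaviour: the fallback is tried only when the primary call throws. The complete() function and its signature are placeholders for illustration, not the package's real interface.

type Provider = "anthropic" | "openai" | "gemini" | "xai" | "ollama";

// Placeholder for the actual provider call inside paw-brain.
declare function complete(provider: Provider, prompt: string): Promise<string>;

const primary = (process.env.BRAIN_PROVIDER ?? "anthropic") as Provider;
const fallback = process.env.BRAIN_FALLBACK as Provider | undefined;

async function callWithFallback(prompt: string): Promise<string> {
  try {
    return await complete(primary, prompt);
  } catch (err) {
    // Only a thrown error (rate limit, timeout, outage) reaches this branch;
    // an empty response from the primary is returned as-is, not retried.
    if (!fallback) throw err;
    return await complete(fallback, prompt);
  }
}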

Cost tracking

paw-brain reports token usage (input/output tokens, model, provider) back to core via AgentPlan.usage. Core uses this for per-task cost estimation.

  • Cloud providers are priced from a built-in pricing table
  • Local Ollama models (no :cloud suffix) show as free in auto mode
  • Ollama cloud models (e.g. kimi-k2.5:cloud) are priced
  • Set costTracking: "enabled" in loop config to track all providers
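
To make the reporting concrete, here is one plausible shape for such a usage record plus a per-task cost estimate; AgentPlan.usage is the only name taken from this README, and every field and function below is an illustrative assumption:

// Hypothetical usage record shape; field names are assumptions.
interface BrainUsage {
  provider: string;     // e.g. "gemini"
  model: string;        // e.g. "gemini-2.5-flash"
  inputTokens: number;
  outputTokens: number;
}

// Hypothetical per-task estimate from a pricing table (USD per million tokens).
function estimateCostUsd(u: BrainUsage, inPerMTok: number, outPerMTok: number): number {
  return (u.inputTokens * inPerMTok + u.outputTokens * outPerMTok) / 1_000_000;
}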