
@dugleelabs/copair

v1.3.0 · 645 downloads

Model-agnostic AI coding agent for the terminal

A model-agnostic AI coding agent for the terminal. Works like Claude Code but supports any LLM provider — commercial APIs, open source models, or self-hosted instances.

npm install -g @dugleelabs/copair
copair

Providers

| Provider | Type |
|----------|------|
| Anthropic (Claude) | Native |
| OpenAI (GPT-4o, o1, etc.) | Native |
| Google Gemini (incl. 2.0/3.0 thought signatures) | Native |
| Ollama, vLLM, LM Studio, etc. | OpenAI-compatible |

Switch models mid-session with /model <name>. Context is summarized automatically before switching.

Quick Setup

Create ~/.copair/config.yaml:

version: 1
default_model: claude-sonnet

providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    models:
      claude-sonnet:
        id: claude-sonnet-4-20250514

  openai:
    api_key: ${OPENAI_API_KEY}
    models:
      gpt-4o:
        id: gpt-4o

  ollama:
    type: openai-compatible
    base_url: http://localhost:11434/v1
    models:
      llama3:
        id: llama3.1:8b
        supports_tool_calling: false
Then run:

copair                    # start with default model
copair --model gpt-4o     # start with a specific model
copair --resume           # resume the most recent session
copair --resume latest    # same as above
copair --resume auth-fix  # resume a session by identifier
copair --verbose          # show INFO/WARN logs
copair --debug            # show all logs, including DEBUG

→ Full configuration reference
→ Local models setup (Qwen 3.5, etc.)

Built-in Tools

The agent has direct access to your codebase:

| Tool | Description |
|------|-------------|
| Read | Read file contents with line offset/limit |
| Write | Write file contents, creating parent dirs |
| Edit | Exact string replacement (errors on non-unique match) |
| Grep | Regex search across files |
| Glob | File pattern matching |
| Bash | Execute shell commands with timeout |
| Git | git status, diff, log, commit |
| WebSearch | Search via Tavily, Serper, or SearXNG |
| UpdateKnowledge | Add entries to project knowledge base |

For models without native tool calling, Copair falls back to prompt-based tool extraction.
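The fallback could be implemented along these lines. This is a minimal sketch under an assumed convention (the model emits TOOL_CALL: lines carrying JSON); Copair's actual extraction format is not documented here.

```python
import json
import re

def extract_tool_calls(response: str):
    """Fallback parser for models without native tool calling.
    Hypothetical convention: the model emits lines of the form
    TOOL_CALL: {"tool": ..., "args": {...}}."""
    calls = []
    for line in response.splitlines():
        m = re.match(r"\s*TOOL_CALL:\s*(\{.*\})\s*$", line)
        if not m:
            continue
        try:
            data = json.loads(m.group(1))
        except json.JSONDecodeError:
            continue  # malformed JSON: skip rather than crash
        if "tool" in data:
            calls.append((data["tool"], data.get("args", {})))
    return calls
```

The key design point is graceful degradation: malformed or partial tool syntax is ignored and the text is treated as a normal reply.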

Permissions

permissions:
  mode: ask          # ask | auto-approve | deny
  allow_commands:    # bash commands that skip the prompt
    - git status
    - git diff
    - npm test

In ask mode you can approve once or always-allow for the session. Shell operators (;, &&, |, etc.) are never auto-approved even if the base command matches.
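The allow-list rule above can be sketched as follows; this is an illustration of the stated behaviour, not Copair's actual implementation, and the operator list here is an assumption.

```python
def is_auto_approved(command: str, allow_commands: list[str]) -> bool:
    """A command skips the prompt only if it matches an allowed prefix
    AND contains no shell operators, so a base-command match like
    "git status; rm -rf /" is never auto-approved."""
    # "&" also covers "&&", "|" also covers "||".
    if any(op in command for op in (";", "|", "&", ">", "<", "`", "$(")):
        return False
    cmd = command.strip()
    return any(cmd == allowed or cmd.startswith(allowed + " ")
               for allowed in allow_commands)
```

Checking operators before the allow list means chained commands are rejected even when they begin with an allowed prefix.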

Permission docs

Token Tracking

After each response:

[tokens: 1,234 in / 567 out | session: 5,678 in / 2,345 out | ~$0.12]

On exit, a per-model cost breakdown is shown. Pricing is built in for OpenAI, Anthropic, and Google models, and token counts fall back to tiktoken estimation when the API doesn't report usage.
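The cost roll-up itself is simple per-token arithmetic. A sketch, where the price table maps a model to (input, output) dollars per million tokens; the figures in the example are illustrative, not Copair's actual pricing:

```python
def session_cost(usage: dict, prices: dict) -> float:
    """Sum per-model cost for a session.
    usage:  model -> (tokens_in, tokens_out)
    prices: model -> ($ per 1M input tokens, $ per 1M output tokens)"""
    total = 0.0
    for model, (tokens_in, tokens_out) in usage.items():
        p_in, p_out = prices[model]
        total += tokens_in / 1_000_000 * p_in + tokens_out / 1_000_000 * p_out
    return round(total, 4)
```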

Slash Commands

| Command | Description |
|---------|-------------|
| /help | List all available commands |
| /model <name> | Switch model mid-session |
| /clear | Clear conversation history |
| /cost | Show session token usage and cost |
| /workflow <name> | Run a workflow |
| /commands | List custom commands |
| /session list | List all sessions for current project |
| /session resume <id> | Resume a previous session |
| /session rename <name> | Rename current session |
| /session delete <id> | Delete a stored session |
| /session save | Force save current session |
| /session info | Show current session metadata |

Custom commands are markdown files with frontmatter — drop them in ~/.copair/commands/ or .copair/commands/. Commands support nesting, positional arguments, and $VAR / {{var}} interpolation. They return their expanded markdown directly to the agent. → Custom commands
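The $VAR / {{var}} interpolation could look roughly like this. The semantics here (unknown placeholders left intact) are an assumption for the sketch; the real Copair rules may differ.

```python
import re

def interpolate(template: str, args: dict) -> str:
    """Expand $VAR and {{var}} placeholders in a custom-command body.
    Placeholders with no matching argument are left untouched."""
    def repl(m):
        name = m.group(1) or m.group(2)  # whichever alternative matched
        return str(args.get(name, m.group(0)))
    return re.sub(r"\$(\w+)|\{\{(\w+)\}\}", repl, template)
```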

Sessions

Sessions persist across exits. On startup, Copair checks for previous sessions in .copair/sessions/ and offers to resume or start fresh. Sessions are auto-named from your git branch, first message, and files touched.

Previous sessions:
  1. auth-middleware-refactor-a3f2  (2h ago, 42 msgs, claude-sonnet)
  2. fix-login-bug-b7c1            (1d ago, 18 msgs, gemini-pro)
  3. Start fresh

Select [1-3]:

On exit, sessions are summarized using a local model (via Ollama if available) or the active model. Resumed sessions inject the summary as context instead of replaying full message history.

Session files are stored in .copair/sessions/ (gitignored automatically). Each session has a UUID directory containing session.json, messages.jsonl, and optionally summary.md.

# Optional session config in ~/.copair/config.yaml
context:
  summarization_model: qwen-7b   # model alias for summaries
  max_sessions: 20               # max sessions per project
  knowledge_max_size: 8192       # max bytes for knowledge base
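Given the on-disk layout above, listing stored sessions is a small directory walk. A sketch only: the session.json field names used here (name, updated_at) are assumptions, not Copair's documented schema.

```python
import json
from pathlib import Path

def list_sessions(sessions_dir: Path) -> list[dict]:
    """Collect session metadata from <sessions_dir>/<uuid>/session.json,
    newest first, like the resume picker."""
    sessions = []
    for meta_file in sessions_dir.glob("*/session.json"):
        sessions.append(json.loads(meta_file.read_text()))
    sessions.sort(key=lambda m: m.get("updated_at", ""), reverse=True)
    return sessions
```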

Knowledge Base

Copair maintains a project-level knowledge base at COPAIR_KNOWLEDGE.md in your project root. The agent adds entries when it learns project-specific facts (conventions, patterns, architecture decisions). This file is committed to git and shared with your team.

The knowledge base is automatically included in the system prompt for all sessions in that project. Entries are timestamped and pruned when the file exceeds the configured max size.
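The pruning behaviour can be sketched as dropping the oldest entries until the file fits the configured budget. This assumes entries are ordered oldest to newest; it illustrates the described behaviour, not the actual implementation.

```python
def prune_knowledge(entries: list[str], max_size: int) -> list[str]:
    """Drop oldest entries until the total UTF-8 size of the knowledge
    base is at most max_size bytes (cf. knowledge_max_size)."""
    pruned = list(entries)
    while pruned and sum(len(e.encode("utf-8")) for e in pruned) > max_size:
        pruned.pop(0)  # oldest entry goes first
    return pruned
```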

Workflows

Multi-step YAML workflows that combine agent prompts, shell commands, and branching logic.

/workflow test-fix
/workflow test-fix test_command=pytest

Workflows support: prompt, shell, command, condition, and output step types. Ctrl+C cancels at any step. → Workflow docs

MCP Servers

Extend the agent with any Model Context Protocol server. MCP tools are discovered at startup and namespaced as server-name:tool-name.

mcp_servers:
  - name: filesystem
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path"]

MCP docs

Web Search

Supports Tavily, Serper, and self-hosted SearXNG. Anthropic models use native built-in search automatically.

web_search:
  provider: tavily
  api_key: ${TAVILY_API_KEY}

Web search docs

License

MIT