
llm-party-cli

v0.7.0


Bring your models. We'll bring the party.


A peer orchestrator that puts Claude, Codex, Copilot, and GLM in the same terminal. You talk, they listen. They talk to each other. Nobody is the boss except you.

YOU > @claude review this function
[CLAUDE] The error handling on line 42 swallows exceptions silently...

YOU > @codex fix what claude found
[CODEX] Fixed. Wrapped in try/catch with proper logging. See diff below.

YOU > @copilot write tests for the fix
[COPILOT] Added 3 test cases covering the happy path and both error branches.

No MCP. No master/servant. No window juggling. Just peers at a terminal table.

Why llm-party?

| | Traditional multi-agent | llm-party |
| ------------ | ------------------------------ | -------------------------------------- |
| Architecture | MCP (master controls servants) | Peer orchestration (you control all) |
| Integration | CLI wrapping, output scraping | Direct SDK adapters |
| Sessions | Fresh each time | Persistent per provider |
| Context | Agents are siloed | Every agent sees the full conversation |
| API tokens | Separate keys per tool | Uses your existing CLI auth |

Getting started

Prerequisites

Bun runtime is required. At least one AI CLI must be installed and authenticated:

bun --version            # Bun runtime
claude --version         # Claude Code CLI
codex --version          # OpenAI Codex CLI
copilot --version        # GitHub Copilot CLI

If a CLI doesn't work on its own, it won't work inside llm-party.

Install

bun add -g llm-party-cli

First run

llm-party

On first run, llm-party automatically creates ~/.llm-party/ with a default config and global memory structure. Your system username is detected automatically. No setup commands needed.

Add more agents

Edit ~/.llm-party/config.json:

{
  "agents": [
    {
      "name": "Claude",
      "tag": "claude",
      "provider": "claude",
      "model": "opus"
    },
    {
      "name": "Codex",
      "tag": "codex",
      "provider": "codex",
      "model": "gpt-5.2"
    },
    {
      "name": "Copilot",
      "tag": "copilot",
      "provider": "copilot",
      "model": "gpt-4.1"
    }
  ]
}

That's it. No paths, no prompts, no usernames to configure. Just name, tag, provider, model.

Talk to your agents

@claude explain this error          # talk to one agent
@claude @codex review this          # talk to multiple
@all what does everyone think?      # broadcast to all agents
@everyone same as @all              # alias

Note: Once you tag an agent, all follow-up messages without a tag go to that same agent. Use @all or @everyone to broadcast again.
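The routing rules above can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation; the names `routeMessage`, `knownTags`, and `lastTag` are assumptions:

```typescript
// Sketch of tag routing: explicit @tags win, @all/@everyone broadcast,
// and an untagged message falls through to the last tagged agent.
function routeMessage(
  input: string,
  knownTags: string[],
  lastTag: string | null,
): string[] {
  const tags = [...input.matchAll(/@([A-Za-z0-9_-]+)/g)].map((m) => m[1]);
  if (tags.includes("all") || tags.includes("everyone")) return knownTags;
  const explicit = tags.filter((t) => knownTags.includes(t));
  if (explicit.length > 0) return explicit;
  return lastTag ? [lastTag] : []; // sticky routing to the last tagged agent
}
```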

Agent-to-agent handoff

Agents can pass the conversation to each other by ending their response with @next:<tag>. The orchestrator picks it up and dispatches automatically. Max 15 hops per cycle to prevent loops.
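A minimal sketch of that handoff loop, under assumed names (`parseHandoff`, `runHandoffCycle` are hypothetical, not the real internals):

```typescript
// Hypothetical sketch of the @next:<tag> handoff cycle described above.
const MAX_AUTO_HOPS = 15;

// Extract a handoff target from the end of an agent's response, if any.
function parseHandoff(response: string): string | null {
  const match = response.trimEnd().match(/@next:([A-Za-z0-9_-]+)$/);
  return match?.[1] ?? null;
}

// Follow handoffs until no target remains or the hop cap is hit.
async function runHandoffCycle(
  firstTag: string,
  send: (tag: string) => Promise<string>,
): Promise<void> {
  let tag: string | null = firstTag;
  for (let hops = 0; tag !== null && hops < MAX_AUTO_HOPS; hops++) {
    const response = await send(tag);
    tag = parseHandoff(response);
  }
}
```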

WARNING: FULL AUTONOMY.

All agents run with full permissions. They can read, write, and edit files and execute shell commands with zero approval gates. There is no confirmation step before any action. Run in a disposable environment. You are responsible for any changes, data loss, costs, or side effects. Do not run against production systems.

Important notes

Uses your existing CLIs. llm-party uses official SDKs that delegate to each provider's CLI binary. If claude, codex, or copilot commands work on your machine, llm-party works. Authentication is handled entirely by the provider's own tools.

Run in isolation. Always run llm-party inside a disposable environment: a Docker container, a VM, or at minimum a throwaway git branch. Agents have full filesystem and shell access with zero approval gates.

Full permissions. All agents can read, write, and edit files and execute shell commands with no confirmation step before any action. You are responsible for any changes, data loss, costs, or side effects.

How we use the SDKs

llm-party uses official, publicly available SDKs and CLIs published by each provider. Nothing is reverse-engineered, patched, or bypassed.

| Provider | Official SDK | Published by |
| -------- | ------------------------------ | ------------ |
| Claude | @anthropic-ai/claude-agent-sdk | Anthropic |
| Codex | @openai/codex-sdk | OpenAI |
| Copilot | @github/copilot-sdk | GitHub |

All authentication flows through the provider's own CLI. llm-party does not implement its own auth flow, store credentials, or intercept authentication traffic.

Supported providers

| Provider | SDK | Session | Prompt Support |
| -------- | ------------------------------ | ---------------------------------- | ---------------------------------------------- |
| Claude | @anthropic-ai/claude-agent-sdk | Persistent via session ID resume | Full control |
| Codex | @openai/codex-sdk | Persistent thread with run() turns | Via developer_instructions (limitations below) |
| Copilot | @github/copilot-sdk | Persistent via sendAndWait() | Full control |
| GLM | Claude SDK + env proxy | Same as Claude | Full control |


How it works

llm-party uses SDK adapters directly. Each agent gets a persistent session with its provider, full tool access, and real conversation threading. The orchestrator owns routing; agents are peers.

Terminal (you)
    |
    v
Orchestrator
    |
    +-- Agent Registry
    |     +-- Claude  -> ClaudeAdapter  (SDK session, resume by ID)
    |     +-- Codex   -> CodexAdapter   (SDK thread, persistent turns)
    |     +-- Copilot -> CopilotAdapter (SDK session, sendAndWait)
    |     +-- GLM     -> GlmAdapter     (Claude SDK + env proxy)
    |
    +-- Conversation Log (ordered, all messages, agent-prefixed)
    |
    +-- Transcript Writer (JSONL, append-only, per session)

Each agent receives a rolling window of recent messages (configurable, default 16) plus any unseen messages since its last turn. Messages from other agents are included so everyone sees the full multi-party conversation.
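That windowing step might look roughly like this sketch; `Message`, `buildContext`, and `lastSeenIndex` are assumed names for illustration, not the actual implementation:

```typescript
// Sketch of per-agent context assembly: the last `windowSize` messages,
// plus any older messages the agent has not yet seen.
interface Message {
  index: number;
  author: string;
  text: string;
}

function buildContext(
  log: Message[],
  lastSeenIndex: number,
  windowSize = 16,
): Message[] {
  const recent = log.slice(-windowSize);
  const firstRecent = recent[0]?.index ?? Infinity;
  // Unseen messages that fell out of the rolling window are prepended
  // so the agent never misses a turn from another participant.
  const unseen = log.filter(
    (m) => m.index > lastSeenIndex && m.index < firstRecent,
  );
  return [...unseen, ...recent];
}
```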

~/.llm-party/config.json is your global config. Every agent receives a base system prompt automatically. The prompts field in config adds extra prompt files on top of it.

Provider details

Claude

| | |
| ------- | --- |
| SDK | @anthropic-ai/claude-agent-sdk |
| Session | Persistent via resume: sessionId. First call creates a session, subsequent calls resume it. |
| Prompt | Passed directly to the SDK. Full control over personality, behavior, and workflow rules. |
| Tools | Read, Write, Edit, Bash, Glob, Grep |

Codex

| | |
| ------- | --- |
| SDK | @openai/codex-sdk |
| Session | Persistent thread: startThread() creates it, thread.run() adds turns to the same conversation. |
| Prompt | Injected via the developer_instructions config key. Appended alongside Codex's built-in 13k token system prompt. |
| Tools | exec_command, apply_patch, file operations |

Known limitation: Codex's built-in system prompt cannot be overridden. Your instructions are appended alongside it. Action instructions (naming, formatting, workflow rules) work. Personality overrides do not.

Copilot

| | |
| ------- | --- |
| SDK | @github/copilot-sdk |
| Session | Persistent via CopilotClient.createSession(). |
| Prompt | Set as systemMessage on session creation. Full control. |
| Tools | Copilot built-in toolset |

GLM

| | |
| ------- | --- |
| SDK | @anthropic-ai/claude-agent-sdk (same as Claude) |
| Session | Same as Claude, routed through a proxy via env overrides. |
| Prompt | Same as Claude. Full control. |
| Tools | Same as Claude |

GLM uses the Claude SDK as a transport layer. The adapter routes API calls through a proxy by setting ANTHROPIC_BASE_URL and model aliases via the env config field.

Config reference

Config file: ~/.llm-party/config.json (created automatically on first run).

Override with LLM_PARTY_CONFIG env var to point to a different file.
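The resolution order is env-var override first, then the default path. A minimal sketch (resolveConfigPath is a hypothetical name):

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Sketch of config-path resolution: LLM_PARTY_CONFIG wins if set,
// otherwise fall back to ~/.llm-party/config.json.
function resolveConfigPath(
  env: Record<string, string | undefined> = process.env,
): string {
  return env.LLM_PARTY_CONFIG ?? join(homedir(), ".llm-party", "config.json");
}
```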

Top-level fields

| Field | Required | Default | Description |
| ----------- | -------- | ---------------------- | ------------------------------------------------------------------------ |
| humanName | No | Your system username | Display name in the terminal prompt and passed to agents |
| humanTag | No | Derived from humanName | Tag for human handoff detection (@next:you) |
| maxAutoHops | No | 15 | Max agent-to-agent handoffs per cycle. Use "unlimited" to remove the cap |
| timeout | No | 600 | Default timeout in seconds for all agents |
| agents | Yes | | Array of agent definitions |

Agent fields

| Field | Required | Default | Description |
| -------------- | -------- | -------------------- | --------------------------------------------------------------------------------------------------- |
| name | Yes | | Display name shown in responses as [AGENT NAME]. Must be unique. |
| tag | Yes | | Routing tag for @tag targeting. Letters, numbers, hyphens, underscores only. No spaces. |
| provider | Yes | | SDK adapter: claude, codex, copilot, or glm |
| model | Yes | | Model ID passed to the provider. Examples: opus, sonnet, gpt-5.2, gpt-4.1, glm-5 |
| prompts | No | none | Array of extra prompt file paths, concatenated after base.md. Relative to project root |
| executablePath | No | PATH lookup | Path to the CLI binary. Supports ~/. Only needed if the CLI is not in your PATH |
| env | No | inherits process.env | Environment variable overrides for this agent |
| timeout | No | top-level value | Per-agent timeout override in seconds |
| preloadSkills | No | none | Array of skill names to load at boot. Skills are discovered from ~/.llm-party/skills/, .llm-party/skills/, .claude/skills/, .agents/skills/ |

Prompts

Every agent receives a base system prompt automatically. To add extra instructions per agent, use the prompts field:

{
  "name": "Reviewer",
  "tag": "reviewer",
  "provider": "claude",
  "model": "opus",
  "prompts": ["./prompts/code-review.md"]
}

Template variables available in prompt files:

| Variable | Description |
| ----------------------- | ----------------------------------------------- |
| {{agentName}} | This agent's display name |
| {{agentTag}} | This agent's routing tag |
| {{humanName}} | Your display name |
| {{humanTag}} | Your routing tag |
| {{agentCount}} | Total number of active agents |
| {{allAgentNames}} | All agent names |
| {{allAgentTags}} | All agent tags as @tag |
| {{otherAgentList}} | Other agents with their tags |
| {{validHandoffTargets}} | Valid @next:tag targets |
| {{preloadedSkills}} | Skills assigned to this agent via preloadSkills |
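Substituting these placeholders amounts to a simple regex replace. A sketch under assumed names (renderPrompt is hypothetical; unknown variables are left untouched here, which may differ from the real behavior):

```typescript
// Sketch of {{variable}} substitution in prompt files.
function renderPrompt(
  template: string,
  vars: Record<string, string>,
): string {
  // Replace each {{name}} with its value; leave unknown names as-is.
  return template.replace(/\{\{(\w+)\}\}/g, (whole, name: string) =>
    name in vars ? vars[name] : whole,
  );
}
```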

GLM environment setup

GLM requires environment overrides to route through a proxy. The adapter tries to load env variables from your shell glm alias automatically. Without the alias, provide everything in the env block:

{
  "name": "GLM",
  "provider": "glm",
  "model": "glm-5",
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "your-glm-api-key",
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "glm-4.5-air",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-4.5",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-5"
  }
}

Skills

Skills are folders containing a SKILL.md file with specialized instructions for specific workflows. Agents discover skills from these locations (in order):

  1. ~/.llm-party/skills/ (global, shared across all projects)
  2. .llm-party/skills/ (project-local)
  3. .claude/skills/ (if present)
  4. .agents/skills/ (if present)

Assign skills to agents with preloadSkills in config:

{
  "name": "Reviewer",
  "provider": "claude",
  "model": "opus",
  "preloadSkills": ["aala-review"]
}

At boot, the orchestrator verifies each skill exists and reports status per agent. Skills assigned to an agent are also shown in the team directory visible to all peers.

Mind-map

Shared memory between all agents, stored as Obsidian-compatible notes in ~/.llm-party/network/mind-map/. Each entry is a .md file with frontmatter and [[wikilinks]] connecting related findings. INDEX.md is the entry point that agents read on boot.

Agents write to the mind-map as they work: constraints, discoveries, session progress, cross-project breadcrumbs, user preferences. Anything a cold-boot agent would need to know that is not in the code. Open the folder in Obsidian to visualize the knowledge graph.

Session and transcript

Every run generates a unique session ID and appends messages to a JSONL transcript in .llm-party/sessions/ (project-level). The session ID and transcript path are printed at startup.

File changes made by agents are detected via git status after each response. Newly modified files are printed with timestamps.
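Parsing the output of git status for that detection step might look like this sketch (parseGitStatus is an assumed name; the real implementation may differ):

```typescript
// Sketch of change detection: parse `git status --porcelain` output,
// where each line is two status characters, a space, then the path.
function parseGitStatus(porcelain: string): string[] {
  return porcelain
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => line.slice(3)); // strip "XY " status prefix
}
```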

Terminal commands

| Command | What it does |
| ------------ | ------------------------------------------------- |
| /agents | Open agents panel overlay (Ctrl+P also works) |
| /config | Open config wizard |
| /info | Commands and keyboard shortcuts panel |
| /save <path> | Export conversation as JSON |
| /session | Show session ID and transcript path |
| /changes | Show git-modified files |
| /clear | Clear chat display (Ctrl+L also works) |
| /exit | Quit (graceful shutdown, all adapters cleaned up) |
| Ctrl+P | Toggle agents panel |
| Ctrl+L | Clear chat |
| Ctrl+C | Copy selection or exit |
| Ctrl+A / E | Jump to start / end of input line |
| Ctrl+U | Clear entire input line |
| Ctrl+W | Delete word backward |
| Shift+Enter | Insert new line in input |
| Up / Down | Input history |
| PageUp/Down | Scroll chat |

Development

git clone https://github.com/aalasolutions/llm-party.git
cd llm-party
bun install
bun run dev

Build and run:

bun run build
bun start

Override config:

LLM_PARTY_CONFIG=/path/to/config.json bun run dev

Troubleshooting

"No agent matched @tag" Run /agents to see available tags. Tags match against agent tag, name, and provider.

"Unsupported provider" Valid providers: claude, codex, copilot, glm.

"Duplicate agent name" Agent names must be unique (case-insensitive). Rename one of the duplicates in config.

Agent modifies source code unexpectedly Expected with full permissions. Use git to review and revert.

Codex ignores personality instructions Known limitation. Codex's 13k token built-in prompt overrides personality. Functional instructions still work.

"ERR_UNKNOWN_BUILTIN_MODULE: node:sqlite" Your Node.js version is below 22. The Copilot SDK requires Node.js 22+.

Agent response timeout Default is 600 seconds (10 minutes). Adjust with timeout in config (top-level or per-agent).