
@dropfly/x2000-brain-fleet

v2.1.0

Published

DropFly Inc. 50-Brain Autonomous AI Fleet

Downloads

200

Readme

X2000

Autonomous AI Fleet Platform -- 50 Specialized Brains -- 11 LLM Providers -- Forever Learning


What is X2000

X2000 is an autonomous AI fleet platform that orchestrates 50 specialized brains to build and operate businesses. A CEO Brain decomposes every task, delegates to domain-expert brains (engineering, marketing, finance, legal, and 45 more), and synthesizes the results -- all governed by 5-layer guardrails and an earned-autonomy trust system. Brains debate through a structured Tension Protocol, learn from every outcome via forever-learning memory, and communicate across 13 channels. X2000 works while you sleep.


Prerequisites

| Requirement | Details |
|-------------|---------|
| Node.js | v18 or later (`node -v`) |
| Redis | Required for the worker system (BullMQ queues, supervisor heartbeats) |
| Git | For version control and CLI operations |
| LLM Provider | At least one: Claude Max subscription, any provider API key, or Ollama (free, local) |


Quick Start

# Install globally
npm install -g x2000

# Run the setup wizard (5 minutes)
x2000 setup

# Start the gateway
x2000 start

No source code is exposed -- the npm package ships compiled JavaScript only. Brain doctrine files (CLAUDE.md) are included for the AgentLoop to load at runtime.


Setup Wizard

Running x2000 setup configures X2000 step by step:

| Step | What It Configures |
|------|--------------------|
| Dependencies | Checks that Node.js 18+ and Git are installed |
| Provider | Choose your LLM provider -- Claude, OpenAI, Ollama, or 8 others |
| Supabase | Optional persistent memory backend |
| Channels | Enable channels -- Telegram, WhatsApp, iMessage, Discord, Email |
| Autonomy | Set the trust level (1-4) -- how much freedom X2000 has |
| Agent Identity | Name your AI agent and set its personality |
| Owner & Team | Your name, email, company, and team members |
| Newsletter | Automated newsletter generation and delivery |
| Daemon | Install background service + health check |

All configuration is stored in ~/.x2000/.
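
As an illustration of how code might consume this layout, here is a small TypeScript sketch that loads `config.json` from the config directory. The field names (`agentName`, `autonomyLevel`, `owner`) are assumptions for the example, not the package's actual schema:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Illustrative shape only -- the real config.json schema may differ.
interface X2000Config {
  agentName?: string;
  autonomyLevel?: number; // trust level 1-4
  owner?: { name?: string; email?: string };
}

// Safe defaults used when no config file exists yet.
const DEFAULTS: X2000Config = { autonomyLevel: 1 };

// Load <dir>/config.json (default ~/.x2000), merging over the defaults.
function loadConfig(dir = path.join(os.homedir(), ".x2000")): X2000Config {
  const file = path.join(dir, "config.json");
  if (!fs.existsSync(file)) return { ...DEFAULTS };
  const parsed = JSON.parse(fs.readFileSync(file, "utf8")) as X2000Config;
  return { ...DEFAULTS, ...parsed };
}
```

Reading through a helper like this (rather than scattering `JSON.parse` calls) keeps the trust-level default in one place.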


Commands

# Setup wizard (first time or reconfigure)
x2000 setup

# Start the gateway (foreground)
x2000 gateway

# Background daemon
x2000 daemon install      # Install and auto-start on login
x2000 daemon start        # Start the daemon
x2000 daemon stop         # Stop the daemon
x2000 daemon logs         # View daemon logs
x2000 daemon uninstall    # Remove the daemon

# Send a task
x2000 msg "Deploy the marketing site"

# Check system status
x2000 status

# Manage communication channels
x2000 channels            # List configured channels
x2000 channels add telegram
x2000 channels add email

# Login with Claude subscription
x2000 login

# Show help
x2000 help

Architecture

 User
  │
  ▼
┌─────────────────────────────────────────────────────────────────────┐
│                        13 CHANNELS                                  │
│  Telegram │ Slack │ Discord │ Email │ WhatsApp │ SMS │ iMessage │…  │
└─────────────────────────┬───────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────────┐
│                     GATEWAY v2 (Hono)                               │
│              REST API │ WebSocket │ Health │ Status                  │
└─────────────────────────┬───────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────────┐
│                        CEO BRAIN                                    │
│       Orchestration │ Task Decomposition │ Conflict Resolution      │
└───────┬─────────────────┼─────────────────────┬─────────────────────┘
        │                 │                     │
        ▼                 ▼                     ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│ ENGINEERING  │  │   PRODUCT    │  │   FINANCE    │  ... + 47 more
│    BRAIN     │◄►│    BRAIN     │◄►│    BRAIN     │
└──────┬───────┘  └──────┬───────┘  └──────┬───────┘
       │                 │                 │
       ▼                 ▼                 ▼
┌─────────────────────────────────────────────────────────────────────┐
│                    FOREVER MEMORY                                    │
│        Patterns │ Learnings │ Skills │ Decisions │ Anti-Patterns     │
├─────────────────────────────────────────────────────────────────────┤
│                    30 TOOLS                                          │
│    File I/O │ Shell │ Browser │ Deploy │ Git │ Email │ Voice │ …    │
├─────────────────────────────────────────────────────────────────────┤
│                 5-LAYER GUARDRAILS                                   │
│    Input │ Action │ Runtime │ Visibility │ Escalation                │
└─────────────────────────────────────────────────────────────────────┘
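
The decompose → delegate → synthesize flow in the diagram can be sketched in a few lines of TypeScript. This is a toy keyword router, not X2000's actual CEO Brain (which the README says uses LLM-driven decomposition); the brain names and routing rules here are illustrative only:

```typescript
// Hypothetical sketch of the CEO Brain's decompose -> delegate -> synthesize loop.
type BrainName = "engineering" | "product" | "finance";

interface Subtask {
  brain: BrainName;
  description: string;
}

// Trivial keyword-based decomposition standing in for LLM task analysis.
function decompose(task: string): Subtask[] {
  const routes: [RegExp, BrainName][] = [
    [/deploy|build|code/i, "engineering"],
    [/roadmap|feature/i, "product"],
    [/budget|invoice/i, "finance"],
  ];
  return routes
    .filter(([re]) => re.test(task))
    .map(([, brain]) => ({ brain, description: task }));
}

// Combine the delegated brains' outputs into one answer for the user.
function synthesize(results: string[]): string {
  return results.join("; ");
}
```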

Key protocols:

  • Brain Tension Protocol -- Brains PROPOSE, CHALLENGE, and RESOLVE through structured debate. CEO Brain breaks ties when consensus falls below 70%.
  • Forever-Learning Memory -- Every task outcome is logged as patterns, learnings, or anti-patterns. Before every new task, memory is queried for relevant history.
  • 5-Layer Guardrails -- Input validation, action control, runtime governance, reasoning visibility, and bounded escalation.
  • Earned Autonomy -- Brains start at trust level 1 (read-only) and earn higher levels through proven performance, up to level 4 (full autonomy).
  • Fortress Security -- Zero-trust risk engine, AES-256-GCM encrypted vault, threat detection with automated response, compliance monitoring, and Docker sandbox isolation for untrusted agents.
  • Privacy Shield -- Bidirectional PII protection that shields sensitive data before it reaches LLM providers and reassembles it in the response. 3-layer hybrid detection (regex + NER + context boosting), synthetic replacement via faker, and AES-256-GCM encrypted token vault.
  • Mesh Network -- Provider-independent inter-brain messaging via Redis pub/sub + filesystem. Supports multi-instance deployment across machines with SSH-based delivery and cost-optimized LLM routing.
  • Multi-Provider Routing -- Automatic cost optimization: simple chat routes to DeepSeek ($0.27/1M), standard tasks to Sonnet ($3/1M), complex work to Opus ($15/1M).
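
The multi-provider routing idea can be sketched as a cost-tier table plus a classifier. Only the three prices come from this README; the model identifiers and keyword heuristics below are assumptions:

```typescript
// Illustrative cost-tier routing. Prices are from the README; model names
// and classification rules are placeholders, not X2000's actual logic.
type Tier = { model: string; costPer1M: number };

const TIERS: Record<"simple" | "standard" | "complex", Tier> = {
  simple:   { model: "deepseek-chat", costPer1M: 0.27 },
  standard: { model: "claude-sonnet", costPer1M: 3.0 },
  complex:  { model: "claude-opus",   costPer1M: 15.0 },
};

// Classify a task and pick the cheapest capable tier.
function routeTask(task: string): Tier {
  if (/architect|design system|migrate/i.test(task)) return TIERS.complex;
  if (/review|refactor|implement/i.test(task)) return TIERS.standard;
  return TIERS.simple; // greetings, status checks, simple chat
}
```

The design point is that routing happens per message, so a long session mixes cheap and expensive calls instead of paying Opus rates for everything.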

Brain Tiers

X2000 contains 50 specialized brains organized into 6 tiers:

| Tier | Focus | Count | Brains |
|------|-------|-------|--------|
| 1 -- Core | Leadership and direction | 7 | CEO, Engineering, Design, Product, Strategy, Research, COO |
| 2 -- Technical | Building and infrastructure | 13 | Architecture, Backend, Frontend, Database, DevOps, Cloud, Mobile, Security, Performance, Data, AI, Automation, Debugger |
| 3 -- Business | Operations and finance | 8 | Finance, Legal, Operations, HR, Investing, Innovation, Investor, Pricing |
| 4 -- Growth | Revenue and partnerships | 5 | Marketing, Sales, Growth, Partnership, Customer Success |
| 5 -- Channels | Brand and outreach | 7 | Branding, Email, Social Media, Video, Community, SEO, Paid Ads |
| 6 -- Specialized | Domain expertise | 10 | Content, DevRel, Analytics, Localization, Game Design, Support, QA, Blockchain, Psychology, R&D |


LLM Providers

X2000 supports 11 LLM providers. You need at least one configured.

| Provider | Type | Notes |
|----------|------|-------|
| Anthropic | Cloud (API key) | Claude models. Also supports Claude Max subscription via x2000 login |
| OpenAI | Cloud (API key) | GPT-4o, GPT-5, etc. |
| Google | Cloud (API key) | Gemini models |
| DeepSeek | Cloud (API key) | DeepSeek models |
| Groq | Cloud (API key) | Fast inference |
| Mistral | Cloud (API key) | Mistral and Mixtral models |
| xAI | Cloud (API key) | Grok models |
| Kimi | Cloud (API key) | Moonshot AI |
| OpenRouter | Cloud (API key) | Multi-model gateway |
| Together | Cloud (API key) | Open-source model hosting |
| Ollama | Local (no key) | Free, runs models locally. No API key required. |


Communication Channels

X2000 supports 13 communication channels:

| Channel | Setup Notes |
|---------|-------------|
| Telegram | Create a bot via @BotFather, add the token during setup |
| Slack | Create a Slack app in the developer portal |
| Discord | Create a bot in the Discord developer portal |
| Email | Gmail, Outlook, or custom SMTP/IMAP |
| WhatsApp | Scan QR code on first connection |
| SMS | Requires a Twilio or compatible provider |
| iMessage | macOS only, requires Accessibility permissions |
| Voice | Powered by Vapi and ElevenLabs MCP integrations |
| Signal | Requires Signal CLI bridge |
| Matrix | Connect to any Matrix homeserver |
| MS Teams | Microsoft Teams bot integration |
| IRC | Connect to any IRC network |
| API | Direct HTTP/WebSocket access via the gateway |
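
A channel, at its core, is an adapter that can deliver text to a recipient. Here is a minimal interface sketch with an in-memory implementation; the real adapters in src/channels/adapters/ are richer, and the method names here are assumptions:

```typescript
// Hypothetical minimal channel-adapter contract.
interface ChannelAdapter {
  readonly name: string;
  send(recipient: string, text: string): void;
}

// In-memory adapter -- handy for tests, or as a template for a new channel.
class MemoryAdapter implements ChannelAdapter {
  readonly name = "memory";
  readonly sent: { recipient: string; text: string }[] = [];
  send(recipient: string, text: string): void {
    this.sent.push({ recipient, text });
  }
}

// Fan one outbound message out to every registered channel.
function broadcast(adapters: ChannelAdapter[], recipient: string, text: string): void {
  for (const a of adapters) a.send(recipient, text);
}
```

Keeping the contract this narrow is what lets 13 very different transports (Telegram, IRC, raw HTTP) sit behind one orchestrator.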


API Reference

The gateway runs at http://localhost:3000 by default.

Endpoints

# Health check
GET /health
curl http://localhost:3000/health

# System status (brains, memory, channels)
GET /api/status
curl http://localhost:3000/api/status

# Send a message / task
POST /api/message
curl -X POST http://localhost:3000/api/message \
  -H "Content-Type: application/json" \
  -d '{"content": "Build a landing page for our product"}'

WebSocket

Connect to ws://localhost:3000 for streaming responses and real-time updates.


Configuration

All user configuration lives in ~/.x2000/:

~/.x2000/
├── config.json          # Identity, team, projects, preferences, autonomy level
├── credentials.json     # API keys and tokens (file permissions: 600)
├── .env                 # Environment variable overrides
└── logs/
    ├── gateway.log      # Standard output
    └── gateway.err.log  # Error output

  • config.json -- Created by the setup wizard. Stores your identity, team members, projects, and autonomy preferences.
  • credentials.json -- Stores LLM provider API keys, channel tokens, and integration secrets. Restricted to owner-only read/write (chmod 600).
  • .env -- Optional environment variable overrides for local development or deployment.

Project Structure

x2000/
├── src/
│   ├── brains/              # 50 specialized brains
│   │   ├── base.ts          # Base brain class with tool integration
│   │   ├── factory.ts       # Brain factory (dynamic instantiation)
│   │   ├── registry.ts      # Brain registry (all 50 registered)
│   │   ├── ceo/             # CEO Brain -- orchestrator
│   │   ├── engineering/     # Engineering Brain
│   │   ├── product/         # Product Brain
│   │   └── ...              # 46 more brain directories
│   ├── ai/                  # LLM client, providers, prompts, compaction
│   │   └── providers/       # 11 provider adapters
│   ├── gateway-v2/          # Hono-based modular gateway
│   ├── channels/            # 13 channel adapters
│   │   ├── orchestrator.ts  # Channel routing
│   │   ├── message-bus.ts   # Cross-channel message bus
│   │   └── adapters/        # Telegram, Slack, Discord, Email, etc.
│   ├── tools/               # 30 tools (file I/O, shell, browser, deploy, etc.)
│   ├── memory/              # Forever-learning memory system
│   ├── guardrails/          # 5-layer guardrails + earned autonomy
│   ├── agents/              # Agent spawn, sessions, collaboration
│   ├── workers/             # BullMQ worker system (Redis-backed)
│   ├── supervisor/          # Worker supervisor with heartbeats
│   ├── scheduler/           # Task scheduling
│   ├── proactive/           # Proactive operations, goals, pulse
│   ├── onboarding/          # Setup wizard
│   ├── cli/                 # CLI entry point
│   ├── config/              # User config, env config
│   ├── plugins/             # Plugin system
│   ├── security/            # Security layer
│   │   ├── privacy-shield/  # Bidirectional PII shielding (V2)
│   │   ├── fortress/        # Zero-trust risk engine
│   │   ├── audit-log.ts     # JSONL audit log with rotation
│   │   └── vault.ts         # AES-256-GCM encrypted credentials
│   ├── mesh/                # Inter-brain messaging network
│   │   ├── message-hub.ts   # Provider-independent message pool
│   │   ├── identity.ts      # Instance identity management
│   │   ├── hmac.ts          # HMAC authentication for peers
│   │   └── registry.ts      # Peer registry and discovery
│   ├── rnd/                 # R&D experiments and benchmarks
│   ├── web/                 # Web UI
│   └── types/               # TypeScript type definitions
├── manifest.json            # System manifest (brains, providers, channels, tools)
├── package.json             # Dependencies and scripts
├── tsconfig.json            # TypeScript configuration
└── CLAUDE.md                # AI operating protocols (works with all LLM providers)

Note on CLAUDE.md files: The name follows the Claude Code convention, but the content is provider-agnostic. Every brain's CLAUDE.md is a standard markdown system prompt that works with Claude, GPT-4, Gemini, Llama, and all 11 supported providers. No model-specific formatting or instructions. Switching providers requires zero doctrine changes.


Development

# Build (runs brain verification as a prebuild step)
npm run build

# Run tests
npm test

# Run tests with coverage
npm run test:coverage

# Type check without emitting
npm run typecheck

# Lint
npm run lint

# Format
npm run format

The prebuild step runs scripts/verify-brains.ts to confirm all 50 brains are properly registered in the factory and registry before compiling.
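
The kind of check a verification script like this performs can be sketched as follows. The tier names and counts come from the Brain Tiers table above; the function shape is an assumption, not the contents of scripts/verify-brains.ts:

```typescript
// Expected brain counts per tier, per the Brain Tiers table (totals 50).
const EXPECTED_TIER_COUNTS: Record<string, number> = {
  core: 7, technical: 13, business: 8, growth: 5, channels: 7, specialized: 10,
};

// Given a registry mapping brain name -> tier, return a list of problems.
function verifyRegistry(registered: Map<string, string>): string[] {
  const errors: string[] = [];

  // 1. The total must be 50.
  const total = Object.values(EXPECTED_TIER_COUNTS).reduce((a, b) => a + b, 0);
  if (registered.size !== total) {
    errors.push(`expected ${total} brains, found ${registered.size}`);
  }

  // 2. Each tier's count must match the expected table.
  const byTier = new Map<string, number>();
  for (const tier of registered.values()) {
    byTier.set(tier, (byTier.get(tier) ?? 0) + 1);
  }
  for (const [tier, want] of Object.entries(EXPECTED_TIER_COUNTS)) {
    const got = byTier.get(tier) ?? 0;
    if (got !== want) errors.push(`tier ${tier}: expected ${want}, found ${got}`);
  }
  return errors;
}
```

Failing the build on a non-empty error list catches a brain that was added to the factory but never registered (or vice versa) before it ships.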


Troubleshooting

Port 3000 already in use

# Find what is using port 3000
lsof -i :3000

# Kill it, or start the gateway on a different port

Telegram bot not connecting

  1. Message your bot directly first (not in a group).
  2. Verify the token in ~/.x2000/credentials.json.
  3. Restart the daemon: x2000 daemon restart.

Redis not running

The worker system requires Redis for BullMQ queues and supervisor heartbeats.

# macOS (Homebrew)
brew services start redis

# Verify Redis is running
redis-cli ping
# Should return: PONG

No LLM provider configured

Run the setup wizard again to add a provider:

x2000 setup

Or use Ollama for free local inference:

# Install Ollama (macOS)
brew install ollama

# Pull a model
ollama pull llama3

# X2000 will detect Ollama automatically

View logs

# Daemon logs
x2000 daemon logs

# Or read log files directly
cat ~/.x2000/logs/gateway.log
cat ~/.x2000/logs/gateway.err.log

Reset everything

rm -rf ~/.x2000
x2000 setup

Known Limitations (Alpha)

Version 2.1.0 is alpha software. The following limitations apply.

Not Yet Implemented

| Feature | Status | ETA |
|---------|--------|-----|
| WhatsApp Channel | Declared, not implemented | TBD |
| SMS Channel | Declared, not implemented | TBD |
| iMessage Channel | Declared, not implemented | TBD |
| Signal Channel | Declared, not implemented | TBD |
| MS Teams Channel | Declared, not implemented | TBD |
| Daemon Auto-Install | Partially implemented | Soon |
| Test Suite | Minimal coverage (~5%) | In progress |

Working Channels (6 of 13)

| Channel | Status |
|---------|--------|
| Telegram | ✅ Working |
| Slack | ✅ Working |
| Discord | ✅ Working |
| Email | ✅ Working |
| API (HTTP/WebSocket) | ✅ Working |
| Voice (Vapi/ElevenLabs) | ✅ Working |

Infrastructure Requirements

| Dependency | Required | Notes |
|------------|----------|-------|
| Redis | Yes | Worker queue system (BullMQ) requires Redis |
| Supabase | Optional | Persistent memory; falls back to local storage |
| Docker | Optional | For sandboxed command execution |

Quick Redis Setup

# Option 1: Docker Compose (recommended)
docker-compose up -d

# Option 2: Homebrew (macOS)
brew install redis && brew services start redis

# Option 3: Linux
sudo apt install redis-server && sudo systemctl start redis

# Verify
redis-cli ping  # Should return: PONG

Known Issues

  1. Gateway v2 migration incomplete — Some routes use Express, others use Hono
  2. Memory not persistent without Supabase — Patterns/learnings lost on restart without Supabase
  3. Some tools have TODO comments — scheduler/worker.ts has incomplete sections
  4. Crypto tool requires an optional dependency — send_payment needs @coinbase/agentkit installed

For Testers

See GETTING_STARTED.md for a complete tester guide including:

  • Prerequisites and setup
  • What to test (and what to avoid)
  • How to report bugs
  • FAQ

Privacy Shield

X2000 includes a bidirectional PII protection system that ensures sensitive data never reaches LLM provider servers in plain text.

How It Works

User Input → PII Detection → Shield (tokenize/synthesize) → LLM Provider → Response → Reassemble → User Output

Detection Layers

| Layer | Method | Detects |
|-------|--------|---------|
| 1 | Regex + Validators | SSN (range validation), credit cards (Luhn), phone, email, addresses |
| 2 | NER (compromise.js) | Names, organizations, places |
| 3 | Context Boosting | Presidio-style keyword confidence adjustment |
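
As a concrete example of a Layer-1 validator: the README says credit cards are checked with the Luhn algorithm. This is a standard Luhn implementation, not X2000's own code:

```typescript
// Luhn checksum validation: doubles every second digit from the right,
// subtracts 9 from any double above 9, and checks the sum mod 10.
function luhnValid(cardNumber: string): boolean {
  const digits = cardNumber.replace(/[\s-]/g, "");
  if (!/^\d{12,19}$/.test(digits)) return false; // typical card lengths
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}
```

Validating the checksum (rather than just pattern-matching 16 digits) is what keeps order IDs and tracking numbers from being falsely flagged as cards.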

Replacement Modes

| Mode | Example Input | Example Output |
|------|---------------|----------------|
| Token | Email John at [email protected] | Email [NAME_1] at [EMAIL_1] |
| Synthetic | Email John at [email protected] | Email Alex Chen at [email protected] |

Synthetic mode uses @faker-js/faker to generate realistic replacements that produce better LLM output quality. All mappings are stored in an AES-256-GCM encrypted vault for reassembly.

Configuration

Set PRIVACY_SHIELD_MODE in your environment:

  • token -- Replace PII with bracketed tokens
  • synthetic -- Replace PII with realistic fake data (requires @faker-js/faker)
  • disabled -- No PII protection (default)

Mesh Network

X2000 supports multi-instance deployment where multiple brains run on separate machines and communicate through a provider-independent message hub.

Architecture

Machine A (Tony)                    Machine B (Jasson)
┌──────────────────┐               ┌──────────────────┐
│  X2000 Gateway   │               │  X2000 Gateway   │
│  ┌────────────┐  │  Redis/SSH    │  ┌────────────┐  │
│  │ MessageHub │◄─┼──────────────►┼──│ MessageHub │  │
│  └────────────┘  │               │  └────────────┘  │
│  mesh-outbox/    │               │  mesh-inbox/     │
└──────────────────┘               └──────────────────┘
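
Since src/mesh/hmac.ts authenticates peers with HMAC, a plausible message envelope signs the payload with the shared mesh secret. The envelope fields and signing format below are assumptions for illustration, using only Node's built-in crypto module:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical mesh message envelope; field names are illustrative.
interface MeshEnvelope {
  from: string;
  to: string;
  body: string;
  sig: string; // hex-encoded HMAC-SHA256 over "from|to|body"
}

function sign(from: string, to: string, body: string, secret: string): MeshEnvelope {
  const sig = createHmac("sha256", secret).update(`${from}|${to}|${body}`).digest("hex");
  return { from, to, body, sig };
}

// Constant-time comparison avoids leaking signature bytes via timing.
function verify(env: MeshEnvelope, secret: string): boolean {
  const expected = createHmac("sha256", secret)
    .update(`${env.from}|${env.to}|${env.body}`)
    .digest("hex");
  const a = Buffer.from(env.sig, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Signing with a shared secret means a peer reached over Redis or SSH can reject any message whose sender does not hold the mesh secret, even on an untrusted transport.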

Cost-Optimized Routing

Messages are automatically routed to the cheapest capable LLM:

| Content Type | Provider | Cost/1M tokens |
|--------------|----------|----------------|
| Simple chat, greetings, status | DeepSeek | $0.27 |
| Standard tasks, code review | Claude Sonnet | $3.00 |
| Complex tasks, architecture | Claude Opus | $15.00 |

Setup

# Initialize mesh identity
x2000 mesh init

# Connect to a peer
x2000 mesh connect <peer-address>:3000 --secret <shared-secret>

# Check mesh status
x2000 mesh status

Codebase Sync

For multi-machine deployments, use the included sync script:

# Set peer connection in environment
export X2000_PEER_HOST="user@peer-ip"

# Bidirectional sync
bash scripts/sync-codebases.sh

# One-way push or pull
bash scripts/sync-codebases.sh push
bash scripts/sync-codebases.sh pull

MCP Integrations

X2000 connects to 8 external services via the Model Context Protocol:

| Integration | Powers | Description |
|-------------|--------|-------------|
| Resend | Email | Transactional and marketing email delivery |
| Railway | Deploy | One-command cloud deployments |
| Vapi | Voice/Calls | Voice AI assistants and phone calls |
| ElevenLabs | TTS/STT | Text-to-speech and speech-to-text |
| n8n | Automation | Workflow automation and skill orchestration |
| Stripe | Payments | Payment processing and subscription management |
| Supabase | Database/Memory | Persistent storage for memory, patterns, and learnings |
| SocialFly | Social Media | Social media content creation, scheduling, and image generation |


License

MIT