

RepoChief Core

AI Agent Orchestration Engine - Linear for AI Agents

RepoChief Core is the foundational orchestration engine for managing swarms of AI coding agents. It provides the infrastructure for task distribution, quality verification, cost tracking, and parallel execution of AI-powered development tasks.

Features

  • 🤖 AI Agent Orchestration: Manage 10-100 AI agents working in parallel
  • 📊 Context Window Management: Intelligent task sizing based on model token limits
  • ✅ Quality Gates: Automated verification pipelines for generated code
  • 💰 Cost Tracking: Real-time API usage monitoring and budget controls
  • 🧠 Agent Profiles: Specialized agents for different tasks (coding, testing, reviewing)
  • 🔄 Task Dependencies: DAG-based task execution with automatic scheduling
  • 🎭 Mock Mode: Test workflows without API costs
  • 🏠 Local-First: Your code never leaves your machine

System Requirements

  • Node.js 18+
  • Tmux (for agent window management)
    • macOS: brew install tmux
    • Ubuntu/Debian: sudo apt-get install tmux
    • Windows: Use WSL2 with Ubuntu
  • 4GB RAM minimum (8GB recommended for larger swarms)

Installation

npm install @liftping/repochief-core

# Copy environment template
cp .env.example .env

# Configure your API keys (see API Key Setup below)

API Key Setup

RepoChief supports multiple AI providers. You'll need at least one API key to use real AI models:

1. Get Your API Keys

Choose one or more providers:

  • OpenAI: https://platform.openai.com/api-keys
    • Models: GPT-4, GPT-3.5-Turbo
    • Best for: Code generation, general tasks
  • Anthropic: https://console.anthropic.com/
    • Models: Claude 3 Opus, Claude 3 Sonnet
    • Best for: Code comprehension, complex analysis
  • Google AI: https://makersuite.google.com/app/apikey
    • Models: Gemini Pro
    • Best for: Multi-modal tasks, exploration

2. Set Environment Variables

Create a .env file in your project root:

# Add one or more API keys
OPENAI_API_KEY=sk-...your-key-here
ANTHROPIC_API_KEY=sk-ant-...your-key-here
GOOGLE_API_KEY=AIza...your-key-here

# Optional: Run in mock mode (no API calls)
MOCK_MODE=false
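
If your entry script doesn't pick up the .env file on its own, a common approach is to load it with the dotenv package before creating the orchestrator (a minimal sketch; dotenv is an assumption here, not a documented dependency of repochief-core):

// Load .env into process.env (requires: npm install dotenv)
require('dotenv').config();

const { createOrchestrator } = require('@liftping/repochief-core');

const orchestrator = createOrchestrator({
    mockMode: process.env.MOCK_MODE === 'true'
});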

3. Verify Setup

const { createOrchestrator } = require('@liftping/repochief-core');

async function verifySetup() {
  const orchestrator = createOrchestrator({ mockMode: false });
  await orchestrator.initialize();
  
  const health = await orchestrator.modelClient.healthCheck();
  console.log('Available providers:', health);
  
  await orchestrator.shutdown();
}

verifySetup();

Quick Start

Run the Demo

Try our TODO API demo that showcases a 4-agent swarm:

# Run in mock mode (no API costs)
MOCK_MODE=true node demo.js

# Run with real AI models
node demo.js

The demo creates:

  1. Alice-Analyst: Comprehends TODO API requirements
  2. Bob-Developer: Generates Express.js implementation
  3. Carol-QA: Creates comprehensive tests
  4. David-Reviewer: Validates code quality

Basic Usage

const { 
    createOrchestrator, 
    createAgentProfile, 
    AgentTemplates 
} = require('@liftping/repochief-core');

// Create orchestrator
const orchestrator = createOrchestrator({
    sessionName: 'my-project',
    totalBudget: 100 // $100 budget
});

// Initialize
await orchestrator.initialize();

// Create agents
const seniorDev = await orchestrator.createAgent({
    name: 'senior-dev',
    ...AgentTemplates.SENIOR_DEVELOPER
});

const qaEngineer = await orchestrator.createAgent({
    name: 'qa-engineer',
    ...AgentTemplates.QA_ENGINEER
});

// Queue tasks
await orchestrator.queueTask({
    type: 'generation',
    objective: 'Create a REST API for user management',
    context: ['models/user.js', 'config/database.js'],
    maxTokens: 50000,
    successCriteria: [
        'CRUD operations for users',
        'Input validation',
        'Error handling'
    ]
});

// Monitor costs
orchestrator.on('costUpdate', ({ total, cost }) => {
    console.log(`Cost update: $${cost.toFixed(4)} (Total: $${total.toFixed(2)})`);
});

Architecture

repochief-core/
├── core/
│   └── AIAgentOrchestrator.js    # Main orchestration engine
├── agents/
│   └── AIAgentProfile.js         # Agent capability definitions
├── context/
│   └── TokenCounter.js           # Token counting for multiple models
├── cost/
│   └── CostTracker.js            # API cost tracking and budgets
├── quality/
│   └── QualityGateRunner.js      # Verification pipeline runner
└── templates/                     # Prompt templates for tasks

Task Types

RepoChief supports four primary AI task types:

1. Comprehension

Understanding existing code, requirements, and architecture.

{
    type: 'comprehension',
    objective: 'Analyze the authentication system',
    context: ['auth/', 'middleware/'],
    maxTokens: 100000
}

2. Generation

Creating new code, features, or implementations.

{
    type: 'generation',
    objective: 'Implement user registration endpoint',
    context: ['models/user.js', 'routes/'],
    successCriteria: ['Validation', 'Password hashing', 'Email confirmation']
}

3. Validation

Verifying correctness, quality, and compliance.

{
    type: 'validation',
    objective: 'Review security of payment processing',
    context: ['services/payment.js'],
    specificChecks: ['PCI compliance', 'Input sanitization']
}

4. Exploration

Researching and investigating technical solutions.

{
    type: 'exploration',
    objective: 'Find best approach for real-time notifications',
    constraints: ['Must scale to 100k users', 'Low latency required']
}

Agent Profiles

Pre-configured agent templates for common roles:

  • SENIOR_DEVELOPER: GPT-4 based, handles complex generation and refactoring
  • QA_ENGINEER: Specialized in testing and validation
  • SECURITY_EXPERT: Claude-3 based, focuses on security analysis
  • ARCHITECT: High-context exploration and system design
  • JUNIOR_DEVELOPER: Cost-effective for simple tasks
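
For illustration, a template is spread into the createAgent call and individual fields can be overridden, as in Basic Usage above (a minimal sketch; only the name override is shown, other override fields are not assumed):

const { createOrchestrator, AgentTemplates } = require('@liftping/repochief-core');

const orchestrator = createOrchestrator({ sessionName: 'security-review' });
await orchestrator.initialize();

// Start from the SECURITY_EXPERT template and override only the agent name
const securityExpert = await orchestrator.createAgent({
    name: 'security-expert',
    ...AgentTemplates.SECURITY_EXPERT
});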

Quality Gates

Built-in verification gates:

  • Test Runners: Jest, Mocha, Pytest, Go test
  • Linters: ESLint, Pylint, Golint
  • Security: npm audit, Bandit
  • Complexity: Cyclomatic complexity analysis
  • Coverage: Code coverage requirements

Cost Management

API Pricing Guide

Approximate costs per 1,000 tokens (as of 2024):

| Model           | Input   | Output  | Best For                   |
|-----------------|---------|---------|----------------------------|
| GPT-3.5-Turbo   | $0.0005 | $0.0015 | Simple tasks, high volume  |
| GPT-4           | $0.01   | $0.03   | Complex generation         |
| Claude 3 Sonnet | $0.003  | $0.015  | Balanced tasks             |
| Claude 3 Opus   | $0.015  | $0.075  | Deep analysis              |
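
As a back-of-the-envelope illustration of how these rates translate into task budgets (a sketch using the table above; actual provider pricing may change):

// Rough cost estimate for a single GPT-4 task, using the per-1k-token rates above
const INPUT_PRICE = 0.01;   // $ per 1,000 input tokens
const OUTPUT_PRICE = 0.03;  // $ per 1,000 output tokens

const inputTokens = 40000;  // context sent to the model
const outputTokens = 10000; // generated code / analysis

const estimatedCost =
    (inputTokens / 1000) * INPUT_PRICE +
    (outputTokens / 1000) * OUTPUT_PRICE;

console.log(`Estimated cost: $${estimatedCost.toFixed(2)}`); // ≈ $0.70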

Budget Controls

const costTracker = orchestrator.costTracker;

// Set budgets
costTracker.setBudget('total', 500);    // $500 total
costTracker.setBudget('daily', 50);     // $50 per day
costTracker.setBudget('perTask', 10);   // $10 per task

// Monitor spending
costTracker.on('budgetAlert', ({ type, threshold, current }) => {
    console.log(`Budget alert: ${type} at ${threshold}% ($${current})`);
});

// Get reports
const report = costTracker.getReport({
    includeTimeSeries: true,
    includeUsage: true
});

Token Management

const { getTokenCounter } = require('@liftping/repochief-core');
const counter = getTokenCounter();

// Count tokens
const tokens = counter.countTokens('Your text here', 'gpt-4o');

// Check if content fits
const fits = counter.fitsInContext(messages, 'claude-3-opus');

// Split large content
const chunks = counter.splitContent(largeText, 'gpt-4o');

Event System

The orchestrator emits various events:

orchestrator.on('initialized', () => {});
orchestrator.on('agentCreated', (agent) => {});
orchestrator.on('taskQueued', (task) => {});
orchestrator.on('taskAssigned', ({ task, agent }) => {});
orchestrator.on('taskCompleted', ({ task, result }) => {});
orchestrator.on('taskFailed', ({ task, error }) => {});
orchestrator.on('costUpdate', (costInfo) => {});
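
For example, a minimal progress logger can be wired up from these events (payload fields beyond those shown above, such as task.objective, are assumptions):

let completed = 0;

orchestrator.on('taskCompleted', ({ task }) => {
    completed += 1;
    console.log(`[${completed}] Completed: ${task.objective}`);
});

orchestrator.on('taskFailed', ({ task, error }) => {
    console.error(`Failed: ${task.objective} - ${error.message}`);
});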

Advanced Usage

Custom Quality Gates

// QualityGate and createQualityRunner are assumed to be exported by @liftping/repochief-core
const { QualityGate, createQualityRunner } = require('@liftping/repochief-core');

class CustomGate extends QualityGate {
    async execute(code, context) {
        // Your validation logic
        return {
            status: 'pass', // or 'fail'
            details: { /* results */ }
        };
    }
}

const runner = createQualityRunner();
runner.register('custom', new CustomGate());

Task Dependencies

const tasks = [
    {
        id: 'task-1',
        type: 'comprehension',
        objective: 'Understand current API'
    },
    {
        id: 'task-2',
        type: 'generation',
        objective: 'Add new endpoints',
        dependencies: ['task-1'] // Waits for task-1
    },
    {
        id: 'task-3',
        type: 'validation',
        objective: 'Test new endpoints',
        dependencies: ['task-2'] // Waits for task-2
    }
];

for (const task of tasks) {
    await orchestrator.queueTask(task);
}

Best Practices

  1. Task Sizing: Keep tasks under 50k tokens for better success rates
  2. Agent Specialization: Use appropriate agents for each task type
  3. Budget Monitoring: Set conservative budgets and monitor alerts
  4. Quality First: Always include relevant quality gates
  5. Incremental Progress: Break large features into smaller tasks

Examples

Real AI Integration

Run practical examples with actual AI models:

# Simple code generation example
node examples/real-ai-integration.js

# Multi-agent code review workflow
node examples/real-ai-integration.js workflow

Using the CLI

# Install the CLI globally
npm install -g @liftping/repochief-cli

# Run with a task file
repochief run examples/tasks/code-review.json --budget 5

# Run in mock mode (no API costs)
repochief run examples/tasks/simple-generation.json --mock

Migration from Enhanced Task Management

RepoChief Core is built on the foundation of enhanced-task-management but adapted for AI agents:

  • Human roles → AI agent profiles
  • Time estimates → Token budgets
  • Manual execution → Automated orchestration
  • Code reviews → Quality gates

Testing

RepoChief Core includes comprehensive test coverage for all major components.

Run Tests

# Run all tests
npm test

# Run unit tests only
npm run test:unit

# Run integration tests only
npm run test:integration

# Watch mode for development
npm run test:watch

# Run with coverage (if configured)
npm run test:coverage

Test Structure

tests/
├── unit/
│   ├── agent-profile.test.js    # Agent profile tests
│   ├── cost-tracker.test.js     # Cost tracking tests
│   ├── token-counter.test.js    # Token counting tests
│   └── quality-gates.test.js    # Quality gate tests
└── integration/
    ├── orchestrator.test.js      # Full orchestration tests
    └── mock-mode.test.js         # Mock mode integration tests

Writing Tests

// Example test for custom quality gate
const { expect } = require('chai');
const { MyCustomGate } = require('../src/gates/MyCustomGate');

describe('MyCustomGate', () => {
    it('should detect code issues', async () => {
        const gate = new MyCustomGate();
        const result = await gate.execute(codeString, context);
        
        expect(result.status).to.equal('fail');
        expect(result.details.issues).to.have.length.greaterThan(0);
    });
});

Troubleshooting

Common Issues

1. "Cannot find tmux session"

# Check if tmux is installed
which tmux

# Install tmux if missing
# macOS: brew install tmux
# Linux: sudo apt-get install tmux

2. "API key not found"

# Ensure .env file exists
cp .env.example .env

# Add your API keys
echo "OPENAI_API_KEY=your-key-here" >> .env
echo "ANTHROPIC_API_KEY=your-key-here" >> .env

3. "Budget exceeded" error

// Increase budget in orchestrator config
const orchestrator = createOrchestrator({
    sessionName: 'my-project',
    totalBudget: 500  // Increase from default
});

// Or use mock mode for testing
MOCK_MODE=true node your-script.js

4. "Context too large" error

// Split large tasks into smaller chunks
const chunks = tokenCounter.splitContent(largeContent, 'gpt-4o');
for (const chunk of chunks) {
    await orchestrator.queueTask({
        type: 'comprehension',
        objective: 'Analyze code chunk',
        context: [chunk],
        maxTokens: 40000
    });
}

5. "Agent not available" error

// Check agent status before assigning tasks
const agents = orchestrator.getAgents();
const availableAgent = agents.find(a => 
    a.status === 'idle' && a.canHandle(taskType)
);

if (!availableAgent) {
    console.log('Waiting for available agent...');
    // Implement retry logic or queue management
}

Debug Mode

Enable detailed logging for troubleshooting:

const orchestrator = createOrchestrator({
    sessionName: 'debug-session',
    logLevel: 'debug',
    logToFile: true
});

// Or set environment variable
DEBUG=repochief:* node your-script.js

Performance Tips

  1. Optimize Task Size: Keep tasks under 30k tokens for better performance
  2. Use Appropriate Models: GPT-3.5 for simple tasks, GPT-4 for complex ones
  3. Batch Related Tasks: Group similar tasks to reduce context switching
  4. Monitor Token Usage: Track token consumption to optimize costs
  5. Leverage Mock Mode: Test workflows without API costs

Getting Help

Security

  • Never commit API keys to the repository
  • Use environment variables for sensitive configuration
  • Report security vulnerabilities to: [email protected]
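
A typical safeguard (an assumption about your project layout, not something repochief-core enforces) is to keep the .env file out of version control:

# .gitignore
.env
.env.local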

Contributing

See CONTRIBUTING.md for development setup and guidelines.

License

MIT