
multiagent-consensus

v0.1.0


Multiagent Consensus


A framework for running multi-agent consensus processes using multiple Large Language Models (LLMs). This library enables a "jury" of AI models to debate and reach consensus on queries, providing more robust and balanced responses.

Features

  • 🤖 Multiple LLM Support: Compatible with various LLM providers through the Vercel AI SDK
  • 🔄 Configurable Consensus Methods: Choose from majority, supermajority (75%), or unanimous agreement
  • 🧠 Multi-round Debates: Models can debate in multiple rounds to refine their thinking
  • 📊 Detailed Results: Get comprehensive metadata including confidence scores and processing time
  • 🧪 Flexible Output: Customize output format (text, JSON) and content detail
  • 🛠️ Highly Configurable: Set bias, system prompts, and customize debate parameters

Installation

npm install multiagent-consensus

Or with yarn:

yarn add multiagent-consensus

Basic Usage

import { ConsensusEngine } from 'multiagent-consensus';

// Create a consensus engine with your configuration
const engine = new ConsensusEngine({
  models: ['claude-3-haiku', 'gpt-4', 'palm-2'],
  consensusMethod: 'majority', // 'majority', 'supermajority', or 'unanimous'
  maxRounds: 2, // Maximum number of debate rounds
  output: {
    includeHistory: true, // Include debate history in result
    includeMetadata: true, // Include metadata in result
    format: 'text', // 'text' or 'json'
  },
});

// Run a consensus process
async function getConsensus() {
  const result = await engine.run('What is the best approach to solve climate change?');

  console.log('Final Answer:', result.answer);

  if (result.history) {
    console.log('Debate History:');
    result.history.forEach((round, i) => {
      console.log(`\nRound ${i + 1}:`);
      round.responses.forEach(response => {
        console.log(`${response.model}: ${response.response}`);
      });
    });
  }

  console.log('\nMetadata:', result.metadata);
}

getConsensus();

API Reference

ConsensusEngine

The main class for running consensus processes.

constructor(config: ConsensusConfig)

ConsensusConfig

| Parameter       | Type                                         | Description                       | Default    |
| --------------- | -------------------------------------------- | --------------------------------- | ---------- |
| models          | string[]                                     | Array of model identifiers to use | Required   |
| consensusMethod | 'majority' \| 'supermajority' \| 'unanimous' | Method to determine consensus     | 'majority' |
| maxRounds       | number                                       | Maximum number of debate rounds   | 3          |
| output          | OutputConfig                                 | Output configuration              | See below  |

OutputConfig

| Parameter       | Type             | Description                 | Default |
| --------------- | ---------------- | --------------------------- | ------- |
| includeHistory  | boolean          | Include full debate history | false   |
| includeMetadata | boolean          | Include metadata in result  | true    |
| format          | 'text' \| 'json' | Output format               | 'text'  |
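The two configuration tables above can be summarized as TypeScript shapes. The interfaces below are a sketch inferred from the tables, not the package's actual exported typings:

```typescript
// Sketch of the configuration shapes described above (inferred from the
// tables -- not the package's published type definitions).
type ConsensusMethod = 'majority' | 'supermajority' | 'unanimous';

interface OutputConfig {
  includeHistory?: boolean; // default: false
  includeMetadata?: boolean; // default: true
  format?: 'text' | 'json'; // default: 'text'
}

interface ConsensusConfig {
  models: string[]; // required
  consensusMethod?: ConsensusMethod; // default: 'majority'
  maxRounds?: number; // default: 3
  output?: OutputConfig;
}

// Example configuration object matching the interfaces:
const config: ConsensusConfig = {
  models: ['claude-3-haiku', 'gpt-4'],
  consensusMethod: 'supermajority',
  output: { includeHistory: true },
};
```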

ConsensusResult

The result of a consensus process.

| Property | Type           | Description                                |
| -------- | -------------- | ------------------------------------------ |
| answer   | string         | The final consensus answer                 |
| models   | string[]       | Models that participated in the debate     |
| metadata | ResultMetadata | Information about the process              |
| history? | RoundData[]    | Debate history (if includeHistory is true) |

ResultMetadata

| Property         | Type                   | Description                                    |
| ---------------- | ---------------------- | ---------------------------------------------- |
| totalTokens      | number                 | Total tokens used across all models and rounds |
| processingTimeMs | number                 | Total processing time in milliseconds          |
| rounds           | number                 | Number of debate rounds conducted              |
| consensusMethod  | string                 | Method used to determine consensus             |
| confidenceScores | Record<string, number> | Self-reported confidence per model             |
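The result shapes can likewise be sketched as TypeScript interfaces. These are inferred from the tables and the Basic Usage example (which reads `round.responses`, `response.model`, and `response.response`), not the package's actual exported typings:

```typescript
// Sketch of the result shapes described above (inferred -- not the
// package's published type definitions).
interface ResultMetadata {
  totalTokens: number;
  processingTimeMs: number;
  rounds: number;
  consensusMethod: string;
  confidenceScores: Record<string, number>;
}

interface RoundResponse {
  model: string;
  response: string;
}

interface RoundData {
  responses: RoundResponse[];
}

interface ConsensusResult {
  answer: string;
  models: string[];
  metadata: ResultMetadata;
  history?: RoundData[]; // present only when includeHistory is true
}

// A hand-built value of this shape, e.g. for stubbing in tests:
const sample: ConsensusResult = {
  answer: 'Example consensus answer',
  models: ['model-a', 'model-b'],
  metadata: {
    totalTokens: 1234,
    processingTimeMs: 5678,
    rounds: 2,
    consensusMethod: 'majority',
    confidenceScores: { 'model-a': 0.9, 'model-b': 0.8 },
  },
};
```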

Consensus Methods

Majority

Requires more than 50% of the models to agree on an answer. This is the most lenient consensus method and works well for straightforward queries.

Supermajority

Requires at least 75% of the models to agree. This provides a more stringent threshold for consensus, useful for complex or controversial topics.

Unanimous

Requires all models to agree completely. This is the strictest form of consensus and may require multiple debate rounds to achieve.
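In code, the three rules reduce to a threshold check. The following is an illustrative sketch of the thresholds described above, not the library's internal implementation:

```typescript
// Illustrative sketch of the three consensus thresholds -- not the
// library's internal implementation.
type ConsensusMethod = 'majority' | 'supermajority' | 'unanimous';

function hasConsensus(
  agreeing: number, // models agreeing on the leading answer
  total: number, // total models in the debate
  method: ConsensusMethod
): boolean {
  switch (method) {
    case 'majority':
      return agreeing > total / 2; // strictly more than 50%
    case 'supermajority':
      return agreeing >= total * 0.75; // at least 75%
    case 'unanimous':
      return agreeing === total; // every model agrees
  }
}
```

For example, with three models, two agreeing answers satisfy `majority` but not `supermajority` (2 of 3 is only ~67%) or `unanimous`.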

Setting Up Environment Variables

To use this package with an LLM provider, set the corresponding API key as an environment variable:

OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
COHERE_API_KEY=your_cohere_key_here

We recommend using dotenv for local development:

// In your application's entry point
import 'dotenv/config';
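A missing key typically only surfaces as a provider error at request time, so it can help to fail fast at startup. The helper below is a hypothetical snippet, not part of the package:

```typescript
// Hypothetical startup check -- not part of multiagent-consensus.
// Returns every required key that is absent or empty in the given env.
function missingKeys(
  env: Record<string, string | undefined>,
  required: string[]
): string[] {
  return required.filter(key => !env[key]);
}

// In an application you would pass process.env; shown here with a
// sample env object for illustration:
const missing = missingKeys(
  { OPENAI_API_KEY: 'sk-example', ANTHROPIC_API_KEY: undefined },
  ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY', 'COHERE_API_KEY']
);
// `missing` now lists the keys that still need values.
```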

Examples

Custom Debate with Specific System Prompts

const engine = new ConsensusEngine({
  models: ['claude-3-sonnet', 'gpt-4', 'gpt-3.5-turbo'],
  consensusMethod: 'supermajority',
  maxRounds: 3,
  modelConfig: {
    'claude-3-sonnet': {
      systemPrompt: 'You are a scientific expert focused on evidence-based reasoning.',
      temperature: 0.5,
    },
    'gpt-4': {
      systemPrompt: 'You are a philosophical thinker who considers ethical implications.',
      temperature: 0.7,
    },
    'gpt-3.5-turbo': {
      systemPrompt: 'You are a practical problem-solver focusing on realistic solutions.',
      temperature: 0.3,
    },
  },
  output: {
    includeHistory: true,
    format: 'json',
  },
});

Using Bias Presets

const engine = new ConsensusEngine({
  models: ['claude-3-opus', 'gpt-4', 'llama-3'],
  consensusMethod: 'majority',
  biasPresets: ['centrist', 'progressive', 'conservative'],
  output: {
    includeHistory: true,
  },
});

Running the Examples

The package includes a JavaScript example to demonstrate functionality.

As a Package Consumer

When you've installed the published package as a dependency in your project:

# Install the package
npm install multiagent-consensus

# Copy the example file to your project
# Run the JavaScript example
node simple-consensus.js

As a Package Developer

When developing the package itself:

# From the package directory
npm run build         # Build the package first - this creates the dist directory
npm run example       # Run the JavaScript example

The build step is crucial as it compiles the TypeScript source files into JavaScript in the dist directory. The example imports code from this directory, so if you make changes to the source files, you'll need to rebuild the package before running the example again.

License

This project is licensed under the MIT License - see the LICENSE file for details.