monult v1.0.1
🧠 Monult
Multi-Model AI Operating System for Developers
Stop calling a single AI model. Start running AI assemblies.
Monult orchestrates multiple AI models to collaborate, debate, verify, and deliver consensus-driven results — like a team of AI engineers working together.
Quick Start · Features · Architecture · Documentation · Contributing
The Problem
Today's AI development is fragmented:
- Single model dependency — locked into one provider's strengths and weaknesses
- No verification — AI answers go unchecked, leading to hallucinations in production
- No collaboration — models work in isolation, missing the benefits of diverse reasoning
- Manual orchestration — developers glue together API calls without structure
- No memory — every conversation starts from scratch
The Solution
Monult is an AI orchestration runtime that manages models, agents, context memory, tools, and developer workflows. Instead of calling a single model, you run AI assemblies where multiple models collaborate like a team.
```text
Developer Request
  → Monult Runtime
    → Model A proposes solution
    → Model B critiques it
    → Model C improves it
    → Model D verifies it
  → Verified, consensus-driven answer
```

Quick Start
Installation
```bash
# Install globally
npm install -g monult
```

Setup Workspace
Initialize Monult and configure your AI providers.
```bash
# Initialize a new project (.monult.json)
monult init

# Configure providers locally
monult provider add openai
monult provider add claude
```

Basic Commands
```bash
# Get a quick answer
monult ask "Design a scalable REST API"

# Run a full DevTeam collaboration (Architect, Frontend, Backend, Security)
monult devteam "Build an authentication system"

# Analyze a codebase and get heuristic suggestions
monult analyze ./my-repo

# See what other capabilities exist
monult --help
```

SDK Usage
```ts
import { Monult } from 'monult';

const monult = new Monult({
  providers: {
    openai: { apiKey: process.env.OPENAI_API_KEY },
    claude: { apiKey: process.env.ANTHROPIC_API_KEY },
    gemini: { apiKey: process.env.GOOGLE_API_KEY },
    cohere: { apiKey: process.env.COHERE_API_KEY },
  }
});

// Simple generation
const result = await monult.generate({
  model: 'claude',
  prompt: 'Explain recursion with a real-world example'
});

// Multi-model assembly
const assembly = await monult.assembly({
  models: ['claude', 'openai', 'gemini', 'cohere'],
  debate: true,
  verify: true,
  prompt: 'Design a scalable event-driven backend architecture'
});

console.log(assembly.consensus);  // Best answer
console.log(assembly.confidence); // Confidence score
console.log(assembly.reasoning);  // Full reasoning trace

// Agent assembly — multiple agents collaborate
const agentResult = await monult.agentAssembly({
  agents: ['architect', 'security', 'devops'],
  task: 'Design a scalable backend architecture'
});

// Hybrid assembly — models + agents together
const hybrid = await monult.hybridAssembly({
  models: ['claude', 'openai', 'cohere'],
  agents: ['architect', 'security'],
  prompt: 'Design a secure authentication system'
});

// Register a custom provider at runtime
monult.registerProvider('my-model', myCustomProvider);
```

CLI
```bash
# Start interactive chat
monult chat

# Ask an AI a question
monult ask "Design scalable backend" --debate

# Run agent assembly
monult agent-assembly "Design a scalable backend"

# Run hybrid assembly (models + agents)
monult hybrid-assembly "Secure login system" -m claude,openai,cohere -a architect,security

# Analyze a project
monult analyze ./project

# Run an AI agent
monult agent run debugger --input "fix auth bug"

# Start the API server
monult start --port 3000
```

Features
🔌 Universal AI SDK
Connect to any AI provider through a unified API. Built-in support for Claude, OpenAI, Gemini, Cohere, and Local LLMs — plus a dynamic registration system for custom providers.
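The unified-provider idea can be sketched in a few lines of TypeScript. The names below (`Provider`, `ProviderRegistry`, `echoProvider`) are illustrative assumptions for this sketch, not Monult's actual `base.ts` contract:

```ts
// Hypothetical sketch of a unified provider interface and dynamic
// registration. Real adapters would call a provider's HTTP API.
interface Provider {
  name: string;
  generate(prompt: string): Promise<string>;
}

class ProviderRegistry {
  private providers = new Map<string, Provider>();

  register(provider: Provider): void {
    this.providers.set(provider.name, provider);
  }

  get(name: string): Provider {
    const p = this.providers.get(name);
    if (!p) throw new Error(`Unknown provider: ${name}`);
    return p;
  }
}

// A stub provider for local testing -- no network calls.
const echoProvider: Provider = {
  name: "echo",
  generate: async (prompt) => `echo: ${prompt}`,
};

const registry = new ProviderRegistry();
registry.register(echoProvider);
```

Because every adapter satisfies the same interface, higher layers (assemblies, routing, debates) never need to know which vendor is behind a given name.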
🏗️ AI Assembly Engine
Multiple models collaborate in structured pipelines: propose → critique → improve → verify → merge.
🤖 Agent Assembly
Multiple AI agents (architect, security, devops, etc.) propose, critique, and improve solutions collaboratively.
🔀 Hybrid Assembly
Models and agents collaborate together: model reasoning → agent critique → model improvement → consensus.
⚔️ AI Debate System
Models argue for different approaches. The system tracks arguments, counterarguments, and improvements to find the strongest answer.
🗳️ Consensus Engine
Voting, weighted scoring, and confidence-based selection to determine the single best response.
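As one illustration of confidence-weighted selection (a plausible strategy, not necessarily Monult's exact scoring), a minimal consensus function could look like:

```ts
// Hypothetical sketch: sum each distinct answer's confidence across
// models, then pick the heaviest bucket. Types are illustrative only.
interface Candidate {
  model: string;
  answer: string;
  confidence: number; // 0..1, self-reported or scored by a verifier
}

function consensus(candidates: Candidate[]): { answer: string; confidence: number } {
  const scores = new Map<string, number>();
  for (const c of candidates) {
    scores.set(c.answer, (scores.get(c.answer) ?? 0) + c.confidence);
  }
  let best = "";
  let bestScore = 0;
  for (const [answer, score] of scores) {
    if (score > bestScore) { best = answer; bestScore = score; }
  }
  // Normalize the winning bucket against total confidence mass.
  const total = candidates.reduce((s, c) => s + c.confidence, 0);
  return { answer: best, confidence: total ? bestScore / total : 0 };
}
```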
🧭 Smart Model Router & Cost Optimizer
Automatically picks the optimal model based on task type, latency requirements, cost budget, and accuracy needs.
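A routing decision of this kind can be reduced to filtering on hard constraints and maximizing accuracy among what remains. The profile fields and numbers below are placeholder assumptions, not real provider pricing:

```ts
// Hypothetical routing sketch: filter by cost/latency budget, then
// prefer the most accurate eligible model.
interface ModelProfile {
  name: string;
  costPer1kTokens: number; // placeholder pricing, not real rates
  avgLatencyMs: number;
  accuracy: number;        // 0..1 benchmark score
}

function routeModel(
  models: ModelProfile[],
  opts: { maxCost?: number; maxLatencyMs?: number }
): ModelProfile {
  const eligible = models.filter(
    (m) =>
      (opts.maxCost === undefined || m.costPer1kTokens <= opts.maxCost) &&
      (opts.maxLatencyMs === undefined || m.avgLatencyMs <= opts.maxLatencyMs)
  );
  if (eligible.length === 0) throw new Error("No model satisfies the constraints");
  return eligible.reduce((a, b) => (b.accuracy > a.accuracy ? b : a));
}
```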
🧠 Persistent Memory
Vector-based memory system with four layers: conversation, project, tool, and knowledge memory.
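The recall side of a vector memory boils down to nearest-neighbor search over embeddings. A toy in-memory sketch using cosine similarity (an illustration of the idea, not the real `vector-store.ts`):

```ts
// Toy cosine-similarity recall over an in-memory store.
type Embedding = number[];

function cosine(a: Embedding, b: Embedding): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored texts most similar to the query embedding.
function recall(
  store: { text: string; vector: Embedding }[],
  query: Embedding,
  k: number
): string[] {
  return [...store]
    .sort((x, y) => cosine(y.vector, query) - cosine(x.vector, query))
    .slice(0, k)
    .map((e) => e.text);
}
```

A production store would index embeddings (e.g. with an ANN index) rather than sorting the whole collection per query; the linear scan here is just for clarity.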
⚙️ Workflow Engine
Declarative JSON-based execution engine that links models, agents, and assemblies together into automated, deterministic pipelines.
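A declarative pipeline like this reduces to running steps in order, each feeding the next. A minimal sketch, assuming a step shape that Monult's actual JSON schema may not match:

```ts
// Hypothetical workflow step: name plus an async transform.
interface Step {
  name: string;
  run: (input: string) => Promise<string>;
}

// Execute steps sequentially; each step's output is the next step's input.
async function runWorkflow(steps: Step[], input: string): Promise<string> {
  let current = input;
  for (const step of steps) {
    current = await step.run(current);
  }
  return current;
}
```

In the real engine each step would presumably invoke a model, agent, or assembly; the deterministic part is that the step graph, not the code, defines the order.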
🧩 Plugin System
Extensible architecture allowing the community to build custom providers, tools, memory sources, and command plugins.
🤖 AI Agent Framework
Autonomous agents that analyze code, read repositories, execute tasks, and call external tools.
Built-in agents: Debugger · Architect · Security · DevOps · Documentation
🔧 Tool Execution Layer
Extensible tool system with built-in tools for code analysis, documentation parsing, repository reading, and web search.
🛡️ Security Engine
Protect against prompt injection, API key leaks, malicious code suggestions, and unsafe dependencies.
💰 Cost Optimization
Track token usage, cost per request per model, and automatically choose cheaper models when appropriate.
📊 Web Dashboard
Real-time control panel showing active models, running agents, live debates, token usage, latency metrics, and reasoning visualization.
Additional Features
- AI reasoning graph visualization
- AI decision tree tracking
- Repository intelligence engine
- Autonomous bug fixing agent
- Documentation generation agent
- AI task scheduler
- AI collaboration history
- AI confidence scoring system
- AI experiment sandbox
- AI prompt testing lab
- Developer intent detection
- Plugin marketplace
- AI cost analytics dashboard
- AI workflow automation builder
- AI model benchmarking system
Architecture
```text
monult/
├── src/
│   ├── core/                    # Core orchestration
│   │   ├── sdk.ts               # Universal AI SDK
│   │   ├── assembly.ts          # Model Assembly engine
│   │   ├── agent-assembly.ts    # Agent Assembly engine
│   │   ├── hybrid-assembly.ts   # Hybrid Assembly engine
│   │   ├── debate.ts            # Debate system
│   │   ├── consensus.ts         # Consensus engine
│   │   ├── router.ts            # Smart model router
│   │   └── intent.ts            # Intent detection
│   │
│   ├── providers/               # AI provider adapters
│   │   ├── base.ts              # Provider interface
│   │   ├── claude.ts            # Anthropic Claude
│   │   ├── openai.ts            # OpenAI GPT
│   │   ├── gemini.ts            # Google Gemini
│   │   ├── cohere.ts            # Cohere Command R+
│   │   └── local.ts             # Local LLMs (Ollama)
│   │
│   ├── memory/                  # Persistent memory
│   │   ├── vector-store.ts      # Vector similarity search
│   │   └── context-manager.ts   # Multi-layer context
│   │
│   ├── agents/                  # Agent framework
│   │   ├── base.ts              # Base agent class
│   │   ├── registry.ts          # Agent registry
│   │   ├── debugger.ts          # Debugger agent
│   │   ├── architect.ts         # Architect agent
│   │   ├── security.ts          # Security agent
│   │   ├── devops.ts            # DevOps agent
│   │   └── docs.ts              # Documentation agent
│   │
│   ├── tools/                   # Tool system
│   │   ├── base.ts              # Tool interface
│   │   ├── code-analyzer.ts     # Code analysis
│   │   ├── web-search.ts        # Web search
│   │   ├── repo-reader.ts       # Repository reader
│   │   ├── doc-parser.ts        # Documentation parser
│   │   └── db-analyzer.ts       # Database schema analyzer
│   │
│   ├── security/                # Security engine
│   │   └── engine.ts
│   │
│   ├── cost/                    # Cost tracking
│   │   └── tracker.ts
│   │
│   ├── workflows/               # Workflow engine
│   │   ├── engine.ts            # JSON workflow executor
│   │   └── base.ts              # Workflow types
│   │
│   ├── plugins/                 # Extensibility
│   │   ├── registry.ts          # Plugin manager
│   │   └── base.ts              # Plugin interface
│   │
│   ├── config/                  # Configuration management
│   │   └── config-manager.ts    # Global & local config parser
│   │
│   ├── cli/                     # Modular CLI platform
│   │   ├── commands/            # Sub-command modules
│   │   ├── utils/               # CLI utilities
│   │   └── index.ts             # Commander entrypoint
│   │
│   ├── api/                     # REST API
│   │   ├── server.ts
│   │   ├── swagger.ts           # OpenAPI 3.0 specification
│   │   └── routes.ts
│   │
│   └── index.ts                 # SDK entry point
│
├── dashboard/                   # Next.js Web Dashboard
├── docs/                        # Documentation
├── examples/                    # Example scripts
└── tests/                       # Test suite
```

Documentation
| Document | Description |
|----------|-------------|
| Architecture | System design and module overview |
| Quick Start | Getting started guide |
| API Reference | REST API documentation |
| Agents | Agent framework guide |
| Plugins | Plugin development guide |
Roadmap
- [x] Core orchestration engine
- [x] Multi-provider SDK (Claude, OpenAI, Gemini, Cohere, Local)
- [x] AI debate and consensus system
- [x] Smart model router
- [x] Agent framework with built-in agents
- [x] Agent Assembly (multi-agent collaboration)
- [x] Hybrid Assembly (models + agents)
- [x] Dynamic provider registration
- [x] Memory and context management
- [x] CLI interface
- [x] REST API
- [x] Web dashboard
- [ ] Plugin marketplace
- [ ] Distributed assembly execution
- [ ] Fine-tuning integration
- [ ] IDE extensions (VS Code, JetBrains)
- [ ] Cloud-hosted Monult service
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
```bash
git clone https://github.com/rohanvanvi/monult.git
cd monult
npm install
npm run build
npm test
```

License
Monult is open source software licensed under the Apache License 2.0.
You are free to use, modify, and distribute this project in accordance with the license.
Original Author: Rohan Vanvi
Built for developers who demand more from AI.
⭐ Star this repo if you find it useful!
