@4iops/core
v1.0.104
AI Ops engine — task management, multi-model dispatch, intelligence jobs. Self-hosted or cloud.
Every Claude Code session starts from scratch. 4iops fixes that.
Persistent memory, automatic bug detection, task and plan management, cost tracking, multi-model dispatch (Codex, Gemini, Ollama), and a 30+ page real-time dashboard. One command to install, zero configuration.
Quick Start
Self-hosted (embedded database, runs locally):
```shell
npx @4iops/core start
```

Hosted (zero infrastructure, connect to 4iops.com):

```shell
npx @4iops/core connect <your-api-key>
```

First run walks you through setup. Dashboard at http://localhost:37778.
What You Get
| Without 4iops | With 4iops |
|---|---|
| Session context lost on exit | Persisted and injected on next start |
| No bug tracking | Auto-detected from errors, severity classified |
| Tasks disappear after session | Persisted with plans, dependencies, sprints |
| No cost visibility | Per-session, per-project, per-model, daily trends |
| Manual agent coordination | Auto-suggested teams matched to task complexity |
| Secrets in .env files | Encrypted vault, auto-injected per project |
| No cross-session context | Observation timeline, handoff docs, preferences |
How it works
4iops hooks into Claude Code's event system. Every significant event — session start, tool use, session end — fires a hook that captures data and injects context.
```
Claude Code                                 4iops server
     |                                           |
     |-- SessionStart ------------------------->| Create session, inject context
     |-- UserPromptSubmit ---------------------->| Record prompt, check budget
     |-- PostToolUse --------------------------->| Record observation, detect bugs
     |-- SessionEnd ---------------------------->| Summarize, checkpoint, aggregate cost
     |                                           |
     |<-- Context injection ---------------------| "12 open bugs, Plan #73 active"
     |<-- Budget warning ------------------------| "$47 of $50 budget used"
     |<-- Team suggestion -----------------------| "Spawn [security-fixer, deploy-agent]"
```

30+ hooks, fully automatic. No configuration needed.
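To make the flow above concrete, here is a minimal sketch of how a PostToolUse hook could forward event data to the server. This is an illustration only: the `/api/hooks/...` endpoint path and the payload field names are assumptions, not 4iops's documented wire format.

```typescript
// Hypothetical sketch: shaping a Claude Code hook event into a request
// for the 4iops server. Endpoint path and payload fields are assumed.
type HookEvent = {
  hook_event_name: string; // e.g. "PostToolUse", "SessionStart"
  session_id: string;
  tool_name?: string;
};

function buildHookRequest(
  event: HookEvent,
  baseUrl = "http://localhost:37778",
): { url: string; body: string } {
  return {
    // Assumed route naming: one endpoint per hook event type.
    url: `${baseUrl}/api/hooks/${event.hook_event_name.toLowerCase()}`,
    body: JSON.stringify({
      sessionId: event.session_id,
      tool: event.tool_name ?? null,
      at: new Date().toISOString(),
    }),
  };
}

const req = buildHookRequest({
  hook_event_name: "PostToolUse",
  session_id: "abc123",
  tool_name: "Bash",
});
console.log(req.url);
```

A real hook would POST this payload and exit quickly so the session is never blocked.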
Multi-Model
4iops turns Claude Code into a multi-model development environment. Claude stays the brain; other models handle specific roles.
Dispatch
Send any task to any model — by description or by task ID. The receiving model gets a structured prompt, works on a branch, and reports back. Results land in Review status for verification.
```
/codex:dispatch #1234          Dispatch task #1234 to OpenAI Codex
/gemini:dispatch "add tests"   Send a free-text task to Gemini
/local:dispatch #1234          Run on local Ollama/vLLM (private, free)
/codex:review                  Codex reviews your uncommitted changes
/codex:rescue "stuck on auth"  Delegate a stuck problem to Codex
```

Task ID dispatch — fetches full task details from 4iops (title, description, priority, notes), builds a rich prompt, marks the task InProgress, and updates it on completion with the branch name, model used, and summary.
Structured completion — dispatched models output a DISPATCH_RESULT block with status, model name, branch, files changed, test results, and blockers. 4iops parses this to update the task record automatically.
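As an illustration of the structured-completion idea, here is a sketch of parsing such a block. The field names come from the list above; the exact `key=value` line format is an assumption, not 4iops's actual wire format.

```typescript
// Hypothetical parser for a DISPATCH_RESULT block in model output.
// The key=value layout is assumed; field names follow the README.
function parseDispatchResult(output: string): Record<string, string> | null {
  const match = output.match(/DISPATCH_RESULT\s*\n([\s\S]*?)(?:\n\s*\n|$)/);
  if (!match) return null; // no structured block: leave the task untouched
  const fields: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const i = line.indexOf("=");
    if (i > 0) fields[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return fields;
}

const sample = `Work done.
DISPATCH_RESULT
status=complete
model=gpt-4.1
branch=task-1234-codex
tests=42 passed`;

console.log(parseDispatchResult(sample)?.branch); // → task-1234-codex
```

Parsing a machine-readable trailer like this is what lets the task record update automatically instead of relying on the model's prose.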
Agent discretion — Claude agents can dispatch tasks to other models at their own discretion. Tests, boilerplate, and refactoring go to Codex; research and docs go to Gemini; security and architecture stay on Claude. P0 tasks are never dispatched.
```
Task created (Open)
  -> Agent decides Codex is better suited (InProgress, agent=codex)
  -> Codex works on branch task-1234-codex
  -> Codex outputs DISPATCH_RESULT with model=gpt-4.1
  -> Task moves to Review with branch + summary
  -> Claude or human verifies -> Done
```

Supported backends
| Backend | Install | Auth | Dispatch command |
|---------|---------|------|-----------------|
| OpenAI Codex | npm i -g @openai/codex | codex auth login | /codex:dispatch |
| Google Gemini | npm i -g @google/gemini-cli | gcloud auth or GOOGLE_API_KEY | /gemini:dispatch |
| Ollama | ollama.com/download | None (local) | /local:dispatch |
| vLLM | pip install vllm | None (local) | /local:dispatch |
The official Codex plugin for Claude Code is also supported — install it via claude plugin marketplace add openai/codex-plugin-cc for /codex:review, /codex:rescue, and background job management. 4iops captures activity from the official plugin automatically.
Role-based config
Assign models to 9 development roles via /model-roles or the Settings page:
| Role | Default | Speed Preset | Multi-Brain Preset |
|------|---------|--------------|--------------------|
| Planner | Claude Opus | Claude Opus | Claude Opus |
| Coder | Claude Sonnet | Codex | Codex |
| Reviewer | Claude Sonnet | Claude Sonnet | Gemini |
| Tester | Claude Sonnet | Codex | Codex |
| Security | Claude Opus | Claude Opus | Claude Opus |
4 presets: Claude Only, Speed, Cost Saver, Multi Brain. Full custom config available.
Review gate
A Stop hook enables cross-model peer review. Claude writes code, a second model reviews it, Claude fixes issues — automatic loop with budget guard (max 3 cycles). Enable with IOPS_REVIEW_GATE=1.
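The loop described above can be sketched as a small control function. The interfaces here are illustrative assumptions; the real wiring lives inside 4iops's Stop hook, and only the 3-cycle budget guard comes from the README.

```typescript
// Minimal sketch of a write -> review -> fix loop with a cycle budget.
type Review = { approved: boolean; issues: string[] };

async function reviewGate(
  writeCode: (issues: string[]) => Promise<void>, // Claude writes or fixes code
  reviewCode: () => Promise<Review>,              // second model reviews the result
  maxCycles = 3,                                  // budget guard from the README
): Promise<boolean> {
  let issues: string[] = [];
  for (let cycle = 0; cycle < maxCycles; cycle++) {
    await writeCode(issues);           // apply fixes for issues from the last review
    const review = await reviewCode();
    if (review.approved) return true;  // gate passes, session may stop
    issues = review.issues;            // feed findings into the next cycle
  }
  return false; // budget exhausted: stop looping and surface to the user
}
```

The budget guard matters: without a cycle cap, two models that disagree could loop indefinitely and burn spend.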
Auto-detection
At session start, 4iops detects which model CLIs are installed and authed. No manual configuration — if Codex is on your PATH, it's available. Project enrolment (/enrol-project) also prompts to install companion plugins.
Dashboard
30+ page web dashboard at http://localhost:37778:
Project-scoped views — Overview, Sessions, Bugs, Tasks, Plans, Documents, Observations, Secrets, Deploys, Services, Architecture
Cross-project views — Products (multi-repo roll-ups), Projects, Roadmaps, Cost Analytics, Intelligence, Activity Feed, Nexus (knowledge graph), Audit Log
Product management — Group projects into Products with documentation templates (Required/Good/Great tiers), completeness scoring, Roadmap/Wave hierarchy, and cross-repo blocker visibility.
Model health — Dashboard shows which AI models are available, their status (ready/rate-limited/not installed), and per-provider spend.
CLI
```shell
npx @4iops/core <command>
```

| Command | Description |
|---------|-------------|
| init | Interactive setup wizard (first-time self-hosted setup) |
| start | Start server (auto-runs setup on first use) |
| start -d | Start as background daemon |
| start --verbose | Start with detailed server output |
| connect <key> | Connect to 4iops.com hosted (single command setup) |
| stop | Stop the server |
| status | Show server and project status |
| doctor | Run diagnostic checks |
| push | Push local plugin/hooks to installed profile |
| migrate | Run database migrations |
| reset | Reset database (destructive) |
| autostart | Configure server to start on login |
| backfill-memory | Backfill existing data into the memory index |
Database modes
4iops picks the best available database automatically:
- PGlite (default) — Embedded WASM Postgres, zero config, no Docker
- Docker — Auto-managed Postgres container if PGlite fails
- External Postgres — Set DATABASE_URL for your own instance
Logging
Server output is always written to ~/.4iops/server.log. In foreground mode, output also appears in the terminal. Use --verbose with -d to tail the log after detaching.
MCP Tools
106 tools exposed to Claude via the Model Context Protocol:
Bugs — add_bug, query_bugs, search_bugs, update_bug, batch_update_bugs
Tasks — add_task, query_tasks, search_tasks, update_task, batch_update_tasks
Plans — create_plan, query_plans, update_plan, get_plan
Products — create_product, query_products, get_product_dashboard, add_project_to_product, get_product_blockers, delete_product, remove_project_from_product
Documents — save_doc, search_docs, get_doc, get_doc_outline, get_doc_section
Secrets — get_secret, use_secret, save_secret, list_secrets, search_secrets
Context — get_context, get_observations, observation_stats, timeline, refresh_context
Intelligence — smart_search, smart_outline, smart_unfold, intelligence_insights, intelligence_stats, intelligence_run_job, intelligence_health
Agents — list_agents, get_agent, register_agent, update_agent, sync_agents_to_local
Waves — create_wave, get_wave, list_waves, update_wave
Roadmaps — create_roadmap, get_roadmap, query_roadmaps, update_roadmap
Multi-Model — dispatch_to_model, get_product_template, init_product_template
Agent Sessions — query_agent_sessions, get_agent_session
Templates — query_templates, get_template
Plus — budgets, notifications, deploy status, cost reports, session replay, checkpoints, model health, and more.
Configuration
Everything works out of the box. Optional config via ~/.4iops.json:
```json
{
  "url": "http://localhost:37778",
  "apiKey": "your-api-key"
}
```

Environment variables
| Variable | Default | Description |
|----------|---------|-------------|
| IOPS_URL | http://localhost:37778 | Server URL |
| IOPS_API_KEY | (none) | API auth key (empty = dev mode, no auth) |
| IOPS_MASTER_KEY | (auto-generated) | Secrets encryption key |
| DATABASE_URL | (PGlite) | PostgreSQL connection string |
| PORT | 37778 | Server port |
Intelligence (optional)
Background AI jobs analyze your data — bug triage, session summaries, cost optimization, and more. Configure any combination:
| Variable | Provider | Cost |
|----------|----------|------|
| INTELLIGENCE_LLM_URL | Local vLLM/Ollama | Free |
| GOOGLE_API_KEY | Gemini Flash | ~$0.0001/job |
| OPENAI_API_KEY | GPT-4o-mini | ~$0.0002/job |
| ANTHROPIC_API_KEY_IOPS | Haiku/Sonnet | ~$0.001/job |
The system routes each job to the cheapest available provider automatically. No providers configured? The core ops engine still works — intelligence is simply disabled.
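Cheapest-available routing can be sketched in a few lines. The per-job costs and environment variable names come from the table above; the selection logic itself is an assumption about how such a router might work, not 4iops's actual implementation.

```typescript
// Sketch: pick the cheapest provider whose credential env var is set.
type Provider = { name: string; envVar: string; costPerJob: number };

const providers: Provider[] = [
  { name: "local", envVar: "INTELLIGENCE_LLM_URL", costPerJob: 0 },
  { name: "gemini", envVar: "GOOGLE_API_KEY", costPerJob: 0.0001 },
  { name: "openai", envVar: "OPENAI_API_KEY", costPerJob: 0.0002 },
  { name: "anthropic", envVar: "ANTHROPIC_API_KEY_IOPS", costPerJob: 0.001 },
];

function pickProvider(
  env: Record<string, string | undefined>,
): Provider | null {
  const available = providers
    .filter((p) => env[p.envVar]) // only providers with credentials configured
    .sort((a, b) => a.costPerJob - b.costPerJob);
  return available[0] ?? null; // null means intelligence is disabled gracefully
}

console.log(pickProvider({ GOOGLE_API_KEY: "x" })?.name); // → gemini
```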
4iops
4iops is the engine behind 4iops.com — a cross-framework ops platform for every AI agent framework.
Integration paths
Plugin-first (primary) — Claude Code users get multi-model dispatch via built-in skills:
/codex:dispatch /gemini:dispatch /local:dispatch /model-roles

REST adapters (secondary) — for non-Claude-Code frameworks:
| Package | Framework | Install |
|---------|-----------|---------|
| @4iops/vercel-ai | Vercel AI SDK | npm i @4iops/vercel-ai |
| 4iops-tools | LangChain, LangGraph, CrewAI | pip install 4iops-tools |
| 4iops-openai | OpenAI Agents SDK | pip install 4iops-openai |
| @4iops/connect | 4iops/connect | npm i @4iops/connect |
Adapters work against self-hosted (localhost:37778) or hosted (api.4iops.com). Same API, same data model.
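To illustrate "same API, same data model", here is a hedged sketch of building a request an adapter might send. The `/api/tasks` path and payload fields are assumptions for illustration, not a documented endpoint; only the two base URLs come from this README.

```typescript
// Hypothetical request builder targeting either deployment mode.
// Endpoint path and payload shape are assumed, not documented.
function buildCreateTaskRequest(
  task: { title: string; priority?: string },
  baseUrl = "http://localhost:37778", // self-hosted default
  apiKey?: string,                    // required for hosted, optional in dev mode
) {
  return {
    url: `${baseUrl}/api/tasks`,
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
    },
    body: JSON.stringify(task),
  };
}

// Same call shape against hosted — only the base URL and key change.
const req = buildCreateTaskRequest(
  { title: "add tests" },
  "https://api.4iops.com",
  "your-api-key",
);
console.log(req.url); // → https://api.4iops.com/api/tasks
```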
Self-hosted vs hosted
| | Self-hosted | Hosted (4iops.com) |
|---|---|---|
| Install | npx @4iops/core start | npx @4iops/core connect <key> |
| Audience | Claude Code power users | Any framework, any model |
| Features | 70+ MCP tools + 30+ hooks | REST API + adapter tools |
| Cost | Free (MIT) | Free tier + Pro |
| Data | Your infrastructure | Managed cloud |
Architecture
```
@4iops/core
├── bin/4iops.js                CLI entry point
├── plugin/
│   ├── hooks/                  30+ event hooks (auto-installed)
│   ├── skills/                 27 slash commands (multi-model dispatch, ops, review)
│   └── scripts/mcp-server.cjs  MCP server (70+ tools)
├── dist/                       Compiled server
├── prisma/                     Schema + 45 migrations
│   └── schema.prisma           35+ models (Product > Roadmap > Plan > Wave > Task)
└── src/
    ├── cli/                    init, start, stop, status, hosted, doctor, migrate, reset
    ├── routes/                 42+ API routes + dashboard routes
    ├── views/                  30+ server-rendered HTML pages
    ├── intelligence/           55+ background AI jobs + scheduler
    ├── lib/                    Dispatch router, model roles, rate limits, cost pricing
    └── middleware/             Auth, rate limiting, validation
```

Stack: Node.js 20+, TypeScript, Express, Prisma, PostgreSQL/PGlite
Contributing
```shell
git clone https://github.com/4iops/4iops
cd 4iops
npm install
npx prisma generate
npx tsx src/index.ts   # Dev server
npx vitest             # Tests
```

See CONTRIBUTING.md for details.
