claude-orator-mcp
v0.2.0-beta.0
An MCP server for prompt optimization in Claude Code
A Model Context Protocol (MCP) server that optimizes prompts for Claude Code. Heuristic analysis, Anthropic technique selection, and structural rewriting — zero external dependencies, fully deterministic.
Orator is the rhetoric coach — Claude is the orator. The MCP provides deterministic heuristic analysis and technique selection; Claude does the actual rewriting with full context. Built on Anthropic's prompt engineering best practices: XML tags, multishot examples, chain-of-thought, structured output, role assignment, prefill, prompt chaining, and uncertainty permission.
what's new in 0.2.0
- Intent disambiguation — "You are an expert Rust dev... build me an app" now correctly resolves to `code`, not `system`. Fallback heuristics catch code blocks, "build me" phrasing, and debugging language.
- Claude 4.6 anti-patterns — 4 new detections: thoroughness backfire, imperative tool instructions, plan-sharing penalties, and suggest-framing traps.
- Context-first assembly — the template now front-loads `<context>` before `<task>`, matching Codex research on grounding-data ordering.
- Scorer overhaul — recalibrated dimension heuristics produce meaningful score jumps (avg +2.6, up from ~0.9).
- Structured output format — replaces the old prefill technique for Claude 4.6+ compatibility.
- 25 regression tests — comprehensive self-test suite covering all intent categories, anti-patterns, and edge cases.
install
Requirements: Node.js >= 20.0.0 (for `npx`).
From shell:

```shell
claude mcp add claude-orator-mcp -- npx claude-orator-mcp
```

From inside Claude (restart required):

```
Add this to our global mcp config: npx claude-orator-mcp
Install this mcp: https://github.com/Vvkmnn/claude-orator-mcp
```

From any manually configurable mcp.json (Cursor, Windsurf, etc.):
```json
{
  "mcpServers": {
    "claude-orator-mcp": {
      "command": "npx",
      "args": ["claude-orator-mcp"],
      "env": {}
    }
  }
}
```

No npm install is required — no external dependencies or databases, only deterministic heuristics.
However, if npx resolves the wrong package, you can force resolution with:
```shell
npm install -g claude-orator-mcp
```

skill
Optionally, install the skill to teach Claude when to proactively optimize prompts:
```shell
npx skills add Vvkmnn/claude-orator-mcp --skill claude-orator --global
# Optional: add --yes to skip interactive prompts and install to all agents
```

This makes Claude automatically optimize prompts before dispatching subagents, writing system prompts, or crafting any prompt worth improving. The MCP works without the skill, but the skill improves discoverability.
plugin
For automatic prompt optimization hooks and commands, install from the claude-emporium marketplace:
```
/plugin marketplace add Vvkmnn/claude-emporium
/plugin install claude-orator@claude-emporium
```

The claude-orator plugin provides:

Hooks (targeted, zero overhead on good prompts):
- `PreToolUse (Task)` — suggests optimization for under-specified subagent prompts: before any subagent dispatch, a quick heuristic score runs, and `orator_optimize` is suggested if the score is below 5.0

Command: `/reprompt-orator <prompt>` — manual prompt optimization
Requires the MCP server installed first. See the emporium for other Claude Code plugins and MCPs.
features
MCP server with a single tool. Prompt in, optimized prompt out.
orator_optimize
Analyze a prompt across 7 quality dimensions, auto-select from 11 Anthropic techniques, and return a structurally optimized scaffold with before/after scores.
```
orator_optimize prompt="Write a function that sorts users"
> Returns optimized scaffold with XML tags, output format, examples section

orator_optimize prompt="You are a helpful assistant" intent="system"
> Returns role-assigned system prompt with structure and constraints

orator_optimize prompt="Extract all emails from this text" techniques=["xml-tags", "few-shot"]
> Force-applies specific techniques regardless of auto-selection
```

Score meter (unique notification format — gradient fill bar):
```
🪶 3.2 ░░░▓▓▓▓▓▓▓▓ 7.8
+xml-tags +few-shot +structured-output · 3 issues
Wrapped in XML tags, added examples, specified output format
```

Three-zone bar: ░░░ (baseline) ▓▓▓▓▓ (improvement) ░░ (headroom to 10).
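The three-zone bar is simple character math over the 0-10 scale. A minimal sketch — `renderMeter` is a hypothetical name for illustration, not the package's internal function, and its rounding may differ slightly from the real renderer:

```typescript
// Render a 10-slot meter: baseline (░), improvement (▓), headroom (░).
// Scores are 0-10, so each slot represents roughly one point.
function renderMeter(before: number, after: number): string {
  const base = Math.round(Math.min(before, after));
  const gain = Math.round(after) - base;
  const headroom = Math.max(0, 10 - base - gain);
  const bar = "░".repeat(base) + "▓".repeat(gain) + "░".repeat(headroom);
  return `🪶 ${before.toFixed(1)} ${bar} ${after.toFixed(1)}`;
}
```

For example, `renderMeter(3.2, 7.8)` yields `🪶 3.2 ░░░▓▓▓▓▓░░ 7.8`.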
Minimal case (already well-structured):
```
🪶 ━━ already well-structured (8.4)
```

Input:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| prompt | string | Yes | The raw prompt to optimize |
| intent | enum | No | code \| analysis \| creative \| extraction \| conversation \| system (auto-detected) |
| target | enum | No | claude-code \| claude-api \| claude-desktop \| generic (default: claude-code) |
| techniques | string[] | No | Force-apply specific technique IDs |
Output:
| Field | Type | Description |
|-------|------|-------------|
| optimized_prompt | string | Rewritten prompt scaffold (primary output) |
| score_before | number | Quality score of original (0-10) |
| score_after | number | Quality score after optimization (0-10) |
| summary | string | 1-line explanation of improvements |
| detected_intent | string | Auto-detected intent category |
| applied_techniques | string[] | Technique IDs applied |
| issues | string[] | Detected problems |
| suggestions | string[] | Actionable fixes |
The optimized_prompt is a structural scaffold. Claude refines it with domain knowledge, codebase context, and conversation history.
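The tables above map to a result shape along these lines. The interface below is an illustration assembled from the output table, not a type the package exports:

```typescript
// Result shape, per the output table above.
interface OratorResult {
  optimized_prompt: string;     // rewritten prompt scaffold (primary output)
  score_before: number;         // quality score of original, 0-10
  score_after: number;          // quality score after optimization, 0-10
  summary: string;              // 1-line explanation of improvements
  detected_intent: string;      // auto-detected intent category
  applied_techniques: string[]; // technique IDs applied
  issues: string[];             // detected problems (flat strings)
  suggestions: string[];        // actionable fixes (flat strings)
}

// A hypothetical result for the "sorts users" example above:
const sample: OratorResult = {
  optimized_prompt: "<task>\nWrite a function that sorts users\n</task>",
  score_before: 3.2,
  score_after: 7.8,
  summary: "Wrapped in XML tags, added examples, specified output format",
  detected_intent: "code",
  applied_techniques: ["xml-tags", "few-shot", "structured-output"],
  issues: ["no output format specified"],
  suggestions: ["name the sort key"],
};
```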
methodology
How claude-orator-mcp works:
🪶 claude-orator-mcp
════════════════════
orator_optimize
──────────────
PROMPT
│
┌────────────┴────────────┐
▼ ▼
┌───────────┐ ┌────────────┐
│ Detect │ │ Measure │
│ Intent │ │ Complexity │
└─────┬─────┘ └──────┬─────┘
│ │
system > code > word count +
extraction > clause depth
analysis > │
creative > │
conversation │
+ disambiguation │
+ fallback heuristics │
│ │
└────────────┬────────────┘
│
▼
┌───────────────────┐
│ Score Before │
│ │
│ clarity 20% │ strong verbs, single task
│ specificity 20% │ named tech, constraints
│ structure 15% │ XML tags, headers, lists
│ examples 15% │ input/output pairs
│ constraints 10% │ scope, edge cases
│ output_fmt 10% │ format specification
│ efficiency 10% │ no filler, no redundancy
│ │
│ ░░░░░░░░░░ 3.2 │
└────────┬──────────┘
│
▼
┌───────────────────┐ techniques?
│ Select Techniques │◄──── (force override)
│ │
│ when_to_use() × │ 11 predicates
│ intent match × │ filtered
│ score gaps × │ sorted by impact
│ cap at 4 │
└────────┬──────────┘
│
▼
┌───────────────────┐
│ Template Assembly │
│ │
│ role preamble │ expert identity
│ → <context> │ grounding data first
│ → <task> │ XML-wrapped prompt
│ → <requirements> │ constraints + gaps
│ → <examples> │ multishot I/O pairs
│ → output format │ format specification
└────────┬──────────┘
│
▼
┌───────────────────┐
│ Score After │
│ │
│ ░░░▓▓▓▓▓▓▓░░ 7.8│
└────────┬──────────┘
│
▼
OUTPUT
optimized_prompt
+ scores + techniques
+ issues + suggestions
score meter (gradient fill bar):
─────────────────────────────────
🪶 3.2 ░░░▓▓▓▓▓▓▓▓ 7.8
+xml-tags +few-shot +structured-output
Wrapped in XML, added examples, format
░░░ baseline ▓▓▓ improvement ░░ headroom

7 quality dimensions (weighted scoring, deterministic):
| Dimension | Weight | Measures |
|-----------|--------|----------|
| Clarity | 20% | Strong verbs, single task, no hedging |
| Specificity | 20% | Named tech, numbers, constraints |
| Structure | 15% | XML tags, headers, lists |
| Examples | 15% | Input/output pairs, demonstrations |
| Constraints | 10% | Negative constraints, scope, edge cases |
| Output Format | 10% | Format spec, structure definition |
| Token Efficiency | 10% | No filler, no redundancy |
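The overall score is the weighted sum of per-dimension scores. A sketch of just the aggregation step, using the weights from the table (the real `scorePrompt` computes each dimension with richer heuristics than shown here):

```typescript
// Dimension weights from the table above; weights sum to 1.0.
const WEIGHTS: Record<string, number> = {
  clarity: 0.20,
  specificity: 0.20,
  structure: 0.15,
  examples: 0.15,
  constraints: 0.10,
  output_format: 0.10,
  token_efficiency: 0.10,
};

// Each dimension scores 0-10; the weighted sum is also 0-10.
function overallScore(dimensions: Record<string, number>): number {
  let total = 0;
  for (const [name, weight] of Object.entries(WEIGHTS)) {
    total += (dimensions[name] ?? 0) * weight;
  }
  return Math.round(total * 10) / 10; // one decimal place, e.g. 3.2
}
```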
11 Anthropic techniques (auto-selected based on intent, scores, and complexity):
| ID | Name | Auto-selected when |
|----|------|--------------------|
| chain-of-thought | Let Claude Think | Analysis intent, complex tasks |
| xml-tags | Use XML Tags | Long prompt + low structure score |
| few-shot | Multishot Examples | Low example score + extraction/code |
| role-assignment | System Prompts & Roles | System intent or low specificity |
| structured-output | Control Output Format | Low output format score |
| prefill | Structured Output Format | API target + extraction/code |
| prompt-chaining | Chain Complex Tasks | Complex + multiple subtasks |
| uncertainty-permission | Say "I Don't Know" | Analysis or extraction intent |
| extended-thinking | Extended Thinking | Complex + analysis/code intent |
| long-context-tips | Long Context | Long prompt (>2000 chars or >50 lines) |
| tool-use | Tool Use | Prompt mentions tool/function calling |
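Auto-selection can be modeled as predicates over the analysis, filtered, sorted by impact, and capped at 4. A simplified sketch — the predicates, impact values, and `Analysis` fields here are illustrative, not the package's actual rules:

```typescript
interface Analysis {
  intent: string;
  structureScore: number; // 0-10
  exampleScore: number;   // 0-10
  complex: boolean;
}

interface Technique {
  id: string;
  impact: number;                      // illustrative ranking weight
  whenToUse: (a: Analysis) => boolean; // auto-selection predicate
}

const TECHNIQUES: Technique[] = [
  { id: "xml-tags", impact: 3, whenToUse: a => a.structureScore < 5 },
  { id: "few-shot", impact: 2, whenToUse: a => a.exampleScore < 4 && ["extraction", "code"].includes(a.intent) },
  { id: "chain-of-thought", impact: 2, whenToUse: a => a.intent === "analysis" || a.complex },
  { id: "uncertainty-permission", impact: 1, whenToUse: a => ["analysis", "extraction"].includes(a.intent) },
];

// Filter by predicate, sort by impact, cap at 4.
function selectTechniques(a: Analysis): string[] {
  return TECHNIQUES.filter(t => t.whenToUse(a))
    .sort((x, y) => y.impact - x.impact)
    .slice(0, 4)
    .map(t => t.id);
}
```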
Core algorithms:
- Intent detection (`detectIntent`): priority-ordered regex patterns across 6 categories — `system > code > extraction > analysis > creative > conversation`. Includes disambiguation (e.g., `system` + `code` signals resolve to `code`) and fallback heuristics for code blocks, "build me" patterns, and debugging language.
- Heuristic scoring (`scorePrompt`): 7-dimension weighted analysis. Each dimension scores 0-10; the overall score is the weighted sum. Also generates flat `issues[]` and `suggestions[]` arrays.
- Technique selection (`selectTechniques`): each technique has a `when_to_use()` predicate. Auto-selected based on intent + scores + complexity, sorted by impact, capped at 4.
- Template assembly (`optimize`): builds a structural scaffold from the selected techniques. Context-first ordering: role → `<context>` → `<task>` → `<requirements>` → `<examples>` → output format.
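A reduced sketch of the priority-ordered intent matching. The patterns below are illustrative stand-ins; the real `detectIntent` uses many more patterns plus richer disambiguation:

```typescript
// Priority order matters: earlier categories win when several match.
const INTENT_PATTERNS: Array<[string, RegExp]> = [
  ["system", /you are (a|an) .*(assistant|agent)/i],
  ["code", /\b(function|debug|refactor|build me)\b|```/i],
  ["extraction", /\b(extract|parse|pull out)\b/i],
  ["analysis", /\b(analyze|compare|evaluate)\b/i],
  ["creative", /\b(story|poem|brainstorm)\b/i],
];

function detectIntent(prompt: string): string {
  const matches = INTENT_PATTERNS
    .filter(([, re]) => re.test(prompt))
    .map(([name]) => name);
  // Disambiguation: a role preamble plus code signals resolves to code.
  if (matches.includes("system") && matches.includes("code")) return "code";
  return matches[0] ?? "conversation"; // fallback category
}
```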
Design principles:
- Single tool — one entry point, minimal cognitive overhead
- Deterministic — same input = same output, no LLM calls, no network
- Scaffold, not final — the optimized prompt is structural; Claude adds substance
- Lean output — flat string arrays for issues/suggestions, no nested objects
- Weighted dimensions — clarity and specificity matter most (20% each)
- Technique cap — max 4 techniques per optimization (diminishing returns beyond)
- Anti-pattern detection — 10 Claude-specific anti-patterns including 4 for Claude 4.6 (thoroughness backfire, tool over-triggering, plan-sharing penalty, suggest framing)
- Zero external dependencies — only `@modelcontextprotocol/sdk` + `zod` at runtime
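The context-first assembly order can be sketched as a simple section concatenation. `assembleScaffold` and its options are illustrative names; the real `optimize` derives each section from the selected techniques and detected gaps:

```typescript
// Assemble the scaffold in context-first order:
// role → <context> → <task> → <requirements> → <examples> → output format.
function assembleScaffold(opts: {
  role?: string;
  context?: string;
  task: string;
  requirements?: string[];
  examples?: string;
  outputFormat?: string;
}): string {
  const parts: string[] = [];
  if (opts.role) parts.push(opts.role);
  if (opts.context) parts.push(`<context>\n${opts.context}\n</context>`);
  parts.push(`<task>\n${opts.task}\n</task>`);
  if (opts.requirements?.length)
    parts.push(`<requirements>\n${opts.requirements.map(r => `- ${r}`).join("\n")}\n</requirements>`);
  if (opts.examples) parts.push(`<examples>\n${opts.examples}\n</examples>`);
  if (opts.outputFormat) parts.push(`Output format: ${opts.outputFormat}`);
  return parts.join("\n\n");
}
```

Because grounding data comes first, a consumer sees `<context>` before `<task>` in every scaffold that includes context.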
alternatives
Every existing prompt optimization tool requires LLM calls, labeled datasets, or evaluation infrastructure. When you need structural improvement at zero latency — during CI/CD, before subagent dispatch, or offline — they cannot help.
| Feature | orator | DSPy | promptfoo | TextGrad | OPRO | LLMLingua | Anthropic Generator |
|---|---|---|---|---|---|---|---|
| Zero latency | Yes (<1ms) | No (LLM calls) | No (eval runs) | No (LLM calls) | No (LLM calls) | No (LLM calls) | No (LLM call) |
| Offline/airgapped | Yes | No | Partial | No | No | No | No |
| Deterministic | Yes | No | No | No | No | Partial | No |
| No labeled data | Yes | No (examples) | No (test cases) | No (feedback) | No (examples) | Yes | Yes |
| Claude-specific | Yes (anti-patterns) | No | No | No | No | No | Yes |
| MCP native | Yes | No | No | No | No | No | No |
| Structural scoring | 7 dimensions | None | Custom metrics | None | None | None | None |
| Dependencies | 0 (pure TS) | PyTorch + LLM | Node + LLM | PyTorch + LLM | LLM | PyTorch + LLM | LLM API |
DSPy — Stanford's framework for compiling LM programs with automatic prompt optimization. Requires labeled examples, LLM calls for optimization, and PyTorch. Optimizes for task accuracy, not structural quality. Latency: seconds to minutes per optimization. Use DSPy when you have labeled data and want to tune for a specific metric.
promptfoo — Test-driven prompt evaluation framework. Requires test cases, LLM calls for evaluation, and an evaluation dataset. Measures output quality, not prompt structure. Complementary: use Orator for structural scaffolding, then promptfoo to evaluate output quality.
TextGrad — Automatic differentiation via text feedback from LLMs. Requires LLM calls for both forward and backward passes. Research-oriented, PyTorch dependency. Latency: minutes. Use when iterating on prompt wording with measurable objectives.
OPRO — DeepMind's optimization by prompting: uses an LLM to iteratively rewrite prompts. Requires examples of good/bad outputs, multiple LLM calls per iteration. Latency: minutes. Use when exploring creative prompt variations with evaluation feedback.
LLMLingua — Microsoft's prompt compression via perplexity-based token removal. Reduces token count by 2-20x but requires a local LLM for perplexity scoring. Different goal: compression, not structural improvement. Use when context window is the bottleneck.
Anthropic Prompt Generator — Anthropic's own tool that generates prompts via Claude. Excellent quality but requires an LLM call, non-deterministic, and not available offline or via MCP. Use when you want Claude to write your prompt from scratch.
Orator's approach is deliberately different: structural analysis via deterministic heuristics. No LLM calls means no API keys, no latency variance, no cost per optimization, and identical results every run. The trade-off is that Orator optimizes prompt structure (clarity, specificity, constraints, format) rather than prompt wording — it can't tell you if your prompt produces good output, only that it's well-formed for Claude. This makes it complementary to evaluation tools like promptfoo: scaffold with Orator, then validate with eval.
development
```shell
git clone https://github.com/Vvkmnn/claude-orator-mcp && cd claude-orator-mcp
npm install && npm run build
npm test
```

Package requirements:
- Node.js: >=20.0.0 (ES modules)
- Runtime: `@modelcontextprotocol/sdk`, `zod`
- Zero external databases — works with `npx`
Development workflow:
```shell
npm run build          # TypeScript compilation with executable permissions
npm run dev            # Watch mode with tsc --watch
npm run start          # Run the MCP server directly
npm run lint           # ESLint code quality checks
npm run lint:fix       # Auto-fix linting issues
npm run format         # Prettier formatting (src/)
npm run format:check   # Check formatting without changes
npm run typecheck      # TypeScript validation without emit
npm run test           # Lint + type check
npm run prepublishOnly # Pre-publish validation (build + lint + format:check)
```

Git hooks (via Husky):
- pre-commit: auto-formats staged `.ts` files with Prettier and ESLint
Contributing:
- Fork the repository and create feature branches
- Follow TypeScript strict mode and MCP protocol standards
Learn from examples:
- Official MCP servers for reference implementations
- TypeScript SDK for best practices
- Anthropic prompt engineering docs for technique details
license
Cicero Denounces Catiline by Cesare Maccari (1889). "Quo usque tandem abutere, Catilina, patientia nostra?" [How long, Catiline, will you abuse our patience?] Claudius, once dismissed for his stammer, later addressed this same Senate — proof that the right words, well-structured, can move an empire.
