Promptly
Better prompts. Better code. First time.
Promptly is an MCP server that analyzes your codebase and refines your coding prompts before your AI agent acts on them. No extra API key. No separate model. Zero friction.
The intelligence is Claude (or your agent). Promptly is the context it was missing.
How It Works
You type a prompt
↓
Your AI agent calls Promptly's MCP tools
↓
Promptly scans your project (stack, conventions, structure, workspace, user rules)
↓
Promptly refines your prompt with real codebase context
↓
Your agent executes the refined, context-aware version

No external API call. No latency from a second model. Your agent just becomes more accurate when Promptly is connected.
Features
| Feature | What It Does |
|---------|-------------|
| Stack Detection | Reads package.json, tsconfig.json, go.mod, Cargo.toml, pyproject.toml, deno.json, etc. Detects framework, language, styling, ORM, test runner, package manager, runtime |
| Convention Analysis | Reads tool configs (.prettierrc, .editorconfig, ESLint) as ground truth, then samples code to infer file naming, export style, component pattern, quotes, semicolons, indentation, and test location — each with a confidence score |
| Structure Mapping | Walks your project, identifies key directories (components, hooks, utils, api, routes, stores, etc.), and surfaces a ranked slice of files for relevance scoring |
| Workspace Awareness | Detects npm / yarn / pnpm / Turborepo monorepos and narrows analysis to the sub-package the prompt is about via target_files hints |
| User Rules Prelude | Inlines your CLAUDE.md / .cursorrules / GEMINI.md / QWEN.md at the top of every refined prompt so your ground-truth rules always win |
| Intent-Aware Rewriting | Classifies each prompt as create / fix / refactor / explain / configure / test and rewrites with the conventions and constraints that actually fit that intent |
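The Convention Analysis row above pairs each inferred style rule with a confidence score. A minimal sketch of that idea (illustrative only, not Promptly's actual implementation): sample source lines, vote on a style, and report the majority share as the confidence.

```typescript
// Hypothetical sketch of confidence-scored convention inference:
// count quote characters across sampled lines and treat the
// majority fraction as the confidence for that convention.
type Convention = { value: string; confidence: number };

function inferQuoteStyle(sampleLines: string[]): Convention {
  let single = 0;
  let double = 0;
  for (const line of sampleLines) {
    single += (line.match(/'/g) ?? []).length;
    double += (line.match(/"/g) ?? []).length;
  }
  const total = single + double;
  if (total === 0) return { value: "unknown", confidence: 0 };
  const value = single >= double ? "single" : "double";
  return { value, confidence: Math.max(single, double) / total };
}

const quotes = inferQuoteStyle([
  `import x from 'x'`,
  `const a = 'hello'`,
  `const b = "mixed"`,
]);
// Only inject the convention when it clears the 0.5 confidence gate.
const inject = quotes.confidence > 0.5;
```

A convention that barely wins the vote (say 0.51 vs 0.49) carries little signal, which is why a threshold gate matters before injecting it into a prompt.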
Quick Start
Step 1: Install
```
npm install -g @promptly-ai/cli
```
Step 2: Run the setup wizard
```
promptly init
```
This will:
- Ask which AI agent you're using (Claude Code, Cursor, Gemini CLI, or Qwen Code)
- Ask whether to enable Promptly globally (all projects) or just the current project
- Automatically configure the MCP server and instruction file for your agent
Step 3: Restart your agent
That's it. Your agent will now call Promptly before acting on any coding prompt.
Don't want to install globally? Use `npx @promptly-ai/cli init` instead — works the same way.
Verify it's working
```
promptly status
```
This shows which agents are configured and where the MCP config + instruction files are located.
CLI Commands
```
promptly init              # Set up Promptly (Claude Code, Cursor, Gemini CLI, or Qwen Code)
promptly mcp               # Start MCP server (called automatically by your agent)
promptly status            # Check which agents are configured
promptly doctor            # Validate wiring (MCP config parses, command resolves, instructions present)
promptly inspect [path]    # Print what analyzeCodebase sees for a project (add --json for jq)
promptly rules [agent]     # Print refinement rules (claude_code|cursor|gemini_cli|qwen_code|generic)
promptly --version         # Print version
```
`status` and `doctor` also accept `--json` for scripting. `doctor --strict` exits 1 on warnings (for CI gating). `inspect` accepts `--agent <id>` and `--hints <paths>` to preview what monorepo narrowing will pick.
MCP Tools
refine_prompt
The main tool. Detects intent, analyzes your codebase, and returns a rewritten prompt with project context baked in — not appended as footnotes.
Inputs: raw_prompt, project_path, optional agent, optional target_files (paths the prompt is about — used for monorepo routing and relevance scoring), optional context_files (files the agent currently has open).
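To make the input shape concrete, here is a sketch of what a `refine_prompt` call's arguments might look like from the agent side, based on the inputs listed above. The interface name and the file paths are hypothetical.

```typescript
// Hypothetical argument shape for refine_prompt, mirroring the
// documented inputs (raw_prompt, project_path, agent, target_files,
// context_files). Field names match the docs; paths are made up.
interface RefinePromptArgs {
  raw_prompt: string;
  project_path: string;
  agent?: string;
  target_files?: string[]; // monorepo routing + relevance boost
  context_files?: string[]; // files the agent currently has open
}

const args: RefinePromptArgs = {
  raw_prompt: "Add a LoginForm component",
  project_path: "/repo",
  agent: "claude_code",
  target_files: ["packages/web/src/components/LoginForm.tsx"],
  context_files: ["packages/web/src/lib/auth.ts"],
};
```

Passing `target_files` into a sub-package is what lets the monorepo scoping described below narrow analysis to the right workspace.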
Caching: Each analysis is cached in-memory for 30 minutes and persisted to .promptly/cache.json. The cache key includes the analysis root + agent, and the fingerprint of package.json + tsconfig.json — edit either and the cache invalidates automatically.
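The cache-key idea above can be sketched as follows. The hashing scheme and key format here are assumptions for illustration, not Promptly's actual code: the key combines the analysis root, the agent, and a fingerprint of `package.json` + `tsconfig.json`, so editing either manifest produces a new key and the stale entry is never read.

```typescript
// Hypothetical sketch of manifest-fingerprint cache keys:
// hash the two manifests' contents and fold the digest into the key.
import { createHash } from "node:crypto";

function cacheKey(
  root: string,
  agent: string,
  packageJson: string,
  tsconfigJson: string,
): string {
  const fingerprint = createHash("sha256")
    .update(packageJson)
    .update(tsconfigJson)
    .digest("hex")
    .slice(0, 16);
  return `${root}::${agent}::${fingerprint}`;
}

const before = cacheKey("/repo", "claude_code", `{"name":"app"}`, "{}");
const after = cacheKey(
  "/repo",
  "claude_code",
  `{"name":"app","type":"module"}`,
  "{}",
);
// Editing package.json changes the fingerprint, so the old cache
// entry is simply never looked up again.
```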
Example output for a create intent in a Next.js + Tailwind project:
```
Add a LoginForm component (using Next.js 14.1.0, TypeScript, styled with Tailwind CSS).
Place files in src/components. Relevant existing files: src/components/AuthLayout.tsx,
src/lib/auth.ts. Use kebab-case file names, named exports, single quotes, no semicolons.
Add a colocated test file using Vitest. Do not install new packages unless explicitly requested.

---

[Promptly] intent: create
```
For `fix`, the rewrite skips convention injection and instead constrains the change ("Touch only the files necessary for the fix. Do not refactor surrounding code…"). For `explain`, the user's question is preserved verbatim and a key-areas map is added above it. See `promptly rules <agent>` for the per-intent rewrite rules.
get_refinement_rules
Returns the current ruleset. Only called if the user asks how Promptly works.
Supported Agents
| Agent | promptly init | Rules | Codebase Analysis |
|-------|-----------------|-------|-------------------|
| Claude Code | ✔ | Full agent-specific rules | Full |
| Cursor | ✔ | Agent-specific rules | Full |
| Gemini CLI | ✔ | Agent-specific rules | Full |
| Qwen Code | ✔ | Agent-specific rules | Full |
Setup Details
| Agent | MCP Config | Instruction File |
|-------|-----------|-----------------|
| Claude Code | ~/.claude/settings.json | CLAUDE.md (global or project) |
| Cursor | .cursor/mcp.json (global or project) | .cursorrules (project) |
| Gemini CLI | ~/.gemini/settings.json (global or project) | GEMINI.md (global or project) |
| Qwen Code | ~/.qwen/settings.json (global or project) | QWEN.md (global or project) |
Architecture
```
promptly/
├── src/
│   ├── analyzer/   # Codebase analysis (stack, conventions, structure, workspace, userRules)
│   ├── bin/        # CLI entrypoint
│   ├── cli/        # CLI commands (init, status, doctor, inspect, rules)
│   ├── mcp/        # MCP server, tool definitions, disk cache
│   └── rules/      # Intent detection + prompt rewriter
├── package.json
├── tsconfig.json
└── tsup.config.ts
```
Development
```
git clone https://github.com/MuhammadUsmanGM/promptly.git
cd promptly
npm install
npm run build
```
Test the CLI:
```
node dist/bin/promptly.js --help
node dist/bin/promptly.js rules claude_code
```
How Refinement Works
Promptly rewrites the prompt — it doesn't just append rules. Each refinement runs through:
- Intent Detection — Weighted regex scoring classifies the prompt as `create`, `fix`, `refactor`, `explain`, `configure`, `test`, or `generic`. Strong signals ("bug", "configure ESLint") outrank weak ones ("add") so intent doesn't hinge on word order.
- Analysis — Stack, conventions, structure, monorepo layout, and user rules are gathered in parallel. Tool configs (`.prettierrc`, `.editorconfig`, ESLint) are treated as ground truth; sampling fills in what configs don't cover.
- User Rules Prelude — If a `CLAUDE.md` / `.cursorrules` / `GEMINI.md` / `QWEN.md` exists (project or global), its content is inlined at the top of the refined prompt so your rules always win over inferred conventions.
- Monorepo Scoping — If the project is a monorepo and `target_files` point into a sub-package, the analysis is narrowed to that package. Otherwise the rewrite notes that context is from the repo root and suggests passing hints.
- Intent-Specific Rewrite — Each intent produces a different shape. `create` injects stack, file location, conventions, test runner, and the no-new-packages guardrail. `fix` adds stack context, constrains to minimal changes, preserves tests. `refactor` bakes in code style but not file naming. `explain` preserves the question verbatim and prepends a key-areas map. `configure` adds framework + package manager and points to the config directory. `test` anchors on the detected test runner and enforces test-location conventions.
- File Relevance Scoring — Keyword extraction + signal boosts (`target_files` = 5, `context_files` = 3, recent git history = 2) rank project files; the top 8 are surfaced to the agent.
- Convention Confidence Gate — Only conventions above a 0.5 confidence threshold are injected, so low-signal style rules don't override judgment.
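The intent-detection step above can be sketched as a weighted-regex scorer. The patterns and weights below are illustrative, not Promptly's real rules; the point is that a strong signal ("bug") beats a weak one ("add") regardless of word order.

```typescript
// Hypothetical weighted-regex intent scoring: each matching signal
// adds its weight to that intent's score; the highest score wins.
type Intent =
  | "create" | "fix" | "refactor" | "explain"
  | "configure" | "test" | "generic";

const signals: Array<{ intent: Intent; pattern: RegExp; weight: number }> = [
  { intent: "fix", pattern: /\b(bug|broken|crash|error)\b/i, weight: 3 },
  { intent: "configure", pattern: /\b(configure|eslint|setup)\b/i, weight: 3 },
  { intent: "create", pattern: /\b(create|new|build)\b/i, weight: 2 },
  { intent: "create", pattern: /\badd\b/i, weight: 1 }, // weak signal
  { intent: "test", pattern: /\b(test|spec|coverage)\b/i, weight: 2 },
  { intent: "explain", pattern: /\b(explain|how does|why)\b/i, weight: 2 },
];

function detectIntent(prompt: string): Intent {
  const scores = new Map<Intent, number>();
  for (const s of signals) {
    if (s.pattern.test(prompt)) {
      scores.set(s.intent, (scores.get(s.intent) ?? 0) + s.weight);
    }
  }
  let best: Intent = "generic";
  let bestScore = 0;
  for (const [intent, score] of scores) {
    if (score > bestScore) {
      best = intent;
      bestScore = score;
    }
  }
  return best;
}
```

With these toy weights, "add a test for the login bug" classifies as `fix` (weight 3) even though it also matches the weak `create` signal "add" (weight 1) and the `test` signal (weight 2).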
Contributing
Contributions are welcome! Please read the Contributing Guide before opening a pull request.
See CHANGELOG.md for release history.
License
© Muhammad Usman — MIT License
