limps
Local Intelligent MCP Planning Server — A document and planning layer for AI assistants. No subscriptions, no cloud. Point limps at any folder (local, synced, or in git). One shared source of truth across Claude, Cursor, Codex, and any MCP-compatible tool.

Table of Contents
- Quick Start
- Features
- How I Use limps
- How You Can Use It
- Why limps?
- Installation
- Project Setup
- Client Setup
- Transport
- CLI Commands
- Configuration
- Environment Variables
- MCP Tools
- Skills
- Extensions
- Obsidian Compatibility
- Development
- Used in Production
- Creating a feature plan
- Deep Dive
- Instructions
- What is MCP?
- License
Quick Start
# Install globally
npm install -g @sudosandwich/limps
# Initialize a project
limps init my-project --docs-path ~/Documents/my-planning-docs
# Add to your AI assistant (picks up all registered projects)
limps config sync-mcp --client cursor
limps config sync-mcp --client claude-code
Run these commands in the folder where you want to keep the docs and that's it. Your AI assistant now has access to your documents and nothing else. The folder can be anywhere—local, synced, or in a repo; limps does not require a git repository or a plans/ directory.
Features
- Document CRUD + full-text search across any folder of Markdown files
- Plan + agent workflows with status tracking and task scoring
- Next-task suggestions with score breakdowns and bias tuning
- Sandboxed document processing via process_doc(s) helpers
- Multi-client sync for Cursor, Claude, Codex, and more
- Extensions for domain-specific tooling (e.g., limps-headless)
What to know before you start
- Local only — Your data stays on disk (SQLite index + your files). No cloud, no subscription.
- Restart after changes — If you change the indexed folder or config, restart the MCP server (or rely on the file watcher) so the index and tools reflect the current state.
- Sandboxed user code — process_doc and process_docs run your JavaScript in a QuickJS sandbox with time and memory limits; no network or Node APIs.
- One optional network call — limps version --check fetches from the npm registry to compare versions. All other commands (serve, init, list, search, create/update/delete docs, process_doc, etc.) do not contact the internet. Omit version --check if you want zero external calls.
How I Use limps
I use limps as a local planning layer across multiple AI tools, focused on create → read → update → closure for plans and tasks. The MCP server points at whatever directory I want (not necessarily a git repo), so any client reads and updates the same source of truth.
Typical flow:
- Point limps at a docs directory (any folder, local or synced).
- Use CLI + MCP tools to create plans/docs, read the current status, update tasks, and close work when done.
- Sync MCP configs so Cursor/Claude/Codex all see the same plans.
Commands and tools I use most often:
- Create: limps init, create_plan, create_doc
- Read: list_plans, list_agents, list_docs, search_docs, get_plan_status
- Update: update_doc, update_task_status, manage_tags
- Close: update_task_status (e.g., PASS), delete_doc if needed
Full lists are below in "CLI Commands" and "MCP Tools."
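For concreteness, here is what that flow can look like as MCP tool calls. This is a minimal sketch: the tool names come from the lists below, but the argument names and shapes are assumptions, not the actual limps schemas.
// Illustrative flow expressed as data; argument names ("name", "plan", "agent",
// "status", "query") are assumptions, not the real tool input schemas.
const flow = [
  // Create
  { tool: 'create_plan', arguments: { name: '0007-example-feature', description: 'Example feature' } },
  // Read
  { tool: 'get_plan_status', arguments: { plan: '0007-example-feature' } },
  { tool: 'search_docs', arguments: { query: 'example feature' } },
  // Update
  { tool: 'update_task_status', arguments: { plan: '0007-example-feature', agent: '000_agent_setup', status: 'WIP' } },
  // Close
  { tool: 'update_task_status', arguments: { plan: '0007-example-feature', agent: '000_agent_setup', status: 'PASS' } },
];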
How You Can Use It
limps is designed to be generic and portable. Point it at any folder with Markdown files and use it from any MCP-compatible client; no git repo required. Planning (plans, agents, task status) is just one use case: the same layer gives you document CRUD, full-text search, and programmable processing on any indexed folder.
Common setups:
- Single project: One docs folder for a product.
- Multi-project: Register multiple folders and switch with limps config use.
- Shared team folder: Put plans in a shared location and review changes like code.
- Local-first: Keep everything on disk, no hosted service required.
Key ideas:
- Any folder — You choose the path; if there's no plans/ subdir, the whole directory is indexed. Use generic tools (list_docs, search_docs, create_doc, update_doc, delete_doc, process_doc, process_docs) or plan-specific ones (create_plan, list_plans, list_agents, get_plan_status, update_task_status, get_next_task).
- One source of truth — MCP tools give structured access; multiple clients share the same docs.
Why limps?
The problem: Each AI assistant maintains its own context. Planning documents, task status, and decisions get fragmented across Claude, Cursor, ChatGPT, and Copilot conversations.
The solution: limps provides a standardized MCP interface that any tool can access. Your docs live in one place—a folder you choose. Use git (or any sync) if you want version control; limps is not tied to a repository.
Supported Clients
| Client | Config Location | Command |
|--------|----------------|---------|
| Cursor | .cursor/mcp.json (local) | limps config sync-mcp --client cursor |
| Claude Code | .mcp.json (local) | limps config sync-mcp --client claude-code |
| Claude Desktop | Global config | limps config sync-mcp --client claude --global |
| OpenAI Codex | ~/.codex/config.toml | limps config sync-mcp --client codex --global |
| ChatGPT | Manual setup | limps config sync-mcp --client chatgpt --print |
Note: By default, sync-mcp writes to local/project configs. Use --global for user-level configs.
Installation
npm install -g @sudosandwich/limps
Project Setup
Initialize a New Project
limps init my-project --docs-path ~/Documents/my-planning-docs
This creates a config file and outputs setup instructions.
Register an Existing Directory
limps config add my-project ~/Documents/existing-docs
If the directory contains a plans/ subdirectory, limps uses it. Otherwise, it indexes the entire directory.
Multiple Projects
# Register multiple projects
limps init project-a --docs-path ~/docs/project-a
limps init project-b --docs-path ~/docs/project-b
# Switch between them
limps config use project-a
# Or use environment variable
LIMPS_PROJECT=project-b limps list-plans
Client Setup
Automatic (Recommended)
# Add all projects to a client's local config
limps config sync-mcp --client cursor
# Preview changes without writing
limps config sync-mcp --client cursor --print
# Write to global config instead of local
limps config sync-mcp --client cursor --global
# Custom config path
limps config sync-mcp --client cursor --path ./custom-mcp.json
Manual Setup
For Cursor, add to .cursor/mcp.json in your project:
{
"mcpServers": {
"limps": {
"command": "limps",
"args": ["serve", "--config", "/path/to/config.json"]
}
}
}
For Claude Code, add to .mcp.json in your project root:
{
"mcpServers": {
"limps": {
"command": "limps",
"args": ["serve", "--config", "/path/to/config.json"]
}
}
}
Claude Desktop runs in a sandbox—use npx instead of global binaries.
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"limps": {
"command": "npx",
"args": ["-y", "@sudosandwich/limps", "serve", "--config", "/path/to/config.json"]
}
}
}
On Windows, use cmd /c to run npx:
{
"mcpServers": {
"limps": {
"command": "cmd",
"args": ["/c", "npx", "-y", "@sudosandwich/limps", "serve", "--config", "C:\\path\\to\\config.json"]
}
}
}
For OpenAI Codex, add to ~/.codex/config.toml:
[mcp_servers.limps]
command = "limps"
args = ["serve", "--config", "/path/to/config.json"]ChatGPT requires a remote MCP server over HTTPS. Deploy limps behind an MCP-compatible HTTP/SSE proxy.
In ChatGPT → Settings → Connectors → Add custom connector:
- Server URL: https://your-domain.example/mcp
- Authentication: Configure as needed
Print setup instructions:
limps config sync-mcp --client chatgpt --print
Transport
- Current: stdio (local MCP server, launched by your client).
- Remote clients: Use an MCP-compatible proxy for HTTPS clients (e.g., ChatGPT).
- Roadmap: SSE/HTTP transports are planned but not implemented yet.
CLI Commands
Viewing Plans
limps list-plans # List all plans with status
limps list-agents <plan> # List agents in a plan
limps status <plan> # Show plan progress summary
limps next-task <plan> # Get highest-priority available task
Project Management
limps init <name> # Initialize new project
limps serve # Start MCP server
limps config list # Show registered projects
limps config use <name> # Switch active project
limps config show # Display current config
limps config sync-mcp # Add projects to MCP clients
Configuration
Config location varies by OS:
| OS | Path |
|----|------|
| macOS | ~/Library/Application Support/limps/config.json |
| Linux | ~/.config/limps/config.json |
| Windows | %APPDATA%\limps\config.json |
Config Options
{
"plansPath": "~/Documents/my-plans",
"docsPaths": ["~/Documents/my-plans"],
"fileExtensions": [".md"],
"dataPath": "~/Library/Application Support/limps/data",
"extensions": ["@sudosandwich/limps-headless"],
"tools": {
"allowlist": ["list_docs", "search_docs"]
},
"scoring": {
"weights": { "dependency": 40, "priority": 30, "workload": 30 },
"biases": {}
}
}
| Option | Description |
|--------|-------------|
| plansPath | Directory for structured plans (NNNN-name/ with agents) |
| docsPaths | Additional directories to index |
| fileExtensions | File types to index (default: .md) |
| dataPath | SQLite database location |
| tools | Tool allowlist/denylist filtering |
| extensions | Extension packages to load |
| scoring | Task prioritization weights and biases |
Environment Variables
| Variable | Description | Example |
|---|---|---|
| LIMPS_PROJECT | Select active project for CLI commands | LIMPS_PROJECT=project-b limps list-plans |
| LIMPS_ALLOWED_TOOLS | Comma-separated allowlist; only these tools are registered | LIMPS_ALLOWED_TOOLS="list_docs,search_docs" |
| LIMPS_DISABLED_TOOLS | Comma-separated denylist; tools to hide | LIMPS_DISABLED_TOOLS="process_doc,process_docs" |
Precedence: config.tools overrides env vars. If allowlist is set, denylist is ignored.
MCP Tools
limps exposes 15 MCP tools for AI assistants:
| Category | Tools |
|----------|-------|
| Documents | process_doc, process_docs, create_doc, update_doc, delete_doc, list_docs, search_docs, manage_tags, open_document_in_cursor |
| Plans | create_plan, list_plans, list_agents, get_plan_status |
| Tasks | get_next_task, update_task_status |
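Any MCP-compatible client invokes these over stdio with the standard tools/call request. A minimal sketch of such a request as a JavaScript object; the argument name query for search_docs is an assumption, so check the tool's actual input schema:
// Standard MCP "tools/call" request shape; only the arguments object is limps-specific.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'search_docs',
    arguments: { query: 'authentication' }, // assumed parameter name
  },
};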
Skills
This repo includes a limps planning skill for AI IDEs in skills/limps-planning.
Install from GitHub:
npx skills add paulbreuler/limps/skills/limps-planning
The skill focuses on selecting the right limps tools for common planning workflows.
Extensions
Extensions add MCP tools and resources. Install from npm:
npm install -g @sudosandwich/limps-headless
Add to config:
{
"extensions": ["@sudosandwich/limps-headless"],
"limps-headless": {
"cacheDir": "~/Library/Application Support/limps-headless"
}
}
Available extensions:
- @sudosandwich/limps-headless — Headless UI contract extraction, semantic analysis, and drift detection (Radix UI and Base UI migration).
Obsidian Compatibility
limps works with Obsidian vaults. Open your plans/ directory as a vault for visual editing:
- Full YAML frontmatter support
- Tag management (frontmatter and inline #tag)
- Automatic exclusion of .obsidian/, .git/, node_modules/

Development
git clone https://github.com/paulbreuler/limps.git
cd limps
npm install
npm run build
npm test
This is a monorepo with:
- packages/limps — Core MCP server
- packages/limps-headless — Headless UI extension (Radix/Base UI contract extraction and audit)
Used in Production
limps manages planning for runi, using a separate folder (in this case a git repo) for plans.
Creating a feature plan
This flow is driven by the create-feature-plan command, which you can find in claude/commands along with other useful commands and skills; the same steps can be followed manually with MCP tools. The docs path is whatever folder limps is pointed at (any directory, not necessarily a repo).
1. Gather context — Project name and scope, work type (refactor | overhaul | features), tech stack, prototype/reference docs, known gotchas.
2. Create planning docs — Use MCP: list_docs on plans/ to get the next plan number (max existing + 1); create_plan with name NNNN-descriptive-name and a short description; create_doc for {plan-name}-plan.md (full specs), interfaces.md, README.md, gotchas.md (template). Use template none for plan/interfaces/README, addendum for gotchas if available. See the sketch after this list for the numbering step.
3. Assign features to agents — Group by file ownership and dependencies; 2–4 features per agent; minimize cross-agent conflicts.
4. Distill agent files — For each agent, create_doc at plans/NNNN-name/agents/NNN_agent_descriptive-name.agent.md (template none). Extract from the plan: feature IDs + TL;DRs, interface contracts, files to create/modify, test IDs, TDD one-liners, brief gotchas. Target ~200–400 lines per agent.
5. Validate — Agent files self-contained; interfaces consistent; dependency graph and file ownership correct; each agent file <500 lines.
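A sketch of the numbering step in item 2: given document paths like the ones list_docs might return (the shape here is an assumption), compute the next NNNN prefix as max existing + 1.
// Hypothetical input shape; adjust to whatever list_docs actually returns.
const existing = ['plans/0001-auth/README.md', 'plans/0002-billing/README.md'];

function nextPlanName(paths, slug) {
  const nums = paths
    .map((p) => p.match(/plans\/(\d{4})-/)) // pull the NNNN prefix, if any
    .filter(Boolean)
    .map((m) => parseInt(m[1], 10));
  const next = (nums.length ? Math.max(...nums) : 0) + 1;
  return `${String(next).padStart(4, '0')}-${slug}`;
}

console.log(nextPlanName(existing, 'descriptive-name')); // "0003-descriptive-name"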
Resulting layout:
NNNN-descriptive-name/
├── README.md
├── {plan-name}-plan.md
├── interfaces.md
├── gotchas.md
└── agents/
├── 000_agent_infrastructure.agent.md
├── 001_agent_....agent.md
    └── ...
Why the prefixes?
The numeric prefixes keep plans and agents lexicographically ordered and easy to reference in chat. Ask "Show me the next agent or agents we can run now in plan NNNN-plan-name" and the MCP server processes the agent files, applying weights and biases to pick the next best task, or the set of tasks that can run in parallel.
Deep Dive
plans/
├── 0001-feature-name/
│ ├── 0001-feature-name-plan.md # Main plan with specs
│ ├── interfaces.md # Interface contracts
│ ├── README.md # Status index
│ └── agents/ # Task files
│ ├── 000-setup.md
│ ├── 001-implement.md
│ └── 002-test.md
└── 0002-another-feature/
    └── ...
Agent files use frontmatter to track status:
---
status: GAP | WIP | PASS | BLOCKED
persona: coder | reviewer | pm | customer
depends_on: ["000-setup"]
files:
- src/components/Feature.tsx
---
get_next_task returns tasks scored by:
| Component | Max Points | Description |
|-----------|------------|-------------|
| Dependency | 40 | All dependencies satisfied = 40, else 0 |
| Priority | 30 | Based on agent number (lower = higher priority) |
| Workload | 30 | Based on file count (fewer = higher score) |
Biases adjust final scores:
{
"scoring": {
"biases": {
"plans": { "0030-urgent-feature": 20 },
"personas": { "coder": 5, "reviewer": -10 },
"statuses": { "GAP": 5, "WIP": -5 }
}
}
}
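As a rough illustration of how the pieces combine: the 40/30/30 split and the additive biases come from the table and config above, but the exact priority and workload formulas below are assumptions, not limps' implementation.
// Illustrative scorer, not the real one; the priority and workload helpers are assumed shapes.
function scoreTask(task, biases = {}) {
  const dependency = task.dependenciesSatisfied ? 40 : 0;
  const priority = Math.max(0, 30 - task.agentNumber);   // lower agent number scores higher (assumed)
  const workload = Math.max(0, 30 - task.fileCount * 3); // fewer files score higher (assumed)
  const bias =
    (biases.plans?.[task.plan] ?? 0) +
    (biases.personas?.[task.persona] ?? 0) +
    (biases.statuses?.[task.status] ?? 0);
  return dependency + priority + workload + bias;
}

// Example: a GAP task for a coder in plan "0030-urgent-feature"
const biases = {
  plans: { '0030-urgent-feature': 20 },
  personas: { coder: 5, reviewer: -10 },
  statuses: { GAP: 5, WIP: -5 },
};
console.log(scoreTask(
  { dependenciesSatisfied: true, agentNumber: 1, fileCount: 3, plan: '0030-urgent-feature', persona: 'coder', status: 'GAP' },
  biases
)); // 40 + 29 + 21 + 30 = 120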
process_doc and process_docs execute JavaScript in a secure QuickJS sandbox. User-provided code is statically validated and cannot use require, import, eval, fetch, XMLHttpRequest, WebSocket, process, timers, or other host/network APIs—so it cannot make external calls or access the host.
await process_doc({
path: 'plans/0001-feature/plan.md',
code: `
const features = extractFeatures(doc.content);
return features.filter(f => f.status === 'GAP');
`
});
Available extractors:
- extractSections() — Markdown headings
- extractFrontmatter() — YAML frontmatter
- extractFeatures() — Plan features with status
- extractAgents() — Agent metadata
- extractCodeBlocks() — Fenced code blocks
LLM sub-queries (opt-in):
await process_doc({
path: 'plans/0001/plan.md',
code: 'extractFeatures(doc.content)',
sub_query: 'Summarize each feature',
allow_llm: true,
llm_policy: 'force' // or 'auto' (skips small results)
});
Progressive disclosure via resources:
| Resource | Description |
|----------|-------------|
| plans://index | List of all plans (minimal) |
| plans://summary | Plan summaries with key info |
| plans://full | Full plan documents |
| decisions://log | Decision log entries |
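Clients fetch these with the standard MCP resources/read request. A minimal sketch as a JavaScript object:
// Standard MCP "resources/read" request; start with the minimal index,
// then drill into plans://summary or plans://full as needed.
const request = {
  jsonrpc: '2.0',
  id: 2,
  method: 'resources/read',
  params: { uri: 'plans://index' },
};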
Create .cursor/commands/run-agent.md:
# Run Agent
Start work on the next available task.
## Instructions
1. Use `get_next_task` to find the highest-priority task
2. Use `process_doc` to read the agent file
3. Use `update_task_status` to mark it WIP
4. Follow the agent's instructions
This integrates with limps MCP tools for seamless task management.
What is MCP?
Model Context Protocol (MCP) is an open standard that lets AI applications connect to external tools and data sources. Originally from Anthropic (Nov 2024), it is now part of the Linux Foundation's Agentic AI Foundation.
License
MIT
