MCP Task Server
A Model Context Protocol (MCP) server for task management with multi-agent coordination. Designed for use with Cursor IDE and other MCP-compatible AI tools.
Quick Publish
```bash
publish-mcp
```

This script (in `~/scripts/`) bumps the version, builds, and publishes to npm. It requires `$NPM_TOKEN` in your shell profile.
Cursor IDE Integration
Global Config (Recommended)
Add to ~/.cursor/mcp.json for all projects:
```json
{
  "mcpServers": {
    "task-server": {
      "command": "npx",
      "args": ["-y", "mcp-task-server"]
    }
  }
}
```

The server auto-detects your workspace. No per-project config needed.
Per-Project Config (Optional)
For explicit control, add to .cursor/mcp.json in your project:
```json
{
  "mcpServers": {
    "task-server": {
      "command": "npx",
      "args": ["-y", "mcp-task-server"],
      "env": {
        "TASK_WORKSPACE": "/absolute/path/to/your/project"
      }
    }
  }
}
```

After adding, reload Cursor: Cmd+Shift+P → "Developer: Reload Window"
Usage
Invoke any tool with `call <tool_name>`:
| Command | Description |
|---------|-------------|
| call help | List all 44 available tools |
| call list_tasks | Show all tasks |
| call next_task | Get recommended next task |
| call diagnose | Check server configuration |
| call show_memory | View shared context memories |
| call init_project | Initialise project structure |
Why `call`? Using just `help` may trigger Cursor's generic help. The `call` prefix ensures the MCP tool is invoked.
```mermaid
flowchart TB
  subgraph Cursor IDE
    A[AI Assistant]
  end
  subgraph MCP Task Server
    B[44 Tools]
    C[Preferences Engine]
    P[Project Context]
  end
  subgraph Your Machine
    D[~/.cursor/shared-context.json]
    E[memory_bank/tasks/]
    F[memory_bank/execution/progress*.md]
    G[~/.cursor/content-context.md]
  end
  A <-->|MCP Protocol| B
  B --> C
  B --> P
  C -->|read/write| D
  P -->|filter tasks| E
  B -->|read/write| E
  B -->|auto-sync| F
  B -->|read/write| G
  D -->|inject into responses| A
```

Features
- 44 MCP Tools: Comprehensive task management and coordination
- Multi-Project Support: Manage tasks across multiple projects from one workspace with automatic filtering
- Auto Workspace Detection: Works globally without per-project config
- Multi-Agent Coordination: Support for Planner, Worker, and Judge roles
- Dependency Tracking: Tasks can depend on other tasks
- Priority Levels: Critical, High, Medium, Low
- Scalable Storage: Individual task files with JSON registry and per-project markdown summaries
- Prompt-Based Operations: PRD parsing, task expansion, complexity analysis
- Shared Context: Reads user preferences from `~/.cursor/shared-context.json` to personalise prompts
- Content Context: Global content inventory at `~/.cursor/content-context.md` for content creators
- Project Initialisation: Scaffolds 28 template files across agent-kit, memory_bank, and cursor rules
- Wellness Tracking: Break reminder system via cursor rules, using session tracking in shared memory
- Flow Dashboard Integration: Fetch activity data from Flow Control dashboard for work/break recommendations
Quick Start
Note: For brand new empty projects, create at least one file first (e.g. `touch README.md`), then reload Cursor. See Troubleshooting for details.
```bash
# Add to a new project and initialise
call init_project({ project_name: "my-app" })

# Add your first task
call add_task({ title: "Set up development environment" })

# Get recommended next task
call next_task
```

Multi-Project Workflow
Dedicated Project Workspaces
When working in a project's own folder, bind it once:
```bash
# First time in this workspace - bind project to folder
call set_project({ project: "coach-platform", workspace: "/Users/andy/Projects/coaching/platform" })

# Future sessions auto-detect via workspace binding
call get_project({ workspace: "/Users/andy/Projects/coaching/platform" })
# Returns: { current_project: "coach-platform", source: "workspace" }
```

Hub Workflow (Obsidian, Docs, etc.)
When managing multiple projects from one workspace (like Obsidian):
Option A: Global context switching (one project at a time)
```bash
call set_project({ project: "coach-platform" })
call list_tasks  # Shows coach-platform tasks

call set_project({ project: "mcp-task-server" })
call list_tasks  # Now shows mcp-task-server tasks
```

Note: Global context is shared across all agent chats in the same workspace.
Option B: Explicit project parameter (parallel work)
```bash
# Don't set context - pass project directly
call list_tasks({ project: "coach-platform" })
call list_tasks({ project: "mcp-task-server" })
call add_task({ title: "Fix bug", project: "coach-platform" })
```

This is safer when jumping between projects or when multiple chats are open.
Progress Files
Progress files are generated per-project:
```
memory_bank/execution/
├── progress.md                  # Combined view (all projects)
├── progress-coach-platform.md   # Coach platform tasks only
└── progress-mcp-task-server.md  # MCP task server tasks only
```

Shared Context & Preferences
The server reads ~/.cursor/shared-context.json for user preferences and automatically injects them into tool responses and prompts.
How Preferences Flow
```mermaid
flowchart LR
  A[~/.cursor/shared-context.json] -->|read at runtime| B[MCP Task Server]
  B -->|inject into| C[Tool Responses]
  B -->|inject into| D[Generated Prompts]
  C -->|AI sees| E[preferences hint]
  D -->|AI uses| F[full context]
```

Agent Workflow for Preferences
Before creating or reviewing content, agents should:
- Call `show_memory` to check for user preferences
- Look for recognised memory categories (see below)
- Apply discovered preferences to all content
This is enforced by the Cursor rules created by init_project:
- `.cursor/rules/agent-workflow.mdc` - Task management protocol
- `.cursor/rules/wellness-check.mdc` - Break reminders based on time/duration
- `.cursor/rules/shared-context-sync.mdc` - Memory sync guidance
Memory Storage Architecture
```mermaid
flowchart TB
  subgraph Your Machine
    A[shared-context.json]
  end
  subgraph MCP Server
    B[show_memory] -->|read| A
    C[update_memory] -->|write| A
    D[Tool Responses] -->|include hint from| A
  end
  subgraph AI Assistant
    E[Sees preferences in responses]
    F[Applies to content creation]
  end
  D --> E --> F
```

Managing Memories
Memories use simple sequential IDs (1, 2, 3...) managed by the server:
| Action | What it does |
|--------|--------------|
| create | Add new memory with next available ID |
| update | Update existing memory by ID |
| delete | Remove memory by ID |
| sync | Create or update by title match (recommended) |
| migrate | One-time conversion of old IDs to sequential |
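The sync action's behaviour can be pictured with a small sketch. This is illustrative only, not the server's actual implementation; the `Memory` shape and `syncMemory` helper are assumptions:

```typescript
// Hypothetical sketch: "sync" matches by title (case-insensitive) and
// assigns the next sequential ID on create. Not the server's real code.
interface Memory {
  id: string;      // sequential: "1", "2", "3"...
  title: string;
  content: string;
}

function syncMemory(
  memories: Memory[],
  title: string,
  content: string
): { status: string; memory: Memory } {
  // Case-insensitive title match avoids duplicate memories
  const existing = memories.find(
    (m) => m.title.toLowerCase() === title.toLowerCase()
  );
  if (existing) {
    existing.content = content;
    return { status: "synced_existing", memory: existing };
  }
  // Next sequential ID = highest numeric ID + 1
  const nextId =
    memories.reduce((max, m) => Math.max(max, Number(m.id) || 0), 0) + 1;
  const created: Memory = { id: String(nextId), title, content };
  memories.push(created);
  return { status: "synced_new", memory: created };
}
```

Matching by title rather than ID is what makes repeated syncs idempotent: the same title always updates the same memory.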
Recognised Memory Titles
| Title (case-insensitive) | Purpose |
|--------------------------|---------|
| identity or values | User context for prompts |
| writing preferences | Formatting, style, avoided words |
| workflow or memory bank | Task management preferences |
| wellness preferences | Break reminder settings and thresholds |
| current_project | Active project for multi-project filtering |
The sync action is recommended because it matches by title, avoiding duplicates:
```bash
call update_memory({
  action: "sync",
  title: "Writing preferences",
  content: "British English, no emojis..."
})
# Returns: { status: "synced_new", memory: { id: "1", ... } }
```

Migrating from old IDs: If you have memories with old-style IDs (like `mem_1737900000_abc123` or Cursor IDs), run migrate once:

```bash
call update_memory({ action: "migrate" })
# Returns: { status: "migrated", changes: [{ old_id: "mem_...", new_id: "1", title: "..." }] }
# Or: { status: "already_migrated" } if already sequential
```

Why not Cursor IDs? Cursor's memory system is unreliable: agents cannot always create memories, and when they can, the ID is not always accessible. Sequential IDs managed by the server are simpler and more reliable.
Why Shared Context Exists
Cursor's memory system has limitations that make shared context necessary:
```mermaid
flowchart TB
  subgraph cursor [Cursor IDE]
    A[AI Assistant]
    B[Internal Memory DB]
  end
  subgraph mcp [MCP Server]
    C[Task Tools]
    D[Memory Tools]
  end
  subgraph problem [The Problem]
    E["AI can READ Cursor memories"]
    F["MCP cannot ACCESS Cursor memories"]
  end
  A -->|can see| B
  A -->|can call| C
  C -.-x|no access| B
  E --> G[Need bridge]
  F --> G
```

| Cursor Memory Limitation | Impact | Solution |
|-------------------------|--------|----------|
| No API access | MCP tools can't read Cursor's memory database | ~/.cursor/shared-context.json as shared store |
| Isolated per conversation | Memories don't persist across all contexts | Shared file accessible to all MCP servers |
| Unreliable memory creation | Agents cannot always create Cursor memories | Server-managed sequential IDs |
Workflow:
- Ask the agent to sync your preferences: `update_memory({ action: "sync", title: "Writing preferences", content: "..." })`
- The memory is saved to `~/.cursor/shared-context.json` with a sequential ID
- All MCP servers can now read it via `show_memory`
Where Preferences Are Used
| Tool Type | How Preferences Are Applied |
|-----------|----------------------------|
| list_tasks, get_task, add_task, update_task, next_task | Brief hint in response |
| parse_prd, expand_task, research_task | Full context in generated prompt |
| check_compliance | All preferences for validation |
| Wellness cursor rule | Session tracking and break reminders |
Compliance Checking
Validate files against your preferences:
```bash
# Review only
call check_compliance({ path: "README.md" })

# Review a folder
call check_compliance({ path: "docs/" })

# Review and fix issues
call check_compliance({ path: "README.md", fix: true })
```

See `agent-kit/SHARED_CONTEXT.md` for full setup and usage.
Installation
Via npx (recommended)
```bash
npx mcp-task-server
```

Global Install

```bash
npm install -g mcp-task-server
mcp-task-server
```

From Source

```bash
git clone https://github.com/yourusername/mcp-task-server.git
cd mcp-task-server
npm install
npm run build
npm start
```

Workspace Path Configuration
The server automatically detects your project's root directory using multiple strategies.
Detection Order
```mermaid
flowchart TD
  A[Start] --> B{TASK_WORKSPACE set?}
  B -->|Yes| C[Use TASK_WORKSPACE]
  B -->|No| D{WORKSPACE_FOLDER_PATHS set?}
  D -->|Yes| E[Use Cursor workspace path]
  D -->|No| F{Project markers found?}
  F -->|Yes| G["Use directory with .git, package.json, or memory_bank"]
  F -->|No| H[Fall back to cwd]
```

| Priority | Method | When Used |
|----------|--------|-----------|
| 1 | TASK_WORKSPACE env | Explicit per-project override |
| 2 | WORKSPACE_FOLDER_PATHS env | Auto-set by Cursor (undocumented) |
| 3 | Project marker detection | Walks up from cwd looking for .git, package.json, memory_bank |
| 4 | process.cwd() | Final fallback |
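The detection order above can be sketched roughly as follows. This is a simplified illustration; the function and parameter names (`resolveWorkspace`, `markerDir`) are assumptions, not the server's real API:

```typescript
// Simplified sketch of the workspace detection order described above.
// Names are illustrative, not the server's actual implementation.
interface Detection {
  root: string;
  source: string;
}

function resolveWorkspace(
  env: Record<string, string | undefined>,
  // nearest ancestor of cwd containing .git, package.json, or memory_bank
  markerDir: string | null,
  cwd: string
): Detection {
  if (env.TASK_WORKSPACE) {
    return { root: env.TASK_WORKSPACE, source: "TASK_WORKSPACE env" };
  }
  if (env.WORKSPACE_FOLDER_PATHS) {
    return { root: env.WORKSPACE_FOLDER_PATHS, source: "WORKSPACE_FOLDER_PATHS env" };
  }
  if (markerDir) {
    return { root: markerDir, source: "found project marker" };
  }
  return { root: cwd, source: "process.cwd() fallback" };
}
```

The explicit env var always wins, which is why setting `TASK_WORKSPACE` is a reliable fix when auto-detection misfires.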
Debugging
Run call get_version to see how the workspace was detected:
```bash
call get_version
# Returns: { workspace: { root: "/path/to/project", source: "found .git" } }
```

Possible sources:
- `"TASK_WORKSPACE env"` - Explicit override
- `"WORKSPACE_FOLDER_PATHS env"` - Detected from Cursor
- `"found .git"` / `"found package.json"` / `"found memory_bank"` - Project marker
- `"process.cwd() fallback"` - No detection, using current directory
Explicit Override
If auto-detection isn't working, set TASK_WORKSPACE in per-project config:
```json
{
  "mcpServers": {
    "task-server": {
      "command": "npx",
      "args": ["-y", "mcp-task-server"],
      "env": {
        "TASK_WORKSPACE": "/absolute/path/to/project"
      }
    }
  }
}
```

Troubleshooting
Tools Not Available in Agent
Symptom: Cursor Settings shows the MCP server connected with tools, but the Agent says it can only see cursor-browser-extension and cursor-ide-browser.
Cause: Cursor has a limitation where MCP tools don't work in empty workspaces (folders with no files).
```mermaid
flowchart LR
  A[Empty Folder] --> B["No workspace folders"]
  B --> C[MCP connects]
  C --> D[Tools registered]
  D --> E["Agent can't access"]
  F[Folder with files] --> G[Workspace registered]
  G --> H[MCP connects]
  H --> I[Tools work]
```

Fix:
- Create at least one file in the project: `touch README.md` (or `npm init -y`)
- Reload Cursor: Cmd+Shift+P → "Developer: Reload Window"
- Start a new Agent chat
- Try `call help`
Cached Old Version
Symptom: Server shows old version or missing tools after publishing update.
Fix: Clear the npx cache:
```bash
rm -rf ~/.npm/_npx
```

Then reload Cursor and start a new Agent chat.
Workspace Detected as Home Directory
Symptom: call get_version shows workspace as /Users/yourname instead of your project.
Cause: No project markers found (.git, package.json, memory_bank).
Fix: Either:
- Initialise git: `git init`
- Create a package.json: `npm init -y`
- Use an explicit override with the `TASK_WORKSPACE` env var
Viewing MCP Logs
Cursor logs MCP server activity. To debug issues:
```bash
# Find latest logs
ls -t ~/Library/Application\ Support/Cursor/logs/*/

# View task-server specific logs
cat ~/Library/Application\ Support/Cursor/logs/*/window*/exthost/anysphere.cursor-mcp/MCP\ user-task-server.log | tail -50

# Look for warnings
grep -r "warning\|error" ~/Library/Application\ Support/Cursor/logs/*/window*/exthost/anysphere.cursor-mcp/
```

Key log messages:
- `No workspace folders found` → Empty workspace issue
- `Found 44 tools` → Server connected successfully
- `Workspace: /path` → Shows detected workspace
Configuration
Configure via environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| TASK_MD_PATH | memory_bank/execution/progress.md | Path to markdown summary |
| TASK_JSON_PATH | memory_bank/tasks/tasks.json | Path to JSON registry |
| TASK_DIR | memory_bank/tasks | Directory for task files |
| FLOW_API_URL | https://flow.reids.net.au | Flow Control dashboard API URL (falls back to STATS_API_URL) |
| SLACK_BOT_TOKEN | - | Slack bot OAuth token (xoxb-...) |
| SLACK_CHANNEL_ID | - | Default Slack channel for notifications |
| SLACK_TEAM_ID | - | Optional Slack workspace ID |
Storage Architecture
The server uses a scalable storage model with three layers:
1. JSON Registry (memory_bank/tasks/tasks.json)
Machine-readable source of truth containing task IDs, status, dependencies, and subtasks.
```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "id": "1",
      "title": "Project Setup",
      "status": "done",
      "priority": "high",
      "subtasks": [
        { "id": 1, "title": "Init project", "status": "done" }
      ]
    }
  ]
}
```

2. Task Files (memory_bank/tasks/task_XXX.txt)
Human-readable detailed task files for each top-level task:
```
# Task ID: 1
# Title: Project Setup and Configuration
# Status: done
# Dependencies: None
# Priority: high
# Description: Initialise the project with required tooling.

# Details:
Full implementation details here...

# Subtasks:
## 1. Init project [done]
### Dependencies: None
### Description: Create initial project structure
```

3. Progress Summary
The server generates progress summaries automatically:
- `memory_bank/execution/progress.md` - Combined view of all projects
- `memory_bank/execution/progress-{project}.md` - Per-project progress files
Combined Progress (shows all projects):
```
# Implementation Progress

## Task Completion Summary

### Completed Tasks (3/5)
- Task 1: Project Setup
- Task 2: Database Schema
```

This architecture scales well for complex projects with many tasks and subtasks.
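As a rough illustration of how a progress summary could be derived from the JSON registry, here is a sketch under the registry shape shown above. The `summarise` helper is made up for illustration:

```typescript
// Sketch: derive a "Completed Tasks (m/n)" section from the tasks.json
// registry shape shown earlier. Illustrative only.
interface Task {
  id: string;
  title: string;
  status: string;
}

interface Registry {
  version: string;
  tasks: Task[];
}

function summarise(registry: Registry): string {
  const done = registry.tasks.filter((t) => t.status === "done");
  const lines = [
    `### Completed Tasks (${done.length}/${registry.tasks.length})`,
    ...done.map((t) => `- Task ${t.id}: ${t.title}`),
  ];
  return lines.join("\n");
}
```

Because the JSON registry is the source of truth, the markdown summaries can always be regenerated from it.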
Tools
Project Initialisation
| Tool | Description |
|------|-------------|
| init_project | Initialise project with agent-kit, memory_bank, and cursor rules |
| sync_rules | Sync .cursor/rules/ to latest templates (for existing projects) |
```bash
# Initialise with auto-detected project name
call init_project

# Initialise with custom name
call init_project({ project_name: "my-app" })

# Force overwrite existing files
call init_project({ force: true })

# Sync rules to existing project (add missing only)
call sync_rules

# Preview what would change
call sync_rules({ dry_run: true })

# Update existing rules to latest templates
call sync_rules({ update_existing: true })
```

Core Tools
| Tool | Description | Preferences |
|------|-------------|-------------|
| list_tasks | List all tasks, optionally filtered by status or assignee | ✓ hint |
| get_task | Get a specific task by ID with subtasks | ✓ hint |
| add_task | Create a new task | ✓ hint |
| update_task | Update task title, description, status, priority, or metadata | ✓ hint |
| complete_task | Mark a task as completed | |
| next_task | Get the next recommended task based on priority/dependencies | ✓ hint |
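A priority- and dependency-aware recommendation like next_task could work along these lines. This is only a sketch; the server's actual selection logic may differ, and the names here are illustrative:

```typescript
// Sketch: pick the highest-priority pending task whose dependencies
// are all done. Not the server's actual algorithm.
interface Task {
  id: string;
  title: string;
  status: "pending" | "in_progress" | "done";
  priority: "critical" | "high" | "medium" | "low";
  dependencies: string[];
}

const PRIORITY_ORDER = { critical: 0, high: 1, medium: 2, low: 3 } as const;

function nextTask(tasks: Task[]): Task | undefined {
  const doneIds = new Set(
    tasks.filter((t) => t.status === "done").map((t) => t.id)
  );
  return tasks
    .filter((t) => t.status === "pending")
    // A task is eligible only when every dependency is done
    .filter((t) => t.dependencies.every((d) => doneIds.has(d)))
    .sort((a, b) => PRIORITY_ORDER[a.priority] - PRIORITY_ORDER[b.priority])[0];
}
```

Note that a critical task blocked by an unfinished dependency is skipped in favour of a lower-priority task that is actually workable.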
Multi-Agent Coordination
| Tool | Description |
|------|-------------|
| claim_task | Claim a task for the calling agent |
| release_task | Release a claimed task |
| handoff_task | Transfer task to another role |
| review_task | Review a task (Judge role) |
| approve_task | Approve completed task (Judge role) |
| reject_task | Reject with feedback (Judge role) |
Task Breakdown
| Tool | Description |
|------|-------------|
| expand_task | Get prompt to break task into subtasks |
| add_subtask | Add a subtask to existing task |
| set_dependencies | Set task dependencies |
| remove_task | Remove a task |
Prompt-Based Tools
| Tool | Description | Preferences |
|------|-------------|-------------|
| parse_prd | Get prompt to parse PRD into tasks | ✓ full |
| research_task | Get prompt to research a task | ✓ full |
| analyse_complexity | Get prompt to analyse task complexity | ✓ full |
| check_compliance | Check file/folder against user preferences from shared context | ✓ full |
| analyse_project | Analyse project structure and suggest memory_bank updates | ✓ full |
Project Context Tools
| Tool | Description |
|------|-------------|
| set_project | Set project context (global or workspace-specific) |
| get_project | Get current project (checks workspace first, then global) |
| tag_all_tasks | Bulk tag tasks with a project name (for migration) |
Hybrid Workspace Support
Project context supports two modes:
- Global context - For hub workflows (like Obsidian) where you manage multiple projects from one workspace
- Workspace-specific binding - For dedicated project workspaces that should always use a specific project
```bash
# Hub workflow: Set global project (switchable)
call set_project({ project: "mcp-task-server" })
# Returns: { project: "mcp-task-server", binding: "global" }

call set_project({ project: "coach-platform" })
# Switches global context to coach-platform

# Dedicated workspace: Bind project to workspace path
call set_project({ project: "mcp-task-server", workspace: "/Users/andy/Projects/tools/mcp-task-server" })
# Returns: { project: "mcp-task-server", binding: "workspace", workspace: "..." }

# Get project: Checks workspace mapping first, then global
call get_project({ workspace: "/Users/andy/Projects/tools/mcp-task-server" })
# Returns: { current_project: "mcp-task-server", source: "workspace" }

call get_project()
# Returns: { current_project: "coach-platform", source: "global" }

# Bulk tag all unassigned tasks with a project
call tag_all_tasks({ project: "coach-platform" })
# Returns: { tagged: 100, skipped: 0, message: "Tagged 100 tasks..." }
```

Storage Structure
Project context is stored in ~/.cursor/shared-context.json:
```json
{
  "cursor_memories": [...],
  "metadata": {
    "current_project": "coach-platform",
    "workspace_projects": {
      "/Users/andy/Projects/tools/mcp-task-server": "mcp-task-server",
      "/Users/andy/Projects/coaching/platform": "coach-platform"
    }
  }
}
```

All task operations (list_tasks, add_task, next_task) automatically filter by the current project context.
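The workspace-first, then global resolution can be sketched as below. The metadata shape follows the JSON above; the `resolveProject` helper is an illustration, not the server's real code:

```typescript
// Sketch of get_project resolution: a workspace binding wins over the
// global current_project. Names are illustrative.
interface Metadata {
  current_project?: string;
  workspace_projects?: Record<string, string>;
}

function resolveProject(
  meta: Metadata,
  workspace?: string
): { current_project?: string; source: string } {
  // 1. Workspace-specific binding takes precedence
  const bound = workspace ? meta.workspace_projects?.[workspace] : undefined;
  if (bound) return { current_project: bound, source: "workspace" };
  // 2. Fall back to the global context
  if (meta.current_project) {
    return { current_project: meta.current_project, source: "global" };
  }
  return { source: "none" };
}
```

This ordering is what lets a dedicated workspace stay pinned to its project even while the global context switches between projects in a hub workspace.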
Content Context Tools
| Tool | Description |
|------|-------------|
| show_content_context | Read global content context file |
| update_content_context | Update a section of the content context |
```bash
# Read content context
call show_content_context
# Returns: { exists: true, path: "~/.cursor/content-context.md", content: "..." }

# Update a section
call update_content_context({ section: "Published Content", content: "- Post 1\n- Post 2" })
```

The content context file (`~/.cursor/content-context.md`) stores content inventory, themes, and locations for content creators.
Flow Dashboard Tools
| Tool | Description |
|------|-------------|
| stats | Fetch work activity statistics from Flow Control dashboard |
| usage | Fetch Cursor AI billing and spend limit info |
| flow | Get current flow state (score 0-10, zone, signals, nudge) |
```bash
# Quick check - today's hours and recommendation
call stats
# Returns: { today: { hours: 3.5 }, recommendation: { status: "continue", message: "..." } }

# Detailed with costs and billing period
call stats({ detailed: true })
# Returns: Full stats including costs, period, and totals

# Cursor billing - spend limit and usage
call usage
# Returns: { plan: "Ultra", onDemand: { used: 909.21, limit: 1000, remaining: 90.79, percentUsed: 91 } }

# Current flow state
call flow
# Returns: { score: 7.2, zone: "flow", status: "flowing", signals: { parallelism: 8, velocity: 9, ... },
#   sleepDebt: 25, sessionHours: 4.5, nudge: "In flow at 7.2/10. Keep going while it lasts." }
```

Slack Tools
| Tool | Description |
|------|-------------|
| slack_notify | Post a message to Slack (returns thread_ts for follow-up) |
| slack_check_replies | Get replies to a specific thread |
| slack_wait_reply | Block and poll until reply received or timeout |
| slack_ask | Post message + wait for reply in one call (for conversations) |
| slack_list_channels | List available channels |
| slack_add_reaction | Add emoji reaction to a message |
| slack_get_user | Get user profile information |
| slack_channel_history | Get recent channel messages |
```bash
# Post to default channel
call slack_notify({ message: "[QUESTION] Need input on auth approach" })
# Returns: { ok: true, ts: "1234567890.123456", thread_ts: "1234567890.123456" }

# Check for replies
call slack_check_replies({ channel_id: "C...", thread_ts: "1234567890.123456" })
# Returns: { reply_count: 2, replies: [{ user: "U...", text: "Use OAuth" }] }

# React to acknowledge
call slack_add_reaction({ channel_id: "C...", timestamp: "1234567890.123456", emoji: "thumbsup" })

# Wait for reply (blocks until reply or timeout)
call slack_wait_reply({ thread_ts: "1234567890.123456", timeout: 1800 })
# Returns when user replies, or after 30 min timeout:
# { status: "reply_received", replies: [{ user: "U...", text: "Use OAuth" }], waited_seconds: 45 }

# Conversation mode - post + wait in one call
call slack_ask({ message: "Should we use OAuth or JWT?" })
# Returns: { reply: "Use OAuth", thread_ts: "..." }

# Continue the conversation in same thread
call slack_ask({ message: "What about refresh tokens?", thread_ts: "..." })
# Returns: { reply: "Use rotating refresh tokens" }
```

Configuration: Set these environment variables in `~/.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "task-server": {
      "command": "npx",
      "args": ["-y", "mcp-task-server"],
      "env": {
        "SLACK_BOT_TOKEN": "xoxb-...",
        "SLACK_CHANNEL_ID": "C..."
      }
    }
  }
}
```

Required Slack Bot Scopes: `channels:history`, `channels:read`, `chat:write`, `reactions:write`, `users:read`
Auto-notifications: The handoff_task tool automatically posts to Slack when configured.
The stats tool returns data from your Flow Control dashboard (`FLOW_API_URL` env var, defaults to flow.reids.net.au):
| Field | Description |
|-------|-------------|
| today | Hours, commits, pipelines for today |
| thisWeek | Weekly totals |
| streak | Current and longest streak |
| recommendation | continue/pause/stop with message |
| costs (detailed) | Usage and subscription costs |
| period (detailed) | Billing cycle start/end dates |
| totals (detailed) | Active days, estimated hours, projects |
Content context file format (`~/.cursor/content-context.md`):
```
# Content Context

## Current State
| Property | Value |
|----------|-------|
| Last update | 2026-02-01 |

## Published Content
- Post 1: Title (2026-01-15)
- Post 2: Title (2026-01-20)

## Themes Covered
- Theme A
- Theme B

## Content Locations
| Type | Location |
|------|----------|
| Draft posts | ~/drafts |
| Published | ~/published |
```

Utility Tools
| Tool | Description |
|------|-------------|
| help | List all available tools with descriptions and parameters |
| get_version | Get server version and workspace detection info |
| diagnose | Diagnose MCP configuration, paths, and workspace detection |
| show_memory | Show shared context memories from ~/.cursor/shared-context.json |
| update_memory | Create, update, sync, or delete memories (sync matches by title) |
```bash
# List all tools
call help
# Returns: { server, version, usage, tool_count: 44, tools: [...] }

# Get help for specific tool
call help({ tool: "update_memory" })
# Returns: { name, description, usage: "call update_memory", parameters: {...} }

# Check version and workspace
call get_version
# Returns: { version: "x.x.x", workspace: { root: "/path", source: "found .git" } }

# Diagnose configuration
call diagnose                     # Basic info
call diagnose({ verbose: true })  # Include env vars and file checks

# Show all memories
call show_memory

# Search for specific memories
call show_memory({ search: "writing" })

# Force reload from file (clears cache)
call show_memory({ reload: true })

# Sync a memory by title (recommended - avoids duplicates)
call update_memory({
  action: "sync",
  title: "Writing preferences",
  content: "British English, ISO dates, no emojis..."
})

# Create a new memory (generates sequential ID)
call update_memory({
  action: "create",
  title: "Project conventions",
  content: "Use TypeScript strict mode."
})

# Update an existing memory by ID
call update_memory({
  action: "update",
  id: "1",
  title: "Writing preferences",
  content: "Updated content..."
})

# Delete a memory
call update_memory({ action: "delete", id: "1" })

# Migrate old IDs to sequential (one-time)
call update_memory({ action: "migrate" })

# Analyse project and get suggestions for memory_bank
call analyse_project

# Focus on specific area
call analyse_project({ focus: "tech" })          # Just tech stack
call analyse_project({ focus: "brief" })         # Just project brief
call analyse_project({ focus: "active" })        # Just current focus
call analyse_project({ focus: "architecture" })  # Just architecture
```

Multi-Agent Mode
The server supports three agent roles:
Planner
- Creates and organises tasks
- Sets dependencies and priorities
- Analyses complexity
- Cannot execute tasks
Worker
- Claims and executes tasks
- Updates progress
- Adds subtasks during implementation
- Cannot create top-level tasks
Judge
- Reviews completed work
- Approves or rejects with feedback
- Cannot claim or modify tasks
Example
```bash
# Solo mode (no role required)
call add_task({ title: "Build login page" })

# Multi-agent mode (explicit identification)
call add_task({
  title: "Build login page",
  description: "Create a secure login page with form validation",
  agent_id: "planner-1",
  role: "planner"
})

call claim_task({
  task_id: "5",
  agent_id: "worker-1",
  role: "worker"
})
```

Project Structure
After running init_project:
```
your-project/
├── agent-kit/
│   ├── AGENT_RULES.md            # Role definitions and permissions
│   ├── KICKOFF.md                # Session startup checklist
│   ├── TASKS.md                  # Task reference (points to memory_bank)
│   ├── HANDOFF.md                # Handoff protocol reference
│   └── SHARED_CONTEXT.md         # Shared context documentation
├── memory_bank/
│   ├── architecture/
│   │   ├── architecture.md       # High-level architecture overview
│   │   ├── tech.md               # Technical stack and context
│   │   ├── models.md             # Data models
│   │   ├── services.md           # System services
│   │   ├── deployment.md         # Deployment guide
│   │   ├── kubernetes.md         # Kubernetes deployment (if applicable)
│   │   └── webhooks.md           # Webhooks implementation (if applicable)
│   ├── context/
│   │   ├── context.md            # Context index
│   │   ├── brief.md              # Project overview
│   │   ├── active.md             # Current focus
│   │   ├── product.md            # Product context
│   │   ├── canvas.md             # Lean canvas
│   │   └── changelog.md          # Change log
│   ├── execution/
│   │   ├── execution.md          # Execution overview
│   │   ├── progress.md           # Task summary (auto-synced)
│   │   ├── decisions.md          # Decision log
│   │   ├── debug.md              # Debug diary
│   │   └── git.md                # Git setup and code quality
│   ├── reference/
│   │   └── README.md             # Reference materials folder
│   └── tasks/
│       ├── tasks.json            # Task registry (source of truth)
│       ├── task_001.txt          # Detailed task file
│       ├── task_002.txt          # ...
│       └── ...
└── .cursor/
    └── rules/
        ├── agent-workflow.mdc    # Task management protocol
        ├── wellness-check.mdc    # Break reminders
        └── shared-context-sync.mdc  # Memory sync guidance
```

The full structure includes 28 template files covering architecture, context, execution tracking, and cursor rules.
Migration from v1.x
If you have existing .taskmaster/tasks.json:
- Run any task command (e.g. `list_tasks`)
- The server auto-migrates to `memory_bank/tasks/`
- Individual task files are generated
- The progress summary is updated
Publishing to npm
Quick Publish (recommended)
```bash
publish-mcp
```

The publish-mcp script (in `~/scripts/`) handles everything:
- Prompts for version bump (patch/minor/none)
- Updates the `VERSION` constant in source
- Builds the project
- Publishes to npm
One-Time Setup
1. Create npm Token
- Go to npmjs.com/settings/~/tokens
- Click Generate New Token → Granular Access Token
- Configure:
  - Token name: `mcp-task-server-publish`
  - Bypass two-factor authentication (2FA): ✓ Check this (required for CLI)
  - Allowed IP ranges: Optional - add your IP as `x.x.x.x/32`
  - Packages and scopes: Read and write → All packages
  - Expiration: 90 days (maximum)
- Copy the token immediately
Note: npm no longer supports TOTP authenticator apps for new 2FA setups. Automation tokens with "Bypass 2FA" are required for scripted publishing.
2. Add Token to Shell
```bash
echo 'export NPM_TOKEN="npm_xxxxxxxxxxxx"' >> ~/.zshrc
source ~/.zshrc
```

Manual Publishing
```bash
cd /path/to/mcp-task-server
npm version patch   # or minor/major
npm run build
npm publish --//registry.npmjs.org/:_authToken=$NPM_TOKEN
```

Verify Publication

```bash
npm info mcp-task-server
```

Token Security
- Regenerate tokens before 90-day expiry
- Use IP restrictions for static IPs
- Never commit tokens to git
Development
```bash
# Install dependencies
npm install

# Build
npm run build

# Watch mode
npm run dev

# Test locally before publishing
npm pack  # Creates .tgz file for inspection
```

Testing Local Changes
To test changes without publishing, create a per-project MCP config that points to your local build:
```json
// .cursor/mcp.json (in this project)
{
  "mcpServers": {
    "task-server": {
      "command": "node",
      "args": ["/Users/andy/Projects/tools/mcp-task-server/dist/index.js"]
    }
  }
}
```

This overrides the global config only for this workspace. Other projects continue using the npm version.
After making changes:
- Run `npm run build`
- Reload Cursor (Cmd+Shift+P → "Developer: Reload Window")
- Test with `call help` to verify tools are available
License
MIT
