
harness-mcp-v2

v3.0.0

Give AI agents full access to the Harness.io platform — manage pipelines, deployments, cloud costs, chaos engineering, feature flags, SEI, and 125+ resource types through 11 MCP tools

Harness MCP Server 2.0

An MCP (Model Context Protocol) server that gives AI agents full access to the Harness.io platform through 11 consolidated tools and 169 resource types.

Why Use This MCP Server

Most MCP servers map one tool per API endpoint. For a platform as broad as Harness, that means 240+ tools — and LLMs get worse at tool selection as the count grows. Context windows fill up with schemas, and every new endpoint means new code.

This server is built differently:

  • 11 tools, 169 resource types. A registry-based dispatch system routes harness_list, harness_get, harness_create, etc. to any Harness resource — pipelines, services, environments, orgs, projects, feature flags, cost data, and more. The LLM picks from 11 tools instead of hundreds.
  • Full platform coverage. 31 toolsets spanning CI/CD, GitOps, Feature Flags, Cloud Cost Management, Security Testing, Chaos Engineering, Database DevOps, Internal Developer Portal, Software Supply Chain, Governance, Service Overrides, Visualizations, and more. Not just pipelines — the entire Harness platform.
  • Multi-project workflows out of the box. Agents discover organizations and projects dynamically — no hardcoded env vars needed. Ask "show failed executions across all projects" and the agent can navigate the full account hierarchy.
  • 30 prompt templates. Pre-built prompts for common workflows: build & deploy apps end-to-end, debug failed pipelines, review DORA metrics, triage vulnerabilities, optimize cloud costs, audit access control, plan feature flag rollouts, review pull requests, approve pending pipelines, and more.
  • Works everywhere. Stdio transport for local clients (Claude Desktop, Cursor, Windsurf), HTTP transport for remote/shared deployments, Docker and Kubernetes ready.
  • Zero-config start. Just provide a Harness API key. Account ID is auto-extracted from PAT tokens, org/project defaults are optional, and toolset filtering lets you expose only what you need.
  • Extensible by design. Adding a new Harness resource means adding a declarative data file — no new tool registration, no schema changes, no prompt updates.
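The registry-based dispatch in the first bullet can be pictured as a table of declarative entries plus one generic router. A minimal TypeScript sketch of the idea (resource names, paths, and function names here are illustrative, not the server's actual code):

```typescript
// Hypothetical sketch of registry-based dispatch: each resource type is a
// declarative entry, and one generic handler routes list/get/create/etc.
type Operation = "list" | "get" | "create" | "update" | "delete";

interface ResourceEntry {
  // API path relative to the Harness base URL (illustrative, not real routes)
  path: string;
  operations: Operation[];
}

const registry: Record<string, ResourceEntry> = {
  pipeline: {
    path: "/pipeline/api/pipelines",
    operations: ["list", "get", "create", "update", "delete"],
  },
  organization: { path: "/ng/api/organizations", operations: ["list", "get"] },
};

// A single harness_list-style dispatcher: validate the resource type and
// operation against the registry, then return the route to call.
function resolveRoute(resourceType: string, op: Operation): string {
  const entry = registry[resourceType];
  if (!entry) throw new Error(`Unknown resource type: ${resourceType}`);
  if (!entry.operations.includes(op)) {
    throw new Error(`${resourceType} does not support ${op}`);
  }
  return entry.path;
}
```

Adding a new resource type means adding one registry entry; the 11 tools and their schemas stay unchanged, which is why the tool count does not grow with platform coverage.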

Prerequisites

Before installing or running the server, you need a Harness API key:

  1. Log in to your Harness account
  2. Go to My Profile → API Keys → + New API Key
  3. Create a new Token under the API key — this generates a PAT in the format pat.<accountId>.<tokenId>.<secret>
  4. Save the token somewhere secure — you'll need it in the next step

For detailed instructions, see the Harness API Quickstart.

Quick Start

Option 0: Hosted Harness MCP

If your Harness account has the hosted MCP service enabled, clients that support remote MCP servers can connect directly to the managed endpoint instead of running the server locally.

Important: The hosted MCP service uses Harness Platform OAuth, not HARNESS_API_KEY. It must also be enabled/configured per account by Harness Support before the endpoint can be used.

See Hosted Harness MCP for configuration examples.

Option 1: npx (Recommended)

No install required — just run it:

HARNESS_API_KEY=pat.xxx.xxx.xxx npx harness-mcp-v2@latest

Or configure the API key in your AI client (see Client Configuration below).

# Stdio transport (default — for Claude Desktop, Cursor, Windsurf, etc.)
HARNESS_API_KEY=pat.xxx npx harness-mcp-v2

# HTTP transport (for remote/shared deployments)
HARNESS_API_KEY=pat.xxx npx harness-mcp-v2 http --port 8080

Note: The account ID is auto-extracted from PAT tokens (pat.<accountId>.<tokenId>.<secret>), so HARNESS_ACCOUNT_ID is only needed for non-PAT API keys.
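The extraction itself is simple: a PAT has exactly four dot-separated segments, and the account ID is the second one. A sketch of the idea (hypothetical helper name; the server's internal implementation may differ):

```typescript
// Read the account ID out of a token shaped like pat.<accountId>.<tokenId>.<secret>.
function extractAccountId(apiKey: string): string | undefined {
  const parts = apiKey.split(".");
  // A PAT has exactly four segments and the "pat" prefix.
  if (parts.length === 4 && parts[0] === "pat") return parts[1];
  return undefined; // non-PAT keys need HARNESS_ACCOUNT_ID set explicitly
}
```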

Option 2: Global Install

npm install -g harness-mcp-v2

# Then run directly
harness-mcp-v2

Option 3: Build from Source

For development or customization:

git clone https://github.com/harness/mcp-server.git
cd mcp-server
pnpm install
pnpm build

# Run
pnpm start              # Stdio transport
pnpm start:http         # HTTP transport
pnpm inspect            # Test with MCP Inspector

Anthropic MCP Directory bundle

The MCPB bundle manifest lives in [mcp-directory/](mcp-directory/), and the bundle icon is tracked at [icon.png](icon.png) in the repository root. Copy mcp-directory/manifest.json to the bundle root after pnpm build so the generated archive contains root-level manifest.json, icon.png, build/, package.json, and production node_modules/.

To keep the archive small, build MCPB packages from a staging directory:

pnpm prepare:mcpb

The staged package is written to dist/mcpb/ with production dependencies installed using npm's flat layout.

CLI Usage

harness-mcp-v2 [stdio|http] [--port <number>]

Options:
  --port <number>  Port for HTTP transport (default: 3000, or PORT env var)
  --help           Show help message and exit
  --version        Print version and exit

Transport defaults to stdio if not specified. Use http for remote/shared deployments.

HTTP Transport

When running in HTTP mode, the server exposes:

| Endpoint | Method | Description |
| -------- | ------- | ---------------------------------------------------------------- |
| /mcp | POST | MCP JSON-RPC endpoint (initialize + session requests) |
| /mcp | GET | SSE stream for server-initiated messages (progress, elicitation) |
| /mcp | DELETE | Terminate an active MCP session |
| /mcp | OPTIONS | CORS preflight |
| /health | GET | Health check — returns { "status": "ok", "sessions": <count> } |

The HTTP transport runs in session-based mode. A new MCP session is created on initialize, the server returns an mcp-session-id header, and subsequent requests for that session must include the same header.

Operational constraints in HTTP mode:

  • POST /mcp without mcp-session-id must be an initialize request.
  • POST /mcp, GET /mcp, and DELETE /mcp for existing sessions require the mcp-session-id header.
  • GET /mcp is used for SSE notifications (progress updates and elicitation prompts).
  • Idle sessions are reaped after 30 minutes.
  • GET /health is the only non-MCP endpoint.
  • Request body size is capped by HARNESS_MAX_BODY_SIZE_MB (default 10 MB).
  • Set x-harness-pipeline-version: 0 or 1 on the initialize request to select V0 or V1 pipeline resources for that HTTP session.
# Health check
curl http://localhost:3000/health

# MCP initialize request (capture mcp-session-id response header)
curl -i -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}'

# Subsequent MCP request (use returned session ID)
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "mcp-session-id: <session-id>" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'

# Terminate session
curl -X DELETE http://localhost:3000/mcp \
  -H "mcp-session-id: <session-id>"

Client Configuration

Note: HARNESS_ORG and HARNESS_PROJECT are optional. They set the org ID and project ID used when not specified per tool call. Agents can discover orgs and projects dynamically using harness_list(resource_type="organization") and harness_list(resource_type="project"). The deprecated names HARNESS_DEFAULT_ORG_ID and HARNESS_DEFAULT_PROJECT_ID are still accepted for backward compatibility.
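For example, a client entry that sets both optional defaults alongside the API key (illustrative values):

```json
{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx",
        "HARNESS_ORG": "default",
        "HARNESS_PROJECT": "my-project"
      }
    }
  }
}
```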

Hosted Harness MCP

Harness also supports a hosted MCP endpoint for accounts that have the managed service enabled. This is useful when you want a shared remote MCP endpoint instead of running npx harness-mcp-v2 or self-hosting the HTTP transport yourself.

Important: Hosted MCP authentication uses Harness Platform OAuth. It does not use HARNESS_API_KEY in the client config. Hosted MCP availability is configured per Harness account, so you will need to work with Harness Support to enable/configure the setting before using it.

The hosted endpoint https://mcp.harness.io/mcp is a managed service. Client-side MCP config in Claude, Cursor, or Cowork cannot override which Harness environment it routes to. For Harness0 or another private Harness SaaS environment, ask Harness Support to enable/configure hosted MCP for that environment, or run the local/self-hosted server and set HARNESS_BASE_URL to the target Harness host.

Hosted MCP example:

{
  "mcpServers": {
    "harness-prod1-mcp": {
      "url": "https://mcp.harness.io/mcp",
      "auth": {
        "CLIENT_ID": "mcp-client"
      }
    }
  }
}

Example with both hosted and local entries:

{
  "mcpServers": {
    "harness-hosted": {
      "url": "https://mcp.harness.io/mcp",
      "auth": {
        "CLIENT_ID": "mcp-client"
      }
    },
    "harness-local": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Troubleshooting npx ENOENT or node: No such file or directory

GUI apps (Cursor, Claude Desktop, Windsurf, VS Code) don't inherit your shell's PATH, so they often can't find npx or node. Fix this by using absolute paths and explicitly setting PATH in the env block:

{
  "mcpServers": {
    "harness": {
      "command": "/absolute/path/to/npx",
      "args": ["-y", "harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx",
        "PATH": "/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin"
      }
    }
  }
}

Find your paths with which npx and which node in a terminal, then make sure the directory containing node is included in the PATH value above. Common locations:

  • Homebrew (macOS): /opt/homebrew/bin/npx
  • nvm: ~/.nvm/versions/node/v20.x.x/bin/npx (run nvm which current to find the exact path)
  • System Node: /usr/local/bin/npx

Claude Desktop (claude_desktop_config.json)

npx (zero install)

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

node (local install)

npm install -g harness-mcp-v2

{
  "mcpServers": {
    "harness": {
      "command": "harness-mcp-v2",
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Claude Code (via claude mcp add)

npx (zero install)

claude mcp add harness -- npx harness-mcp-v2

node (local install)

npm install -g harness-mcp-v2
claude mcp add harness -- harness-mcp-v2

Then set HARNESS_API_KEY in your environment or .env file.

Cursor (.cursor/mcp.json)

npx (zero install)

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

node (local install)

npm install -g harness-mcp-v2

{
  "mcpServers": {
    "harness": {
      "command": "harness-mcp-v2",
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Windsurf (~/.windsurf/mcp.json)

npx (zero install)

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

node (local install)

npm install -g harness-mcp-v2

{
  "mcpServers": {
    "harness": {
      "command": "harness-mcp-v2",
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Using a local build from source?

Replace the command with the path to your built index.js:

{
  "command": "node",
  "args": ["/absolute/path/to/harness-mcp-v2/build/index.js", "stdio"]
}

MCP Gateway

The Harness MCP server is fully compatible with MCP Gateways — reverse proxies that provide centralized authentication, governance, tool routing, and observability across multiple MCP servers. Since the server implements the standard MCP protocol with both stdio and HTTP transports, it works behind any MCP-compliant gateway with no code changes.

Why use a gateway?

  • Centralized credential management — no API keys in agent configs
  • Governance & audit logging for all tool calls across teams
  • Single endpoint for agents instead of N connections to N MCP servers
  • Access control — restrict which teams can use which tools

Docker MCP Gateway

Register the server in your Docker MCP Gateway configuration:

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Portkey

Add the Harness MCP server to your Portkey MCP Gateway for enterprise governance, cost tracking, and multi-LLM routing:

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

LiteLLM

Add to your LiteLLM proxy config:

mcp_servers:
  - name: harness
    command: npx
    args:
      - harness-mcp-v2
    env:
      HARNESS_API_KEY: "pat.xxx.xxx.xxx"

Envoy AI Gateway

The server works with Envoy AI Gateway's MCP support via HTTP transport:

# Start the server in HTTP mode
HARNESS_API_KEY=pat.xxx.xxx.xxx npx harness-mcp-v2 http --port 8080

Then configure Envoy to route to http://localhost:8080/mcp as an upstream MCP backend.

Kong

Use Kong's AI MCP Proxy plugin to expose the Harness MCP server through your existing Kong gateway infrastructure.

Other Gateways

Any gateway that supports the MCP specification (Microsoft MCP Gateway, IBM ContextForge, Cloudflare Workers, etc.) can proxy this server. For stdio-based gateways, use the default transport. For HTTP-based gateways, start the server with http transport and point the gateway at the /mcp endpoint.

Docker

Build and run the server as a Docker container:

# Build the image
pnpm docker:build

# Run with your .env file
pnpm docker:run

# Or run directly with env vars
docker run --rm -p 3000:3000 \
  -e HARNESS_API_KEY=pat.xxx.xxx.xxx \
  -e HARNESS_ACCOUNT_ID=your-account-id \
  harness-mcp-server

The container runs in HTTP mode on port 3000 by default with a built-in health check.

Kubernetes

Deploy to a Kubernetes cluster using the provided manifests:

# 1. Edit the Secret with your real credentials
#    k8s/secret.yaml — replace HARNESS_API_KEY and HARNESS_ACCOUNT_ID

# 2. Apply all manifests
kubectl apply -f k8s/

# 3. Verify the deployment
kubectl -n harness-mcp get pods

# 4. Port-forward for local testing
kubectl -n harness-mcp port-forward svc/harness-mcp-server 3000:80
curl http://localhost:3000/health

The deployment runs 2 replicas with readiness/liveness probes, resource limits, and non-root security context. The Service exposes port 80 internally (targeting container port 3000).

Configuration

The server automatically loads environment variables from a .env file in the project root if one exists. Copy .env.example to .env and fill in your values. Environment variables can also be set via your shell or MCP client config.
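A minimal .env might look like this (illustrative values; only HARNESS_API_KEY is required):

```ini
HARNESS_API_KEY=pat.xxx.xxx.xxx
# Optional defaults
HARNESS_ORG=default
HARNESS_PROJECT=my-project
LOG_LEVEL=info
```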

| Variable | Required | Default | Description |
| --------------------------- | -------- | --------------------------- | ----------- |
| HARNESS_API_KEY | Yes | -- | Harness personal access token or service account token |
| HARNESS_ACCOUNT_ID | No | (from PAT) | Harness account identifier. Auto-extracted from PAT tokens; only needed for non-PAT API keys |
| HARNESS_BASE_URL | No | https://app.harness.io | Harness API/UI base URL for local stdio or self-hosted HTTP deployments. Set this to environments such as https://harness0.harness.io when running the server yourself. It does not affect the managed https://mcp.harness.io/mcp hosted endpoint |
| HARNESS_ORG | No | -- | Organization ID. Used when org_id is not specified per tool call. If omitted, org_id must be provided explicitly. Agents can also discover orgs dynamically via harness_list(resource_type="organization") |
| HARNESS_PROJECT | No | -- | Project ID. Used when project_id is not specified per tool call. Agents can also discover projects dynamically via harness_list(resource_type="project") |
| HARNESS_API_TIMEOUT_MS | No | 30000 | HTTP request timeout in milliseconds |
| HARNESS_MAX_RETRIES | No | 3 | Retry count for transient failures (429, 5xx) |
| HARNESS_MAX_BODY_SIZE_MB | No | 10 | Max HTTP request body size in MB for http transport |
| HARNESS_RATE_LIMIT_RPS | No | 10 | Client-side request throttle (requests per second) to Harness APIs |
| LOG_LEVEL | No | info | Log verbosity: debug, info, warn, error |
| HARNESS_TOOLSETS | No | (defaults) | Comma-separated toolset list. Empty loads default toolsets and excludes opt-in toolsets such as ai-evals. Supports +name to add opt-in toolsets and -name to remove defaults (see Toolset Filtering) |
| HARNESS_READ_ONLY | No | false | Block all mutating operations (create, update, delete, execute). Only list and get are allowed. Useful for shared/demo environments |
| HARNESS_AUTO_APPROVE_RISK | No | none | Risk-based auto-approve threshold for autonomous workflows. Operations at or below this risk proceed without confirmation. Values: none, low_write, medium_write, high_write, all. See Elicitation |
| HARNESS_SKIP_ELICITATION | No | false | Deprecated — use HARNESS_AUTO_APPROVE_RISK=all instead. Kept for backward compatibility |
| HARNESS_ALLOW_HTTP | No | false | Allow non-HTTPS HARNESS_BASE_URL. By default, the server enforces HTTPS for security. Set to true only for local development against a non-TLS Harness instance |
| HARNESS_PIPELINE_VERSION | No | 0 | (Alpha) Pipeline YAML version. 0 loads the pipeline resource type and excludes pipeline_v1; 1 loads pipeline_v1 and excludes pipeline. HTTP sessions can override this at initialize time with x-harness-pipeline-version: 0 or 1 |
| HARNESS_MCP_ALLOWED_HOSTS | No | -- | Comma-separated hostnames allowed by HTTP transport Host-header validation. mcp.harness.io is allowed by default for localhost binds; add proxy/custom domains here |
| HARNESS_MCP_LOG_FILE | No | ~/.claude/harness-mcp.log | File used for stdio disconnect/crash diagnostics when stderr may no longer be available |

HTTPS Enforcement

HARNESS_BASE_URL must use HTTPS by default. If you set a non-HTTPS URL (e.g. http://localhost:8080), the server will refuse to start with:

HARNESS_BASE_URL must use HTTPS (got "http://..."). If you need HTTP for local development, set HARNESS_ALLOW_HTTP=true.

Audit Logging

All write operations (harness_create, harness_update, harness_delete, harness_execute) emit structured audit log entries to stderr. Each entry includes the tool name, resource type, operation, identifiers, and timestamp. This provides an audit trail without requiring external logging infrastructure.
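The exact field names are an assumption here, but an audit entry has roughly this shape (illustrative, not the server's exact schema):

```json
{
  "timestamp": "2025-06-01T12:00:00Z",
  "tool": "harness_delete",
  "resource_type": "trigger",
  "operation": "delete",
  "org_id": "default",
  "project_id": "my-project",
  "resource_id": "nightly-trigger"
}
```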

Tools Reference

The server exposes 11 MCP tools. Most API tools accept org_id and project_id as optional overrides — if omitted, they fall back to HARNESS_ORG and HARNESS_PROJECT. harness_describe is local metadata only and does not use org/project scope.

URL support: Most API-facing tools accept a url parameter — paste a Harness UI URL and the server auto-extracts org, project, resource type, resource ID, pipeline ID, and execution ID. harness_describe does not accept url.

| Tool | Description |
| ------------------ | ----------- |
| harness_describe | Discover available resource types, operations, and fields. No API call — returns local registry metadata. |
| harness_schema | Fetch exact JSON Schema definitions for creating/updating resources. Supports deep drilling via path parameter. |
| harness_list | List resources of a given type with filtering, search, and pagination. |
| harness_get | Get a single resource by its identifier. |
| harness_create | Create a new resource. Supports inline and remote (Git-backed) pipelines. Prompts for user confirmation via elicitation. |
| harness_update | Update an existing resource. Supports inline and remote (Git-backed) pipelines. Prompts for user confirmation via elicitation. |
| harness_delete | Delete a resource. Prompts for user confirmation via elicitation. Destructive. |
| harness_execute | Execute an action on a resource (run/retry pipeline, import pipeline from Git, toggle flag, sync app). Prompts for user confirmation via elicitation. For pipeline runs, use the runtime-input workflow below (supports branch/tag/pr_number/commit_sha shorthand expansion). |
| harness_search | Search across multiple resource types in parallel with a single query. |
| harness_diagnose | Diagnose pipeline, connector, delegate, and gitops_application resources (aliases: execution -> pipeline, gitops_app -> gitops_application). For pipelines, returns stage/step timing and failure details; for connectors/delegates/GitOps apps, returns targeted health and troubleshooting signals. |
| harness_status | Get a real-time project health dashboard — recent executions, failure rates, and deep links. |

Tool Examples

Discover what resources are available:

{ "resource_type": "pipeline" }

List organizations in the account:

{ "resource_type": "organization" }

List projects in an organization:

{ "resource_type": "project", "org_id": "default" }

List pipelines in a project:

{ "resource_type": "pipeline", "search_term": "deploy", "size": 10 }

Get a specific service:

{ "resource_type": "service", "resource_id": "my-service-id" }

Run a pipeline:

{
  "resource_type": "pipeline",
  "action": "run",
  "resource_id": "my-pipeline",
  "inputs": { "tag": "v1.2.3" }
}

Toggle a feature flag:

{
  "resource_type": "feature_flag",
  "action": "toggle",
  "resource_id": "new_checkout_flow",
  "enable": true,
  "environment": "production"
}

Search across all resource types:

{ "query": "payment-service" }

Diagnose an execution by ID (summary mode — default):

{ "execution_id": "abc123XYZ" }

Diagnose from a Harness URL:

{ "url": "https://app.harness.io/ng/account/.../pipelines/myPipeline/executions/abc123XYZ/pipeline" }

Diagnose connector connectivity:

{ "resource_type": "connector", "resource_id": "my_github_connector" }

Diagnose delegate health:

{ "resource_type": "delegate", "resource_id": "delegate-us-east-1" }

Diagnose a GitOps application (with options):

{
  "resource_type": "gitops_application",
  "resource_id": "checkout-app",
  "options": { "agent_id": "gitops-agent-1" }
}

Get the latest execution report for a pipeline:

{ "pipeline_id": "my-pipeline" }

Full diagnostic mode with YAML and failed step logs:

{ "execution_id": "abc123XYZ", "summary": false }

Summary mode with logs enabled (best of both):

{ "execution_id": "abc123XYZ", "include_logs": true }

Get project health status:

{ "org_id": "default", "project_id": "my-project", "limit": 5 }

List database schemas filtered by migration type:

{ "resource_type": "database_schema", "migration_type": "Liquibase" }

List database instances for a schema:

{ "resource_type": "database_instance", "dbschema_id": "my_schema" }

Get the resolved LLM authoring pipeline for a schema and instance:

{ "resource_type": "database_llm_authoring_pipeline", "resource_id": "my_schema", "dbinstance_id": "prod_db" }

List snapshot object names (e.g. tables) for a schema instance:

{
  "resource_type": "database_snapshot_object",
  "dbschema_id": "my_schema",
  "dbinstance_id": "prod_db",
  "object_type": "Table"
}

Get full snapshot metadata for specific named objects:

{
  "resource_type": "database_snapshot_object",
  "resource_id": "prod_db",
  "params": {
    "dbschema_id": "my_schema",
    "object_type": "Table",
    "object_names": ["users", "orders"]
  }
}

Pipeline Run Workflow (Recommended)

Use this sequence to reduce execution-time input errors:

  1. Discover required runtime inputs
     • harness_get(resource_type="runtime_input_template", resource_id="<pipeline_id>")
     • The returned template shows <+input> placeholders that need values.
  2. Choose input strategy
     • Simple variables: pass flat key-value inputs (for example {"branch":"main","env":"prod"}).
     • Complex/structural inputs: use input_set_ids (CI codebase/build blocks and nested template inputs are best handled this way).
     • CI codebase shorthand keys (pipeline run only):

       | Shorthand key | Expanded structure |
       | ------------- | -------------------------------------------------- |
       | branch | build.type=branch, build.spec.branch=<value> |
       | tag | build.type=tag, build.spec.tag=<value> |
       | pr_number | build.type=PR, build.spec.number=<value> |
       | commit_sha | build.type=commitSha, build.spec.commitSha=<value> |

     • Constraint: shorthand expansion is skipped when inputs.build is already present (explicit build wins).
  3. Execute the run
     • harness_execute(resource_type="pipeline", action="run", resource_id="<pipeline_id>", ...)
  4. Optional: combine both
     • Use input_set_ids for the base shape and inputs for simple overrides.

If required fields are unresolved, the tool returns a pre-flight error with expected keys and suggested input sets. You can inspect available shorthand mappings with harness_describe(resource_type="pipeline") (executeActions.run.inputShorthands).
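The shorthand expansion and the explicit-build constraint described above can be sketched as follows (hypothetical TypeScript; the server's actual implementation may differ):

```typescript
// Rewrite flat CI codebase shorthand keys (branch/tag/pr_number/commit_sha)
// into the nested build block a pipeline run expects.
type Inputs = Record<string, unknown>;

const shorthands: Record<string, { type: string; specKey: string }> = {
  branch: { type: "branch", specKey: "branch" },
  tag: { type: "tag", specKey: "tag" },
  pr_number: { type: "PR", specKey: "number" },
  commit_sha: { type: "commitSha", specKey: "commitSha" },
};

function expandShorthand(inputs: Inputs): Inputs {
  // Constraint: an explicit inputs.build always wins; skip expansion entirely.
  if ("build" in inputs) return inputs;
  for (const [key, m] of Object.entries(shorthands)) {
    if (key in inputs) {
      const { [key]: value, ...rest } = inputs;
      return { ...rest, build: { type: m.type, spec: { [m.specKey]: value } } };
    }
  }
  return inputs;
}
```

For example, { "branch": "main" } expands into a build block with type "branch", while an input that already contains build is passed through untouched.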

Ask the AI DevOps Agent to create a pipeline:

{
  "prompt": "Create a pipeline that builds a Go app with Docker and deploys to Kubernetes",
  "action": "CREATE_PIPELINE"
}

Update a service via natural language:

{
  "prompt": "Add a sidecar container for logging",
  "action": "UPDATE_SERVICE",
  "conversation_id": "prev-conversation-id",
  "context": [{ "type": "yaml", "payload": "<existing service YAML>" }]
}

Pipeline Storage Modes

Harness pipelines can be stored in three ways:

| Mode | Description | When to use |
| ------------------------- | ------------------------------------------------------- | ------------------------------------------------------------------ |
| Inline | Pipeline YAML stored in Harness | Default. Simplest setup, no Git required. |
| Remote (External Git) | Pipeline YAML stored in GitHub, GitLab, Bitbucket, etc. | Teams using Git-backed pipeline-as-code with an external provider. |
| Remote (Harness Code) | Pipeline YAML stored in a Harness Code repository | Teams using Harness's built-in Git hosting. |

Create an inline pipeline (default):

// harness_create
{
  "resource_type": "pipeline",
  "body": {
    "yamlPipeline": "pipeline:\n  name: My Pipeline\n  identifier: my_pipeline\n  stages:\n    - stage:\n        name: Build\n        type: CI\n        spec:\n          execution:\n            steps:\n              - step:\n                  type: Run\n                  name: Echo\n                  spec:\n                    command: echo hello"
  }
}

Create a remote pipeline (External Git — e.g. GitHub):

// harness_create
{
  "resource_type": "pipeline",
  "body": {
    "yamlPipeline": "pipeline:\n  name: Deploy Service\n  identifier: deploy_service\n  stages: []"
  },
  "params": {
    "store_type": "REMOTE",
    "connector_ref": "my_github_connector",
    "repo_name": "my-repo",
    "branch": "main",
    "file_path": ".harness/deploy-service.yaml",
    "commit_msg": "Add deploy pipeline via MCP"
  }
}

Create a remote pipeline (Harness Code — no connector needed):

// harness_create
{
  "resource_type": "pipeline",
  "body": {
    "yamlPipeline": "pipeline:\n  name: Build App\n  identifier: build_app\n  stages: []"
  },
  "params": {
    "store_type": "REMOTE",
    "is_harness_code_repo": true,
    "repo_name": "product-management",
    "branch": "main",
    "file_path": ".harness/build-app.yaml",
    "commit_msg": "Add build pipeline via MCP"
  }
}

Update a remote pipeline:

// harness_update
{
  "resource_type": "pipeline",
  "resource_id": "deploy_service",
  "body": {
    "yamlPipeline": "pipeline:\n  name: Deploy Service\n  identifier: deploy_service\n  stages:\n    - stage:\n        name: Deploy\n        type: Deployment"
  },
  "params": {
    "store_type": "REMOTE",
    "connector_ref": "my_github_connector",
    "repo_name": "my-repo",
    "branch": "main",
    "file_path": ".harness/deploy-service.yaml",
    "commit_msg": "Update deploy pipeline via MCP",
    "last_object_id": "abc123",
    "last_commit_id": "def456"
  }
}

Import a pipeline from an external Git repo:

// harness_execute
{
  "resource_type": "pipeline",
  "action": "import",
  "params": {
    "connector_ref": "my_github_connector",
    "repo_name": "my-repo",
    "branch": "main",
    "file_path": ".harness/existing-pipeline.yaml"
  },
  "body": {
    "pipeline_name": "Existing Pipeline",
    "pipeline_description": "Imported from GitHub"
  }
}

Import a pipeline from a Harness Code repo:

// harness_execute
{
  "resource_type": "pipeline",
  "action": "import",
  "params": {
    "is_harness_code_repo": true,
    "repo_name": "product-management",
    "branch": "main",
    "file_path": ".harness/existing-pipeline.yaml"
  },
  "body": {
    "pipeline_name": "Existing Pipeline"
  }
}

Create a connector:

{
  "resource_type": "connector",
  "body": { "connector": { "name": "My Docker Hub", "identifier": "my_docker", "type": "DockerRegistry" } }
}

Delete a trigger:

{
  "resource_type": "trigger",
  "resource_id": "nightly-trigger",
  "pipeline_id": "my-pipeline"
}

List input sets for a pipeline:

{
  "resource_type": "input_set",
  "pipeline_id": "my-pipeline"
}

Get a specific input set:

{
  "resource_type": "input_set",
  "resource_id": "prod-inputs",
  "pipeline_id": "my-pipeline"
}

Create an input set:

{
  "resource_type": "input_set",
  "pipeline_id": "my-pipeline",
  "body": "inputSet:\n  name: Production Inputs\n  identifier: prod_inputs\n  pipeline:\n    identifier: my-pipeline\n    variables:\n      - name: env\n        type: String\n        value: production"
}

Update an input set:

{
  "resource_type": "input_set",
  "resource_id": "prod_inputs",
  "pipeline_id": "my-pipeline",
  "body": "inputSet:\n  name: Production Inputs\n  identifier: prod_inputs\n  pipeline:\n    identifier: my-pipeline\n    variables:\n      - name: env\n        type: String\n        value: production\n      - name: replicas\n        type: String\n        value: \"3\""
}

Delete an input set:

{
  "resource_type": "input_set",
  "resource_id": "prod_inputs",
  "pipeline_id": "my-pipeline"
}

Resource Types

169 resource types organized across 31 toolsets. Each resource type supports a subset of CRUD operations and optional execute actions.

Platform

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| organization | x | x | x | x | x | |
| project | x | x | x | x | x | |

Pipelines

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| pipeline | x | x | x | x | x | run, retry |
| pipeline_v1 (Alpha) | x | x | x | x | x | run |
| execution | x | x | | | | interrupt |
| trigger | x | x | x | x | x | |
| pipeline_summary | | x | | | | |
| input_set | x | x | x | x | x | |
| runtime_input_template | | x | | | | |
| approval_instance | x | | | | | approve, reject |

Only one pipeline YAML resource type is loaded at startup. By default HARNESS_PIPELINE_VERSION=0 exposes pipeline and hides pipeline_v1; set HARNESS_PIPELINE_VERSION=1 to expose pipeline_v1 and hide pipeline. In HTTP mode, include x-harness-pipeline-version: 0 or 1 on the initialize request to choose the version for that session.
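Execute actions use the same call shape as the examples above. As a minimal sketch, running a pipeline via harness_execute might look like this (the identifier is hypothetical, and additional run parameters such as runtime inputs may apply):

```json
// harness_execute — run a pipeline (hypothetical identifier)
{
  "resource_type": "pipeline",
  "resource_id": "deploy_service",
  "action": "run"
}
```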

AI Agents

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| agent | x | x | x | x | x | |
| agent_run | x | | | | | |

Services

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| service | x | x | x | x | x | |

Environments

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| environment | x | x | x | x | x | move_configs |

Connectors

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| connector | x | x | x | x | x | test_connection |
| connector_catalogue | x | | | | | |
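Following the call patterns shown earlier, a sketch of validating a connector's credentials with the test_connection action (the identifier is hypothetical):

```json
// harness_execute — verify connector credentials (hypothetical identifier)
{
  "resource_type": "connector",
  "resource_id": "my_docker",
  "action": "test_connection"
}
```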

Infrastructure

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| infrastructure | x | x | x | x | x | move_configs |

Secrets

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| secret | x | x | | | | |

Execution Logs

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| execution_log | | x | | | | |

Audit Trail

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| audit_event | x | x | | | | |

Delegates

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| delegate | x | | | | | |
| delegate_token | x | x | x | | x | revoke, get_delegates |

Code Repositories

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| repository | x | x | x | x | | |
| branch | x | x | x | | x | |
| commit | x | x | | | | diff, diff_stats |
| file_content | | x | | | | blame |
| tag | x | | x | | x | |
| repo_rule | x | x | | | | |
| space_rule | x | x | | | | |

Artifact Registries

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| registry | x | x | | | | |
| artifact | x | | | | | |
| artifact_version | x | | | | | |
| artifact_file | x | | | | | |

Templates

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| template | x | x | x | x | x | |

Dashboards

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| dashboard | x | x | | | | |
| dashboard_data | | x | | | | |

Database DevOps

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| database_schema | x | x | | | | |
| database_instance | x | x | | | | |
| database_snapshot_object | x | x | | | | |
| database_llm_authoring_pipeline | | x | | | | |

Internal Developer Portal (IDP)

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| idp_entity | x | x | | | | |
| scorecard | x | x | | | | |
| scorecard_check | x | x | | | | |
| scorecard_stats | | x | | | | |
| scorecard_check_stats | | x | | | | |
| idp_score | x | x | | | | |
| idp_workflow | x | | | | | execute |
| idp_tech_doc | x | | | | | |

Pull Requests

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| pull_request | x | x | x | x | | merge |
| pr_reviewer | x | | x | | | submit_review |
| pr_comment | x | | x | | | |
| pr_check | x | | | | | |
| pr_activity | x | | | | | |
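For illustration, merging a pull request with the merge action could look like this sketch (the repo_name param is an assumption, not a confirmed parameter name):

```json
// harness_execute — merge a pull request (params hypothetical)
{
  "resource_type": "pull_request",
  "resource_id": "42",
  "action": "merge",
  "params": {
    "repo_name": "my-repo"
  }
}
```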

Feature Flags

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| fme_workspace | x | | | | | |
| fme_environment | x | | | | | |
| fme_feature_flag | x | x | x | x | x | kill, restore, archive, unarchive |
| fme_feature_flag_definition | | x | | | | |
| fme_rollout_status | x | | | | | |
| fme_rule_based_segment | x | x | x | | x | |
| fme_rule_based_segment_definition | x | | | x | | enable, disable, change_request |
| feature_flag | x | x | x | | x | toggle |

FME (Split.io) resources: fme_* resources use the Split.io API (api.split.io) and are scoped by workspace ID rather than org/project. Auth uses HARNESS_API_KEY as a Bearer token. fme_feature_flag supports full lifecycle management: create (requires traffic_type_id), list, get, update metadata, delete, and the kill/restore/archive/unarchive execute actions. fme_rule_based_segment provides CRUD for targeting segments, while fme_rule_based_segment_definition manages environment-specific segment rules with enable/disable and change-request approval flows. Use feature_flag for the Harness CF admin API, which supports environment-specific definitions, create, delete, and toggle.
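As a hedged sketch, toggling a Harness CF flag in a specific environment might look like the following (the environment param name and identifier are assumptions):

```json
// harness_execute — toggle a feature flag (params hypothetical)
{
  "resource_type": "feature_flag",
  "resource_id": "new_checkout",
  "action": "toggle",
  "params": {
    "environment": "production"
  }
}
```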

GitOps

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| gitops_agent | x | x | | | | |
| gitops_application | x | x | | | | sync |
| gitops_cluster | x | x | | | | |
| gitops_repository | x | x | | | | |
| gitops_applicationset | x | x | | | | |
| gitops_repo_credential | x | x | | | | |
| gitops_app_event | x | | | | | |
| gitops_pod_log | | x | | | | |
| gitops_managed_resource | x | | | | | |
| gitops_resource_action | x | | | | | |
| gitops_dashboard | | x | | | | |
| gitops_app_resource_tree | | x | | | | |
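Triggering a sync on a GitOps application would plausibly follow the same shape as the other execute actions; in this sketch the agent_id param is an assumption:

```json
// harness_execute — sync a GitOps application (params hypothetical)
{
  "resource_type": "gitops_application",
  "resource_id": "my-app",
  "action": "sync",
  "params": {
    "agent_id": "my_gitops_agent"
  }
}
```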

Chaos Engineering

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| chaos_experiment | x | x | | | | run |
| chaos_probe | x | x | | | | enable, verify |
| chaos_experiment_template | x | | | | | create_from_template |
| chaos_infrastructure | x | | | | | |
| chaos_experiment_variable | x | | | | | |
| chaos_experiment_run | x | x | | | | |
| chaos_loadtest | x | x | x | | x | run, stop |
| chaos_k8s_infrastructure | x | x | | | | check_health |
| chaos_hub | x | x | | | | |
| chaos_fault | x | x | | | | |
| chaos_network_map | x | x | | | | |
| chaos_guard_condition | x | x | | | | |
| chaos_guard_rule | x | x | | | | |
| chaos_recommendation | x | x | | | | |
| chaos_risk | x | x | | | | |
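Kicking off a chaos experiment run could look like this sketch (the identifier is hypothetical):

```json
// harness_execute — run a chaos experiment (hypothetical identifier)
{
  "resource_type": "chaos_experiment",
  "resource_id": "pod_delete_exp",
  "action": "run"
}
```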

Cloud Cost Management (CCM)

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| cost_perspective | x | x | x | x | x | |
| cost_breakdown | x | | | | | |
| cost_timeseries | x | | | | | |
| cost_summary | x | x | | | | |
| cost_recommendation | x | x | | | | update_state, override_savings, create_jira_ticket, create_snow_ticket |
| cost_anomaly | x | | | | | |
| cost_anomaly_summary | | x | | | | |
| cost_category | x | x | | | | |
| cost_account_overview | | x | | | | |
| cost_filter_value | x | | | | | |
| cost_recommendation_stats | | x | | | | |
| cost_recommendation_detail | | x | | | | |
| cost_commitment | | x | | | | |

Software Engineering Insights (SEI)

SEI resources are consolidated for token efficiency. Use metric or aspect params for DORA, team/org-tree details, and AI insights.

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| sei_metric | x | | | | | |
| sei_productivity_metric | | x | | | | |
| sei_dora_metric | | x | | | | Pass metric: deployment_frequency, change_failure_rate, mttr, lead_time, or *_drilldown |
| sei_team | x | x | | | | |
| sei_team_detail | x | | | | | Pass aspect: integrations, developers, integration_filters |
| sei_org_tree | x | x | | | | |
| sei_org_tree_detail | x | x | | | | Pass aspect: efficiency_profile, productivity_profile, business_alignment_profile, integrations, teams |
| sei_business_alignment | x | x | | | | Pass aspect: feature_metrics, feature_summary, drilldown for get |
| sei_ai_usage | x | x | | | | Pass aspect: metrics, breakdown, summary, top_languages |
| sei_ai_adoption | x | x | | | | Pass aspect: metrics, breakdown, summary |
| sei_ai_impact | | x | | | | Pass aspect: pr_velocity, rework |
| sei_ai_raw_metric | x | | | | | |
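Per the table, DORA metrics are selected with a metric param on get. A sketch (the team_ref_id param name mirrors the teamRefId prompt parameter and is an assumption):

```json
// harness_get — DORA deployment frequency (params hypothetical)
{
  "resource_type": "sei_dora_metric",
  "params": {
    "metric": "deployment_frequency",
    "team_ref_id": "platform_team"
  }
}
```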

Software Supply Chain Assurance (SCS)

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| scs_artifact_source | x | | | | | |
| artifact_security | x | x | | | | |
| scs_artifact_component | x | | | | | |
| scs_artifact_remediation | | x | | | | |
| scs_chain_of_custody | | x | | | | |
| scs_compliance_result | x | | | | | |
| code_repo_security | x | x | | | | |
| scs_sbom | | x | | | | |

Security Testing Orchestration (STO)

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| security_issue | x | | | | | |
| security_issue_filter | x | | | | | |
| security_exemption | x | | | | | approve, reject, promote |

Access Control

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| user | x | x | | | | |
| user_group | x | x | x | | x | |
| service_account | x | x | x | | x | |
| role | x | x | x | | x | |
| role_assignment | x | | x | | | |
| resource_group | x | x | x | | x | |
| permission | x | | | | | |

Governance

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| policy | x | x | x | x | x | |
| policy_set | x | x | x | x | x | |
| policy_evaluation | x | x | | | | |

Deployment Freeze

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| freeze_window | x | x | x | x | x | toggle_status |
| global_freeze | | x | | | | manage |
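Toggling a freeze window's status might look like this sketch (the identifier is hypothetical):

```json
// harness_execute — enable/disable a freeze window (hypothetical identifier)
{
  "resource_type": "freeze_window",
  "resource_id": "weekend_freeze",
  "action": "toggle_status"
}
```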

Service Overrides

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| service_override | x | x | x | x | x | |

Settings

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| ------------- | ---- | --- | ------ | ------ | ------ | --------------- |
| setting | x | | | | | |

Visualizations

Inline PNG chart visualizations rendered from Harness data. These are metadata-only resource types with no API operations — they exist so the LLM can discover available chart types via harness_describe. Use include_visual=true on supported tools (harness_diagnose, harness_list, harness_status) to generate charts.

| Resource Type | Description | How to Generate |
| ------------- | ----------- | --------------- |
| visual_timeline | Gantt chart of pipeline stage execution over time | harness_diagnose with visual_type: "timeline" |
| visual_stage_flow | DAG flowchart of pipeline stages and steps | harness_diagnose with visual_type: "flow" |
| visual_health_dashboard | Project health overview with status indicators | harness_status with include_visual: true |
| visual_pie_chart | Donut chart of execution status breakdown | harness_list with visual_type: "pie" |
| visual_bar_chart | Bar chart of execution counts by pipeline | harness_list with visual_type: "bar" |
| visual_timeseries | Daily execution trend over 30 days | harness_list with visual_type: "timeseries" |
| visual_architecture | Pipeline YAML architecture diagram (stages → steps) | harness_diagnose with visual_type: "architecture" |
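Combining the flags above, a chart-producing list call might look like this sketch (whether visual_type alone implies include_visual is not specified here, so both are shown):

```json
// harness_list — execution status breakdown as a donut chart
{
  "resource_type": "execution",
  "include_visual": true,
  "visual_type": "pie"
}
```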

MCP Prompts

DevOps

| Prompt | Description | Parameters |
| ------ | ----------- | ---------- |
| build-deploy-app | End-to-end CI/CD workflow: scan a git repo, generate a CI pipeline (build & push a Docker image), discover or generate K8s manifests, create a CD pipeline, and deploy — with auto-retry on CI failures (up to 5 attempts) and CD failures (up to 3 attempts, with user permission). When retries are exhausted, provides Harness UI deep links to all created resources for manual investigation. | repoUrl (required), imageName (required), projectId (optional), namespace (optional) |
| debug-pipeline-failure | Analyze a failed execution: accepts an execution ID, pipeline ID, or Harness URL. Gets the stage/step breakdown, failure details, delegate info, and failed step logs via harness_diagnose, then provides root-cause analysis and suggested fixes. Automatically follows chained pipeline failures. | executionId (optional), projectId (optional) |
| create-pipeline | Generate new pipeline YAML from natural-language requirements, reviewing existing resources for context | description (required), projectId (optional) |
| create-agent | Interactively build a Harness AI agent — check existing agents, gather requirements, generate the agent YAML spec using the agent-pipeline schema, confirm with the user, then create or update via harness_create/harness_update | agent_name (required), task_description (required), org_id (optional), project_id (optional) |
| onboard-service | Walk through onboarding a new service with environments and a deployment pipeline | serviceName (required), projectId (optional) |
| dora-metrics-review | Review DORA metrics (deployment frequency, change failure rate, MTTR, lead time) with Elite/High/Medium/Low classification and improvement recommendations | teamRefId (optional), dateStart