
@dwmkerr/mock-llm

v0.1.28

Published

Simple OpenAI compatible Mock API server. Useful for deterministic testing of AI applications.


mock-llm



Introduction

Creating integration tests for AI applications that rely on LLMs can be challenging due to costs, the complexity of response structures, and the non-deterministic nature of LLMs. Mock LLM runs as a simple 'echo' server that responds to a user message.

The server can be configured to provide different responses based on the input, which is useful for testing error scenarios, different payloads, and so on. It currently mocks the OpenAI Chat Completions API but could be extended to mock the list-models API, the Responses API, A2A APIs, and more in the future.

Quickstart

Install and run:

npm install -g mock-llm
mock-llm

Mock-LLM runs on port 6556 (the dial-pad code for 'MLLM', chosen to avoid conflicts with common ports).

Or use Docker:

docker run -p 6556:6556 ghcr.io/dwmkerr/mock-llm

Or use Helm for Kubernetes deployments.

Test with curl. The default rule for incoming requests is to reply with the user's exact message:

curl -X POST http://localhost:6556/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Response:

{
  "id": "chatcmpl-1234567890",
  "object": "chat.completion",
  "model": "gpt-4",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "Hello"
    },
    "finish_reason": "stop"
  }]
}

Mock LLM also has basic support for the A2A (Agent-to-Agent) protocol for testing agent messages, tasks, and asynchronous operations.

Configuration

Responses are configured using a yaml file loaded from mock-llm.yaml in the current working directory. Rules are evaluated in order - last match wins.

The default configuration echoes the last user message and provides a /v1/models endpoint:

rules:
  # Default echo rule
  - path: "/v1/chat/completions"
    method: "POST"
    # The JMESPath expression '@' always matches.
    match: "@"
    response:
      status: 200
      content: |
        {
          "id": "chatcmpl-{{timestamp}}",
          "object": "chat.completion",
          "model": "{{jmes request body.model}}",
          "choices": [{
            "message": {
              "role": "assistant",
              "content": "{{jmes request body.messages[-1].content}}"
            },
            "finish_reason": "stop"
          }]
        }

  # List models endpoint
  - path: "/v1/models"
    method: "GET"
    response:
      status: 200
      content: |
        {
          "object": "list",
          "data": [{"id": "gpt-5.2", "object": "model", "owned_by": "openai"}]
        }

Each rule supports these fields:

  • path - Regular expression to match the request path
  • method - HTTP method to match (GET, POST, etc). If not specified, matches all methods
  • match - JMESPath expression to match request content (optional)
  • sequence - Match by request order (optional)
  • response - The response to return (status and content)
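To illustrate these semantics, here is a minimal sketch of rule selection with "last match wins". This is not the actual implementation: the real server evaluates `match` as a JMESPath expression, which is replaced here by a plain Python callable for simplicity.

```python
import re

def select_rule(rules, path, method, body):
    """Return the last rule whose path regex, method, and match predicate
    all accept the request ('last match wins')."""
    winner = None
    for rule in rules:
        if not re.search(rule["path"], path):
            continue
        # A missing method matches all HTTP methods.
        if rule.get("method") and rule["method"] != method:
            continue
        # In this sketch, 'match' is a plain predicate instead of a JMESPath query.
        if rule.get("match") and not rule["match"](body):
            continue
        winner = rule
    return winner

rules = [
    {"path": "/v1/chat/completions", "method": "POST",
     "match": None, "response": "echo"},
    {"path": "/v1/chat/completions", "method": "POST",
     "match": lambda b: "hello" in b["messages"][-1]["content"],
     "response": "greeting"},
]

body = {"messages": [{"role": "user", "content": "hello there"}]}
print(select_rule(rules, "/v1/chat/completions", "POST", body)["response"])
# "greeting" (the later, more specific rule wins)
```

Because later rules win, you can keep a broad catch-all first and append more specific rules after it.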

Customising Responses

JMESPath is a query language for JSON used to match incoming requests and extract values for responses.

This configuration returns a fixed message when the input contains hello, simulates a 401 error when it contains error-401, and mocks the /v1/models endpoint:

rules:
  # Fixed message when input contains 'hello':
  - path: "/v1/chat/completions"
    match: "contains(body.messages[-1].content, 'hello')"
    response:
      status: 200
      content: |
        {
          "choices": [{
            "message": {
              "role": "assistant",
              "content": "Hi there! How can I help you today?"
            },
            "finish_reason": "stop"
          }]
        }
  # Realistic OpenAI 401 if the input contains `error-401`:
  - path: "/v1/chat/completions"
    match: "contains(body.messages[-1].content, 'error-401')"
    response:
      status: 401
      content: |
        {
          "error": {
            "message": "Incorrect API key provided.",
            "type": "invalid_request_error",
            "param": null,
            "code": "invalid_api_key"
          }
        }

  # List models endpoint
  - path: "/v1/models"
    # The JMESPath expression '@' always matches.
    match: "@"
    response:
      status: 200
      # Return a set of models.
      content: |
        {
          "data": [
            {"id": "gpt-4", "object": "model"},
            {"id": "gpt-3.5-turbo", "object": "model"}
          ]
        }

Loading Configuration Files

The --config parameter loads configuration from a non-default location:

# Use the '--config' parameter directly...
mock-llm --config /tmp/myconfig.yaml

# ...mount a config file from the working directory for mock-llm in docker.
docker run -v $(pwd)/mock-llm.yaml:/app/mock-llm.yaml -p 6556:6556 ghcr.io/dwmkerr/mock-llm

Updating Configuration

Configuration can be updated at runtime via the /config endpoint:

  • GET - returns the current config (JSON by default, YAML with Accept: application/x-yaml)
  • POST - replaces the config
  • PATCH - merges updates into the config
  • DELETE - resets to the default config

Both POST and PATCH accept JSON (Content-Type: application/json) or YAML (Content-Type: application/x-yaml).
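The merge behavior of PATCH can be pictured as a recursive dictionary merge. This is a sketch under the assumption that nested objects merge key-by-key while other values are replaced; consult the actual endpoint for edge cases such as list handling.

```python
def deep_merge(base, patch):
    """Recursively merge `patch` into `base`: nested dicts merge
    key-by-key, everything else (including lists) is replaced."""
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

config = {"streaming": {"chunkSize": 50, "chunkIntervalMs": 50}, "rules": []}
patched = deep_merge(config, {"streaming": {"chunkSize": 10}})
print(patched["streaming"])
# {'chunkSize': 10, 'chunkIntervalMs': 50}
```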

Health & Readiness Checks

curl http://localhost:6556/health
# {"status":"healthy"}

curl http://localhost:6556/ready
# {"status":"ready"}

Template Variables

Available in response content templates:

  • {{jmes request <query>}} - Query the request object using JMESPath:
    • request.body - Request body (e.g., body.model, body.messages[-1].content)
    • request.headers - HTTP headers, lowercase (e.g., headers.authorization)
    • request.method - HTTP method (e.g., POST)
    • request.path - Request path (e.g., /v1/chat/completions)
    • request.query - Query parameters (e.g., query.apikey)
  • {{timestamp}} - Current time in milliseconds

Objects and arrays are automatically JSON-stringified. Primitives are returned as-is.

"model": "{{jmes request body.model}}"              // "gpt-4"
"message": {{jmes request body.messages[0]}}        // {"role":"system","content":"..."}
"auth": "{{jmes request headers.authorization}}"    // "Bearer sk-..."
"apikey": "{{jmes request query.apikey}}"           // "test-123"
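A tiny renderer conveys how this substitution might work. This sketch supports only simple dotted paths with bracket indexing rather than full JMESPath, and is not the server's actual template engine:

```python
import json
import re
import time

def render(template, request):
    """Substitute {{timestamp}} and {{jmes request <query>}} placeholders.
    Supports only dotted paths with [n] indexing, not full JMESPath."""
    def lookup(obj, query):
        for part in re.split(r"\.|\[|\]\.?", query):
            if not part:
                continue
            obj = obj[int(part)] if part.lstrip("-").isdigit() else obj[part]
        return obj

    def substitute(placeholder):
        expr = placeholder.group(1).strip()
        if expr == "timestamp":
            return str(int(time.time() * 1000))
        value = lookup(request, expr.removeprefix("jmes request "))
        # Objects and arrays are JSON-stringified; primitives pass through.
        return value if isinstance(value, str) else json.dumps(value)

    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

request = {"body": {"model": "gpt-4",
                    "messages": [{"role": "user", "content": "Hello"}]}}
print(render('"model": "{{jmes request body.model}}"', request))
# "model": "gpt-4"
print(render("{{jmes request body.messages[-1].content}}", request))
# Hello
```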

Sequential Responses

For testing multi-turn interactions like tool calling, use sequence to return different responses based on request order:

rules:
  # First request: trigger tool call
  - path: "/v1/chat/completions"
    sequence: 0
    response:
      status: 200
      content: '{"choices":[{"message":{"tool_calls":[{"function":{"name":"get_weather"}}]},"finish_reason":"tool_calls"}]}'

  # Second request: return final answer
  - path: "/v1/chat/completions"
    sequence: 1
    response:
      status: 200
      content: '{"choices":[{"message":{"content":"The weather is 72°F"},"finish_reason":"stop"}]}'

Rules can use match, sequence, both, or neither:

| match | sequence | Behavior |
|-------|----------|----------|
| No | No | Matches all requests (catch-all) |
| Yes | No | Content-based matching only |
| No | Yes | Order-based matching only |
| Yes | Yes | Must satisfy both conditions |

Sequence counters are tracked per path and reset via DELETE /config. The counter only increments when the winning rule has a sequence property, so fallback rules (without sequence) can handle liveness probes without consuming sequence numbers.
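The counter behavior above can be sketched as follows. This is an illustrative model, not the server's code: per-path counters advance only when the winning rule carries a sequence property.

```python
def select_with_sequence(rules, path, counters):
    """Pick the last rule whose 'sequence' (if present) equals the current
    per-path counter; increment the counter only when the winner is sequenced."""
    count = counters.get(path, 0)
    winner = None
    for rule in rules:
        if rule["path"] != path:
            continue
        if "sequence" in rule and rule["sequence"] != count:
            continue
        winner = rule
    if winner is not None and "sequence" in winner:
        counters[path] = count + 1
    return winner

rules = [
    {"path": "/v1/chat/completions", "sequence": 0, "response": "tool_call"},
    {"path": "/v1/chat/completions", "sequence": 1, "response": "final_answer"},
]

counters = {}
first = select_with_sequence(rules, "/v1/chat/completions", counters)
second = select_with_sequence(rules, "/v1/chat/completions", counters)
print(first["response"], second["response"])
# tool_call final_answer
```

A fallback rule without sequence would become the winner once the counter passes the last sequenced rule, and would leave the counter untouched.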

Streaming Configuration

Mock-LLM supports streaming responses when clients send stream: true in their requests. Streaming behavior is configured globally:

streaming:
  chunkSize: 50         # characters per chunk (default: 50)
  chunkIntervalMs: 50   # milliseconds between chunks (default: 50)

rules:
  - path: "/v1/chat/completions"
    match: "@"
    # etc...

When clients request streaming, Mock-LLM returns Server-Sent Events (SSE) with Content-Type: text/event-stream:

const stream = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }],
  stream: true  // Enables streaming
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

This enables deterministic testing of streaming protocol responses. Error conditions can also be tested: error responses are sent as per the OpenAI Streaming Specification.
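The chunking described above can be pictured with a short sketch. This is not the server's implementation; it simply splits a completion into SSE events of chunkSize characters, formatted loosely like OpenAI streaming deltas and terminated with [DONE]:

```python
import json

def sse_chunks(content, chunk_size=50):
    """Split a completion into SSE events of `chunk_size` characters,
    shaped loosely like OpenAI streaming deltas, ending with [DONE]."""
    events = []
    for i in range(0, len(content), chunk_size):
        delta = {"choices": [{"delta": {"content": content[i:i + chunk_size]}}]}
        events.append(f"data: {json.dumps(delta)}\n\n")
    events.append("data: [DONE]\n\n")
    return events

for event in sse_chunks("Hello from the mock server", chunk_size=10):
    print(event, end="")
```

With chunkIntervalMs, the server would additionally pause between events; the sketch omits timing.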

MCP (Model Context Protocol) Mocking

Mock-LLM exposes MCP servers and tools that support testing the MCP protocol; details are in the MCP Documentation.

A2A (Agent to Agent Protocol) Mocking

Mock-LLM exposes A2A servers and tools that support testing the A2A protocol; details are in the A2A Documentation.

Deploying to Kubernetes with Helm

# Install from OCI registry
helm install mock-llm oci://ghcr.io/dwmkerr/charts/mock-llm --version 0.1.8

# Install with Ark resources enabled
# Requires Ark to be installed: https://github.com/mckinsey/agents-at-scale-ark
helm install mock-llm oci://ghcr.io/dwmkerr/charts/mock-llm --version 0.1.8 \
  --set ark.model.enabled=true \
  --set ark.a2a.enabled=true \
  --set ark.mcp.enabled=true

# Verify deployment
kubectl get deployment mock-llm
kubectl get service mock-llm

# Port forward and test
kubectl port-forward svc/mock-llm 6556:6556 &
curl -X POST http://localhost:6556/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'

Custom configuration via values.yaml:


# Optional additional mock-llm configuration.
config:
  rules:
    - path: "/v1/chat/completions"
      match: "contains(messages[-1].content, 'hello')"
      response:
        status: 200
        content: |
          {
            "choices": [{
              "message": {
                "role": "assistant",
                "content": "Hi there!"
              },
              "finish_reason": "stop"
            }]
          }

# Or use existing ConfigMap (must contain key 'mock-llm.yaml')
# existingConfigMap: "my-custom-config"

See the full Helm documentation for advanced configuration, Ark integration, and more.

Examples

Any OpenAI-compatible SDK can be used with Mock LLM. For Node.js:

const OpenAI = require('openai');

const client = new OpenAI({
  apiKey: 'mock-key',
  baseURL: 'http://localhost:6556/v1'
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }]
});

console.log(response.choices[0].message.content);
// "Hello"

And for Python:

from openai import OpenAI

client = OpenAI(
    api_key='mock-key',
    base_url='http://localhost:6556/v1'
)

response = client.chat.completions.create(
    model='gpt-4',
    messages=[{'role': 'user', 'content': 'Hello'}]
)

print(response.choices[0].message.content)
# "Hello"

Developer Guide

Install dependencies and start with live-reload:

npm install
npm run dev

Lint or run tests:

npm run lint
npm run test

Test and inspect the MCP Server running locally:

npm run local:inspect

Samples

Each sample below is a minimal script that shows:

  1. How to configure mock-llm for a specific scenario
  2. How to run the scenario
  3. How to validate the results

These can be a reference for your own tests. Each sample is also run as part of the project's build pipeline.

| Sample | Description |
|--------|-------------|
| 01-echo-message.sh | Assert a response from an LLM. |
| 02-error-401.sh | Verify error handling scenario. |
| 03-system-message-in-conversation.sh | Test system message handling in conversations. |
| 04-headers-validation.sh | Test custom HTTP header validation. |
| 05-a2a-countdown-agent.sh | Test A2A blocking task operations. |
| 06-a2a-echo-agent.sh | Test A2A message handling. |
| 07-a2a-message-context.sh | Test A2A message context and history. |
| 08-mcp-echo-tool.sh | Test MCP tool invocation. |
| 09-token-usage.sh | Test token usage tracking. |
| 10-mcp-inspect-headers.sh | Test MCP header inspection. |
| 11-sequential-tool-calling.sh | Test sequential responses for tool-calling flows. |
| 12-list-models.sh | Test GET /v1/models endpoint. |

Each entry below links to a real-world deterministic integration test in Ark that uses mock-llm features. These tests can also serve as a reference for your own.

| Test | Description |
|------|-------------|
| agent-default-model | Basic LLM query and response. |
| model-custom-headers | Passing custom headers to models. |
| query-parameter-ref | Dynamic prompt resolution from ConfigMaps and Secrets. |
| query-token-usage | Token usage tracking and reporting. |
| a2a-agent-discovery | A2A agent discovery and server readiness. |
| a2a-message-query | A2A message handling. |
| a2a-blocking-task-completed | A2A blocking task successful completion. |
| a2a-blocking-task-failed | A2A blocking task error handling. |
| mcp-discovery | MCP server and tool discovery. |
| mcp-header-propagation (PR #311) | MCP header propagation from Agents and Queries. |
| agent-tools | Sequential responses for tool-calling agents. |

Contributors

Thanks to (emoji key):

This project follows the all-contributors specification. Contributions of any kind welcome.