
OpenAI Mock API

openai-mock-api v0.4.0 · 77 downloads

A mock OpenAI API server for testing LLM applications. This tool allows you to define predictable responses to specific message patterns, making it easier to test your AI-powered applications without the variability of real LLM responses.

Features

  • 🚀 NPX runnable - Use directly with npx openai-mock-api
  • 📝 YAML configuration - Define responses with simple conversation flows
  • 🎯 Multiple matching strategies - Exact, fuzzy, regex, contains, and any message matching
  • 🔄 Conversation flows - Define complete conversation patterns with automatic partial matching
  • 🛠️ Tool call support - Full support for OpenAI function/tool calls
  • 🔒 API key validation - Secure your mock API with custom keys
  • 📊 OpenAI-compatible - Drop-in replacement for OpenAI API endpoints
  • 🌊 Streaming support - Full SSE streaming compatibility
  • 🧮 Automatic token calculation - Real token counts using tiktoken library
  • 🪵 Flexible logging - Log to file or stdout with configurable verbosity
  • 🔷 TypeScript first - Written in TypeScript with full type safety

Installation

npm install -g openai-mock-api

Or use directly with npx:

npx openai-mock-api --config config.yaml

Usage

Basic Usage

  1. Create a configuration file (config.yaml):
apiKey: 'your-test-api-key'
port: 3000
responses:
  - id: 'greeting'
    messages:
      - role: 'user'
        content: 'Hello, how are you?'
      - role: 'assistant'
        content: "Hello! I'm doing well, thank you for asking."
  2. Start the mock server:
npx openai-mock-api --config config.yaml --port 3000

Or use stdin for configuration:

cat config.yaml | npx openai-mock-api
# or
npx openai-mock-api < config.yaml
# or explicitly with -
npx openai-mock-api --config -
  3. Use with your OpenAI client:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-test-api-key',
  baseURL: 'http://localhost:3000/v1',
});

const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello, how are you?' }],
});

CLI Options

npx openai-mock-api [options]

Options:
  -c, --config <path>      Path to YAML configuration file (required)
  -p, --port <number>      Port to run the server on (default: 3000)
  -l, --log-file <path>    Path to log file (defaults to stdout)
  -v, --verbose            Enable verbose logging
  -h, --help               Display help for command

Configuration

The configuration format is conversation-first, where each response is defined as a complete conversation flow. The last assistant message in the flow is used as the response.

Matcher Types

Exact Match (Default)

Messages are matched exactly as specified. This is the default behavior when no matcher field is provided:

responses:
  - id: 'greeting'
    messages:
      - role: 'user'
        content: 'Hello, how are you?'
      - role: 'assistant'
        content: "Hello! I'm doing well, thank you for asking."
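Conceptually, exact matching is a strict role-and-content comparison: a message matches only when both fields are identical. A minimal sketch of the semantics (function and type names here are illustrative, not the package's internals):

```typescript
interface Message {
  role: string;
  content?: string;
}

// Strict comparison: both role and content must be identical.
function exactMatch(incoming: Message, expected: Message): boolean {
  return incoming.role === expected.role && incoming.content === expected.content;
}

// Matches only the exact string, character for character.
exactMatch(
  { role: 'user', content: 'Hello, how are you?' },
  { role: 'user', content: 'Hello, how are you?' },
); // true
```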

Fuzzy Match

Matches messages with similarity scoring:

responses:
  - id: 'help-request'
    messages:
      - role: 'user'
        content: 'I need help with something'
        matcher: 'fuzzy'
        threshold: 0.8 # 0.0-1.0, higher = more similar required
      - role: 'assistant'
        content: "I'd be happy to help!"
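To build an intuition for the threshold, here is one way a 0.0-1.0 similarity score can be computed: normalized edit distance. The package's actual scoring algorithm may differ; this sketch only illustrates what "higher = more similar required" means:

```typescript
// Classic single-row Levenshtein edit distance.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => i);
  for (let j = 1; j <= b.length; j++) {
    let prev = dp[0];
    dp[0] = j;
    for (let i = 1; i <= a.length; i++) {
      const tmp = dp[i];
      dp[i] = Math.min(dp[i] + 1, dp[i - 1] + 1, prev + (a[i - 1] === b[j - 1] ? 0 : 1));
      prev = tmp;
    }
  }
  return dp[a.length];
}

// Map distance to a 0.0-1.0 similarity; a message matches when
// similarity(incoming, configured) >= threshold.
function similarity(a: string, b: string): number {
  const maxLen = Math.max(a.length, b.length) || 1;
  return 1 - levenshtein(a.toLowerCase(), b.toLowerCase()) / maxLen;
}
```

Under this scheme, identical strings score 1.0, and a one-character typo in a five-character word scores 0.8, so it would pass a 0.8 threshold but fail 0.9.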

Regex Match

Matches messages using regular expressions:

responses:
  - id: 'code-request'
    messages:
      - role: 'user'
        content: '.*code.*python.*' # Matches messages with "code" followed by "python"
        matcher: 'regex'
      - role: 'assistant'
        content: "Here's some Python code for you!"
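Regex matching presumably compiles the configured pattern and tests it against the message content. One subtlety worth noting: a pattern like `.*code.*python.*` is order-sensitive. A sketch (function name illustrative):

```typescript
// Test the message content against the configured pattern.
function regexMatch(content: string, pattern: string): boolean {
  return new RegExp(pattern).test(content);
}

regexMatch('Write some code in python for me', '.*code.*python.*'); // true
// false: the pattern requires "code" to appear before "python"
regexMatch('python code please', '.*code.*python.*');
```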

Contains Match

Matches messages that contain the specified substring (case-insensitive):

responses:
  - id: 'weather-info'
    messages:
      - role: 'user'
        content: 'weather' # Matches any message containing "weather"
        matcher: 'contains'
      - role: 'assistant'
        content: 'The weather is nice today!'
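The semantics here amount to a case-insensitive substring check, roughly as follows (illustrative sketch; the actual implementation may normalize text differently):

```typescript
// Case-insensitive substring match on the message content.
function containsMatch(content: string, needle: string): boolean {
  return content.toLowerCase().includes(needle.toLowerCase());
}

containsMatch("What's the Weather like in Oslo?", 'weather'); // true
```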

Any Match

Matches any message of the specified role, enabling flexible conversation flows:

responses:
  - id: 'flexible-flow'
    messages:
      - role: 'user'
        matcher: 'any' # No content field needed
      - role: 'assistant'
        content: 'Thanks for your message!'

Conversation Flows and Partial Matching

All responses support partial conversation matching. If the incoming conversation matches the beginning of a conversation flow, it will return the final assistant response:

responses:
  - id: 'conversation-flow'
    messages:
      - role: 'user'
        content: 'Start conversation'
      - role: 'assistant'
        content: 'Hello! How can I help you?'
      - role: 'user'
        content: 'Tell me about the weather'
      - role: 'assistant'
        content: 'The weather is sunny today!'

This will match:

  • Just ["Start conversation"] → Returns: "The weather is sunny today!"
  • ["Start conversation", "Hello! How can I help you?"] → Returns: "The weather is sunny today!"
  • Full 3-message conversation → Returns: "The weather is sunny today!"
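The partial-matching rule above can be sketched as a prefix check: the incoming conversation must match the leading messages of a configured flow, and the flow's final assistant message is returned. This is illustrative pseudologic, not the package's code, and it only shows exact-content prefixes:

```typescript
interface Message {
  role: string;
  content?: string;
}

// Returns the flow's final assistant message if the incoming
// conversation is a prefix of the flow; otherwise null.
function matchFlow(incoming: Message[], flow: Message[]): Message | null {
  if (incoming.length > flow.length) return null;
  for (let i = 0; i < incoming.length; i++) {
    if (incoming[i].role !== flow[i].role || incoming[i].content !== flow[i].content) {
      return null;
    }
  }
  const last = flow[flow.length - 1];
  return last.role === 'assistant' ? last : null;
}

const flow: Message[] = [
  { role: 'user', content: 'Start conversation' },
  { role: 'assistant', content: 'Hello! How can I help you?' },
  { role: 'user', content: 'Tell me about the weather' },
  { role: 'assistant', content: 'The weather is sunny today!' },
];

// A one-message prefix already resolves to the flow's final answer.
matchFlow([{ role: 'user', content: 'Start conversation' }], flow);
```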

Tool Calls

The configuration format supports OpenAI tool calls in conversation flows:

responses:
  - id: 'weather-tool-flow'
    messages:
      - role: 'user'
        content: 'weather'
        matcher: 'contains'
      - role: 'assistant'
        tool_calls:
          - id: 'call_abc123'
            type: 'function'
            function:
              name: 'get_weather'
              arguments: '{"location": "San Francisco"}'
      - role: 'tool'
        matcher: 'any'
        tool_call_id: 'call_abc123'
      - role: 'assistant'
        content: "It's sunny in San Francisco!"
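On the client side, a response produced by a flow like this carries standard OpenAI-style tool_calls, so the usual parsing applies: `function.arguments` is a JSON-encoded string, not an object. A sketch of unpacking it (type and function names are illustrative):

```typescript
interface ToolCall {
  id: string;
  type: 'function';
  function: { name: string; arguments: string };
}

// arguments arrives as a JSON string, exactly as in the real API.
function parseToolCall(call: ToolCall): { name: string; args: Record<string, unknown> } {
  return { name: call.function.name, args: JSON.parse(call.function.arguments) };
}

const parsed = parseToolCall({
  id: 'call_abc123',
  type: 'function',
  function: { name: 'get_weather', arguments: '{"location": "San Francisco"}' },
});
```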

Full Configuration Example

apiKey: 'test-api-key-12345'
port: 3000
responses:
  - id: 'greeting'
    messages:
      - role: 'user'
        content: 'Hello, how are you?'
      - role: 'assistant'
        content: "Hello! I'm doing well, thank you for asking."

  - id: 'help-request'
    messages:
      - role: 'user'
        content: 'I need help'
        matcher: 'fuzzy'
        threshold: 0.7
      - role: 'assistant'
        content: "I'd be happy to help! What do you need assistance with?"

  - id: 'weather-info'
    messages:
      - role: 'user'
        content: 'weather'
        matcher: 'contains'
      - role: 'assistant'
        content: 'The weather is nice today!'

  - id: 'complex-conversation'
    messages:
      - role: 'system'
        matcher: 'any'
      - role: 'user'
        content: '.*help.*'
        matcher: 'regex'
      - role: 'assistant'
        content: 'How can I assist you today?'
      - role: 'user'
        matcher: 'any'
      - role: 'assistant'
        content: 'Thanks for using our service!'

Token Calculation

The mock server automatically calculates token counts for all responses using OpenAI's tiktoken library. Token usage is included in every response:

{
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 12,
    "total_tokens": 27
  }
}
  • Prompt tokens: Calculated from the input messages
  • Completion tokens: Calculated from the response content
  • Total tokens: Sum of prompt and completion tokens

For simplicity in the mock environment, all calculations use the cl100k_base tokenizer regardless of the specified model.
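In tests, the one invariant you can always assert is the usage arithmetic (field names follow the standard OpenAI usage shape):

```typescript
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// total_tokens is always the sum of the two parts.
function usageIsConsistent(u: Usage): boolean {
  return u.total_tokens === u.prompt_tokens + u.completion_tokens;
}

usageIsConsistent({ prompt_tokens: 15, completion_tokens: 12, total_tokens: 27 }); // true
```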

Supported Endpoints

  • GET /v1/models - List available models
  • POST /v1/chat/completions - Chat completions (with streaming support)
  • GET /health - Health check endpoint

Streaming Support

The mock server supports Server-Sent Events (SSE) streaming just like the real OpenAI API:

const stream = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Error Handling

The mock server returns OpenAI-compatible error responses:

  • 401 Unauthorized - Invalid or missing API key
  • 400 Bad Request - Invalid request format or no matching response
  • 404 Not Found - Unsupported endpoint
  • 500 Internal Server Error - Server errors
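For reference, an OpenAI-compatible error body generally takes the following shape (the values here are illustrative; the mock's exact messages and codes may differ):

```json
{
  "error": {
    "message": "Incorrect API key provided",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  }
}
```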

Programmatic Usage

You can also use the mock server programmatically in your tests:

import { createMockServer } from 'openai-mock-api';

const mockServer = await createMockServer({
  config: {
    apiKey: 'test-key',
    responses: [
      {
        id: 'test',
        messages: [
          { role: 'user', content: 'Hello' },
          { role: 'assistant', content: 'Hi there!' },
        ],
      },
    ],
  },
  port: 3001,
});

await mockServer.start();
// Your tests here
await mockServer.stop();

See the programmatic usage guide for more details.

Development

Setup

git clone <repository>
cd openai-mock-api
npm install

Build

npm run build

Test

npm test

Development Mode

npm run dev -- --config example-config.yaml

Documentation

For comprehensive documentation, visit our documentation site.

The documentation includes:

  • Getting Started: Quick setup and installation guides
  • Configuration: Detailed matcher types and response configuration
  • Guides: Testing patterns, streaming, error handling, and integration examples
  • API Reference: CLI options and configuration reference

Local Documentation Development

To work on the documentation locally:

# Install docs dependencies
npm run docs:install

# Start development server
npm run docs:dev

# Build documentation
npm run docs:build

# Preview built docs
npm run docs:preview

License

MIT License - see LICENSE file for details.