
llm-emulator v0.5.1

Enterprise-grade LLM mock server for local and CI: scenarios, faults, latency, contracts, VCR. Supports standalone server and Express middleware.

LLM Emulator

LLM Emulator is an enterprise-grade, deterministic, fully offline emulator for LLM providers such as OpenAI, Gemini, and Ollama.
It enables full-stack automated testing (CI, integration tests, E2E flows, multi-agent orchestration) and local development without hitting real LLM APIs, without API keys, and without nondeterministic model drift.

The star of the system is Scenario Graphs: branching, stateful, multi-step scripted interactions that emulate how your LLM-powered agents and workflows behave in production.

Other features include:

  • Linear scenarios
  • Case-based prompt → response mocking
  • HTTP downstream API mocks (for your REST dependencies)
  • Fault injection
  • Delays
  • JSON-schema contract validation
  • VCR request recording
  • Express middleware integration

📌 Table of Contents

  1. Overview
  2. Installation
  3. Quick Start
  4. Scenario Graphs
  5. Linear Scenarios
  6. Case-Based Prompt Mocks
  7. HTTP Mocking
  8. Provider Compatibility
  9. Fault Injection
  10. Delays
  11. Contract Validation
  12. VCR Recording
  13. Express Middleware
  14. CLI Reference
  15. Full DSL & Config Documentation
  16. License

Overview

Applications today rely on LLM outputs for:

  • multi-step conversations
  • agent tool calls
  • chain-of-thought workflows
  • structured output generation
  • code generation
  • orchestration logic
  • multi-agent routing

This makes local testing, CI, and E2E automation incredibly fragile unless you have:

  • deterministic outputs
  • reproducible flows
  • fast execution
  • offline capability
  • stateful multi-turn interactions

LLM Emulator provides all of this.


Installation

npm install llm-emulator --save-dev

Or use npx:

npx llm-emulator ./mocks/config.mjs

Quick Start

config.mjs

import { define, scenario, caseWhen, httpGet } from "llm-emulator";

export default define({
  server: { port: 11434 },

  useScenario: "checkout-graph",

  scenarios: [
    scenario("checkout-graph", {
      start: "collect-name",
      steps: {
        "collect-name": {
          branches: [
            {
              when: "my name is {{name}}",
              if: ({ name }) => name.toLowerCase().includes("declined"),
              reply: "Your application is declined.",
              next: "end-declined",
            },
            {
              when: "my name is {{name}}",
              if: ({ name }) => name.toLowerCase().includes("approved"),
              reply: "Your application is approved!",
              next: "end-approved",
            },
            {
              when: "my name is {{name}}",
              reply: ({ vars }) =>
                `Thanks ${vars.name}, what's your address?`,
              next: "collect-address",
            },
          ],
        },

        "collect-address": {
          branches: [
            {
              when: "my address is {{address}}",
              reply: ({ vars }) =>
                `We will contact you at ${vars.address}.`,
              next: "end-pending",
            },
          ],
        },

        "end-declined": { final: true },
        "end-approved": { final: true },
        "end-pending": { final: true },
      },
    }),
  ],

  cases: [
    caseWhen("explain {{topic}} simply", ({ topic }) =>
      `Simple explanation of ${topic}.`
    ),
  ],

  httpMocks: [
    httpGet("/api/user/:id", ({ params }) => ({
      id: params.id,
      name: "Mock User",
    })),
  ],

  defaults: {
    fallback: "No mock available.",
  },
});

Run it:

npx llm-emulator ./config.mjs --scenario checkout-graph
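Once the server is up on port 11434 (the port configured above), any OpenAI-compatible client can drive the scenario. Here is a minimal sketch using plain fetch, assuming the emulator returns the standard choices[].message shape on /v1/chat/completions:

// walk the checkout-graph scenario over the OpenAI-compatible endpoint
const res = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4o", // model name is arbitrary; no real API is called
    messages: [{ role: "user", content: "my name is Ada" }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
// expected, per the scenario above: "Thanks Ada, what's your address?"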

Scenario Graphs

Scenario Graphs are the primary way to emulate multi-step LLM-driven workflows.

A scenario consists of:

  • start: the initial state ID
  • steps: a mapping of state IDs to state definitions
  • each state contains one or more branches
  • each branch defines:
    • a pattern (when)
    • optional guard (if)
    • reply (reply)
    • next state (next)
    • optional delay (delayMs)
    • optional tool result (result)
    • optional type (kind: "chat" or "tools")

Example

scenario("checkout-graph", {
  start: "collect-name",
  steps: {
    "collect-name": {
      branches: [
        {
          when: "my name is {{name}}",
          if: ({ name }) => name.toLowerCase().includes("declined"),
          reply: "Declined.",
          next: "end-declined",
        },
        {
          when: "my name is {{name}}",
          if: ({ name }) => name.toLowerCase().includes("approved"),
          reply: "Approved!",
          next: "end-approved",
        },
        {
          when: "my name is {{name}}",
          reply: ({ vars }) => `Hello ${vars.name}. Your address?`,
          next: "collect-address",
        },
      ],
    },

    "collect-address": {
      branches: [
        {
          when: "my address is {{address}}",
          reply: ({ vars }) =>
            `Thanks. We'll mail you at ${vars.address}.`,
          next: "end-pending",
        },
      ],
    },

    "end-declined": { final: true },
    "end-approved": { final: true },
    "end-pending": { final: true },
  },
});

What Scenario Graphs Support

  • Multi-turn conversation emulation
  • Conditional routing
  • Stateful flows
  • Dynamic replies
  • Tool-style responses
  • Terminal states
  • Deterministic behavior
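Tool-style responses from the list above are scripted with kind: "tools" plus a result. The DSL reference below only says result carries the tool-style payload, so the shape used here is an assumption, not the documented format:

{
  when: "look up order {{orderId}}",
  kind: "tools",
  // result shape is an assumption; the docs only say it holds the tool-style payload
  result: ({ vars }) => ({
    name: "lookupOrder",
    arguments: { id: vars.orderId },
  }),
  next: "collect-address",
},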

Linear Scenarios

For simple ordered scripts:

scenario("simple-linear", {
  steps: [
    { kind: "chat", reply: "Welcome" },
    { kind: "chat", reply: "Next" }
  ]
});

These run top-to-bottom.
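Assuming the scenario above is registered in your config, you select it the same way as a graph scenario:

npx llm-emulator ./config.mjs --scenario simple-linear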


Case-Based Prompt Mocks

Direct LLM prompt → response mocking:

caseWhen("summarize {{topic}}", ({ topic }) =>
  `Summary of ${topic}`
);

Pattern matching supports:

  • Template variables {{var}}
  • Looser lexical matching
  • Optional fuzzy matching fallback

HTTP Mocking

Mock downstream REST calls:

httpGet("/api/user/:id", ({ params }) => ({
  id: params.id,
  name: "Mock User",
}));

httpPost("/api/checkout", ({ body }) => ({
  status: "ok",
  orderId: "mock123",
}));

Works with:

  • GET
  • POST
  • PUT
  • DELETE
  • Path params (:id)
  • Query params
  • JSON body parsing

Provider Compatibility

LLM Emulator exposes mock endpoints that mirror the real providers' routes.

OpenAI-Compatible

POST /v1/chat/completions
POST /chat/completions
POST /v1/responses
POST /responses
POST /v1/embeddings

Embeddings return deterministic fake vectors.
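A minimal embeddings call, assuming the standard OpenAI request/response shape ({ model, input } in, data[].embedding out):

const res = await fetch("http://localhost:11434/v1/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "text-embedding-3-small", input: "hello world" }),
});

const { data } = await res.json();
console.log(data[0].embedding.length); // deterministic fake vector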

Gemini-Compatible

POST /v1/models/:model:generateContent
POST /v1alpha/models/:model:generateContent
POST /v1beta/models/:model:generateContent

Ollama-Compatible

POST /api/generate

Fault Injection

Faults can be attached to any:

  • branch
  • case
  • HTTP mock

Examples:

fault: { type: "timeout" }
fault: { type: "http", status: 503 }
fault: { type: "malformed-json" }
fault: { type: "stream-glitch" }

Delays

Simulate real-world latency.

Global:

server: { delayMs: 200 }

Per-scenario-state:

delayMs: 500

Per-HTTP-route:

httpGet("/x", { delayMs: 300 })

Contract Validation

Optional JSON-schema validation using Ajv.

Modes:

contracts: {
  mode: "strict" | "warn" | "off"
}

Validates:

  • OpenAI request/response
  • Gemini request/response
  • Ollama request/response
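A sketch of enabling warn-mode validation; placing contracts at the top level of define() is an assumption based on the other top-level fields, not something documented above:

export default define({
  server: { port: 11434 },
  contracts: { mode: "warn" }, // assumed top-level placement
  // ...scenarios, cases, and httpMocks as in the Quick Start
});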

VCR Recording

Capture all incoming requests:

npx llm-emulator ./config.mjs --record ./recordings

Produces .jsonl files containing:

  • timestamp
  • provider
  • request JSON
  • response JSON

Perfect for test reproducibility and debugging.
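Each line is one request/response pair. The key names below are illustrative, not guaranteed; only the four fields listed above are documented:

{"timestamp":"2025-01-01T12:00:00.000Z","provider":"openai","request":{"messages":[{"role":"user","content":"my name is Ada"}]},"response":{"choices":[{"message":{"content":"Thanks Ada, what's your address?"}}]}}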


Express Middleware

Mount the emulator in an existing server:

import express from "express";
import { createLlmEmulatorRouter } from "llm-emulator";

const app = express();

// load the same config the CLI uses and mount it as a router
const emulator = await createLlmEmulatorRouter("./config.mjs");
app.use("/llm-emulator", emulator.express());

app.listen(3000);

Now you can point your OpenAI or Gemini application at this route.
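For example, with the openai Node SDK (using that SDK, and the port from app.listen above, are assumptions about your setup):

import OpenAI from "openai";

// no real key is needed; the base URL points at the mounted emulator
const client = new OpenAI({
  apiKey: "not-needed",
  baseURL: "http://localhost:3000/llm-emulator/v1",
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "my name is Ada" }],
});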


CLI Reference

npx llm-emulator ./config.mjs [options]

--scenario <id>   Select the active scenario
--record <dir>    Write VCR recordings (.jsonl) to <dir>
--port <num>      Override the server port
--verbose         Enable verbose logging

Full DSL & Config Documentation

Top-Level define(config)

| Field | Description |
|-------|-------------|
| server.port | Port to run the mock provider on |
| server.delayMs | Global delay (ms) |
| useScenario | Active scenario ID |
| scenarios[] | Scenario definitions |
| cases[] | Case mocks |
| httpMocks[] | HTTP mocks |
| defaults.fallback | Default response text |


Scenario Graph DSL

scenario(id, {
  start: "state",
  steps: {
    "state": {
      branches: [ ... ]
    },
    "end": { final: true }
  }
})

Branch Fields

| Field | Description |
|-------|-------------|
| when | Pattern with template vars |
| if(vars, ctx) | Optional guard |
| reply | String or function |
| kind | "chat" or "tools" |
| result | For tool-style replies |
| next | Next state ID |
| delayMs | Per-branch delay (ms) |
| fault | Fault injection config |


Linear Scenario DSL

scenario(id, {
  steps: [
    { kind, reply, result, delayMs, fault }
  ]
})

Case DSL

caseWhen(pattern, handler)

HTTP Mocks

httpGet(path, handler)
httpPost(path, handler)
httpPut(path, handler)
httpDelete(path, handler)

Handler receives:

{ params, query, body, headers }

Supports per-route:

  • delays
  • faults
  • dynamic replies
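The per-route knobs above presumably attach as an options object alongside the handler; exactly how they combine is an assumption here, so treat this as a sketch:

httpPost(
  "/api/checkout",
  ({ body }) => ({ status: "ok", orderId: "mock123" }),
  { delayMs: 300, fault: { type: "http", status: 503 } } // assumed options argument
);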

License

MIT