@flink-app/fake-llm-adapter

A fake LLM adapter for the Flink AI framework - perfect for demos, development, and testing without requiring real LLM API connections.

Overview

The FakeLLMAdapter implements the LLMAdapter interface and generates contextually relevant responses with intelligent tool calling decisions. Unlike simple mocks, it:

  • Intelligently decides when to call tools based on user messages and available tools
  • Generates realistic tool inputs by analyzing tool schemas and extracting values from user messages
  • Provides contextually relevant text responses using customizable templates
  • Simulates realistic token usage for testing cost/performance scenarios
  • Supports reproducible randomness via seeding for deterministic testing
  • Offers configurable behavior modes, from text-only to tool-heavy

Installation

pnpm add @flink-app/fake-llm-adapter
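
If you use npm or yarn instead of pnpm, the equivalent commands work as well:

npm install @flink-app/fake-llm-adapter
# or
yarn add @flink-app/fake-llm-adapter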

Quick Start

import { FlinkApp } from "@flink-app/flink";
import { FakeLLMAdapter } from "@flink-app/fake-llm-adapter";

const app = new FlinkApp({
  ai: {
    llms: {
      fake: new FakeLLMAdapter()
    }
  }
});

Creating an Agent with Fake Adapter

import { createAgent } from "@flink-app/flink/ai";
import { FakeLLMAdapter } from "@flink-app/fake-llm-adapter";

const weatherAgent = createAgent({
  llm: new FakeLLMAdapter({
    mode: "tool-heavy", // Prefer calling tools
    preferredTools: ["get_weather"], // Prioritize weather tool
  }),
  instructions: "You are a helpful weather assistant.",
  tools: [
    {
      name: "get_weather",
      description: "Get weather for a location",
      input_schema: {
        type: "object",
        properties: {
          city: { type: "string" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] }
        },
        required: ["city"]
      }
    }
  ]
});

const result = await weatherAgent.execute("What's the weather in Stockholm?");
// Likely calls get_weather tool with { city: "Stockholm", unit: "celsius" }

Configuration

export interface FakeLLMAdapterConfig {
  // Response behavior
  mode?: AdapterMode; // "balanced" | "tool-heavy" | "text-only" | "random"

  // Tool calling configuration
  toolCallProbability?: number; // 0.0 to 1.0 (default: 0.3)
  maxToolCallsPerTurn?: number; // default: 3

  // Randomness control
  seed?: number; // For reproducible responses

  // Timing simulation
  responseDelay?: number; // milliseconds (default: 0)

  // Response customization
  templates?: ResponseTemplate[];
  personality?: PersonalityPreset; // "professional" | "friendly" | "technical" | "concise" | "verbose"

  // Token simulation
  tokenMultiplier?: number; // Adjust token counts (default: 1.0)

  // Advanced options
  debugMode?: boolean; // Log decision-making process
  preferredTools?: string[]; // Prioritize these tool names
}
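
For reference, here is a minimal sketch that combines several of the options above into one adapter; the values are arbitrary and chosen only to illustrate the shape of the config:

import { FakeLLMAdapter } from "@flink-app/fake-llm-adapter";

// All fields are optional; these values are illustrative, not recommendations.
const adapter = new FakeLLMAdapter({
  mode: "balanced",           // mix of text responses and tool calls
  toolCallProbability: 0.5,   // raise the base chance of a tool call
  maxToolCallsPerTurn: 2,
  seed: 1234,                 // reproducible behavior across runs
  responseDelay: 250,         // simulate 250ms of latency
  personality: "friendly",
  tokenMultiplier: 1.5,       // inflate simulated token counts
  debugMode: true,
});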

Modes Explained

balanced (Default)

Mix of text responses and tool calls. Base 30% probability of calling tools, adjusted based on context.

Use when: General development and testing

new FakeLLMAdapter({ mode: "balanced" })

tool-heavy

Strongly prefers tool calls. Base 70% probability of calling tools.

Use when: Testing tool calling flows, demos showcasing tools

new FakeLLMAdapter({ mode: "tool-heavy" })

text-only

Never calls tools. Always returns text responses.

Use when: Testing pure conversation flows, UI without tool interactions

new FakeLLMAdapter({ mode: "text-only" })

random

Completely unpredictable behavior.

Use when: Stress testing agent error handling

new FakeLLMAdapter({ mode: "random" })

Context-Aware Tool Calling

The adapter intelligently decides when to call tools based on:

  1. Action verbs in user message ("get", "find", "search", "calculate", etc.) → +20% probability
  2. Tool name matches message content → +25% probability
  3. Temperature setting < 0.5 → +10% probability (more deterministic)
  4. Preferred tools configuration → Prioritized in selection

Example: Intelligent Tool Selection

const adapter = new FakeLLMAdapter({
  mode: "balanced",
  preferredTools: ["search_database"]
});

// "Find user John" → Likely calls tool (has "find" action verb)
// "What's the weather?" → May or may not call tool (depends on available tools)
// "Hello!" → Unlikely to call tool (no action verbs)

Custom Response Templates

Define domain-specific responses:

const adapter = new FakeLLMAdapter({
  templates: [
    {
      pattern: /order|purchase|buy/i,
      responses: [
        "I'll help you with that order.",
        "Let me process your purchase.",
        "I can assist with buying that."
      ]
    },
    {
      pattern: /cancel|refund/i,
      responses: [
        "I'll help you with the cancellation.",
        "Let me process that refund request."
      ],
      toolTrigger: "process_refund" // Prefer this tool when pattern matches
    }
  ]
});

Reproducible Testing with Seeds

Use seeds for deterministic output in tests:

import { FakeLLMAdapter } from "@flink-app/fake-llm-adapter";

describe("Agent tests", () => {
  it("should handle weather query consistently", async () => {
    const adapter1 = new FakeLLMAdapter({ seed: 42 });
    const adapter2 = new FakeLLMAdapter({ seed: 42 });

    const agent1 = createAgent({ llm: adapter1, tools: weatherTools });
    const agent2 = createAgent({ llm: adapter2, tools: weatherTools });

    const result1 = await agent1.execute("Weather in Paris?");
    const result2 = await agent2.execute("Weather in Paris?");

    // Same seed = same behavior
    expect(result1).toEqual(result2);
  });
});

Simulating Network Delays

const adapter = new FakeLLMAdapter({
  responseDelay: 500 // 500ms delay per response
});

// Useful for:
// - Testing loading states in UI
// - Simulating production latency
// - Demo scenarios with realistic pacing

Token Usage Simulation

const adapter = new FakeLLMAdapter({
  tokenMultiplier: 2.0 // Double all token counts
});

// Useful for:
// - Testing token limit handling
// - Cost estimation in development
// - Performance testing with large contexts

Personality Presets

// Professional (default)
new FakeLLMAdapter({ personality: "professional" })
// → "I can assist with that request."

// Friendly
new FakeLLMAdapter({ personality: "friendly" })
// → "Sure thing! Let me help you out!"

// Technical
new FakeLLMAdapter({ personality: "technical" })
// → "Processing query. Executing tool call."

// Concise
new FakeLLMAdapter({ personality: "concise" })
// → "Processing."

// Verbose
new FakeLLMAdapter({ personality: "verbose" })
// → "I understand your request and I'll be happy to assist you with that. Let me process..."

Debug Mode

Enable logging to understand decision-making:

const adapter = new FakeLLMAdapter({
  debugMode: true
});

// Logs:
// [FakeLLMAdapter] Processing message: What's the weather?
// [FakeLLMAdapter] Generated tool calls: [{ name: "get_weather", ... }]

Tool Input Generation

The adapter generates realistic tool inputs by:

  1. Extracting from message: Searches for cities, numbers, dates in user message
  2. Schema-based defaults: Generates appropriate values based on property types
  3. Smart field matching: Recognizes common field names (email, city, name, date)

// Tool schema:
{
  name: "create_user",
  input_schema: {
    properties: {
      email: { type: "string" },
      city: { type: "string" },
      age: { type: "number" }
    }
  }
}

// Message: "Create user in Stockholm"
// Generated input:
{
  email: "[email protected]",  // Smart default for email field
  city: "Stockholm",           // Extracted from message
  age: 42                      // Random number
}

Comparison: fake-llm-adapter vs mockLLMAdapter

| Feature | FakeLLMAdapter | mockLLMAdapter (test-utils) |
|---------|----------------|-----------------------------|
| Purpose | Development & demos | Unit testing only |
| Tool calling | Intelligent, context-aware | Always returns predefined responses |
| Responses | Dynamic, template-based | Static, hard-coded |
| Configuration | Highly configurable modes | Minimal config |
| Reproducibility | Seeded randomness | Fully deterministic |
| Use case | Running app without APIs | Mocking in tests |

Rule of thumb:

  • Use FakeLLMAdapter for running your app, demos, and development
  • Use mockLLMAdapter for unit tests where you need exact, predictable outputs

Use Cases

1. Development Without API Keys

// Develop AI features without spending money or setting up API keys
const app = new FlinkApp({
  ai: {
    llms: {
      default: new FakeLLMAdapter()
    }
  }
});

2. Demos and Presentations

// Reliable demos that don't depend on external APIs
const demoAdapter = new FakeLLMAdapter({
  mode: "tool-heavy",
  responseDelay: 800, // Realistic pacing
  seed: 12345 // Same demo every time
});

3. Testing Tool Calling Logic

// Test your agent's tool handling without real LLM
const testAdapter = new FakeLLMAdapter({
  toolCallProbability: 1.0, // Always call tools
  maxToolCallsPerTurn: 5
});

4. UI Development

// Test loading states and UI behavior
const uiTestAdapter = new FakeLLMAdapter({
  responseDelay: 1500,
  personality: "verbose" // Longer responses to test text wrapping
});

5. Integration Tests

// Consistent behavior across test runs
describe("Agent integration", () => {
  const adapter = new FakeLLMAdapter({ seed: 999 });

  it("should handle multi-turn conversation", async () => {
    // Test logic here - same results every run
  });
});

Advanced Example: Custom E-commerce Agent

const ecommerceAdapter = new FakeLLMAdapter({
  mode: "tool-heavy",
  preferredTools: ["search_products", "get_order_status"],
  templates: [
    {
      pattern: /track|status|where.*order/i,
      responses: ["Let me check your order status."],
      toolTrigger: "get_order_status"
    },
    {
      pattern: /find|search|looking for/i,
      responses: ["I'll search our catalog for you."],
      toolTrigger: "search_products"
    },
    {
      pattern: /return|refund/i,
      responses: ["I'll help you process that return."],
      toolTrigger: "process_return"
    }
  ],
  personality: "friendly",
  debugMode: true
});

const agent = createAgent({
  llm: ecommerceAdapter,
  instructions: "You are a helpful e-commerce assistant.",
  tools: [searchProducts, getOrderStatus, processReturn]
});

Limitations (Phase 1)

  • No streaming support: The stream() method throws an error; use execute() instead.
  • Simple tool input generation: Uses heuristics and may not handle complex nested schemas
  • Template matching: Basic regex patterns, not semantic understanding

Future phases may add streaming and more sophisticated input generation.

TypeScript Types

All types are fully exported:

import {
  FakeLLMAdapter,
  FakeLLMAdapterConfig,
  AdapterMode,
  PersonalityPreset,
  ResponseTemplate
} from "@flink-app/fake-llm-adapter";

License

MIT

Contributing

Issues and PRs welcome at https://github.com/FrostDigital/flink-framework