stack-replayer

Turn cryptic error logs into reproducible bugs, replay scripts, and fix suggestions — with optional AI.

Features

  • 🔍 Parse error logs - Extract structured data from stack traces
  • 🎯 Generate replay scripts - Create Node.js scripts that attempt to reproduce bugs
  • 🤖 Optional AI enhancement - Use OpenAI, Ollama, or any LLM for smarter analysis
  • 🔬 Sandboxed execution - Safely run replay scripts in isolated environments
  • 💡 Fix suggestions - Get actionable recommendations to resolve issues
  • 📝 Patch generation - AI can suggest code patches and tests (when enabled)
  • 🚀 Zero config - Works immediately without any setup or API keys
  • 📦 CLI & Library - Use as a command-line tool or import into your code

Installation

npm install stack-replayer
# or
pnpm add stack-replayer
# or
yarn add stack-replayer

Quick Start

1. Basic Usage (No AI, Zero Config)

The library works immediately without any configuration or API keys:

import { replayBug } from "stack-replayer";

try {
  // Your code that might throw
  const user = null;
  console.log(user.name); // TypeError!
} catch (err) {
  const errorLog = err instanceof Error ? err.stack ?? String(err) : String(err);
  
  const result = await replayBug(errorLog);
  
  console.log(result.explanation);
  console.log(result.reproductionSteps);
  console.log(result.suggestedFix);
}

Output:

TypeError occurred: "Cannot read properties of null (reading 'name')"

This error was thrown at /home/user/app.js:5 in function "<anonymous>".

The error likely indicates a runtime issue in your code. Review the stack trace and the code at the specified location for potential bugs.

2. Enable AI with OpenAI (2 env vars, no code changes)

Set two environment variables and your analysis gets dramatically smarter:

export AI_BUG_REPLAYER_PROVIDER=openai
export OPENAI_API_KEY=sk-...

Then run the same code as above. The library automatically detects the configuration and uses OpenAI for enhanced analysis (see the sketch after this list), including:

  • Root cause explanation
  • Detailed reproduction steps
  • Better replay scripts
  • Suggested fixes and patches
  • Generated test cases
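
With AI enabled, the extra fields appear on the same result object; nothing else in the calling code changes. A minimal sketch of reading them (field names come from the BugReplayResult interface under API Reference):

import { replayBug } from "stack-replayer";

const errorLog = new Error("example").stack ?? "Error: example";
const result = await replayBug(errorLog);

console.log(result.explanation);    // now an AI-written root-cause analysis
console.log(result.suggestedPatch); // populated only when AI is enabled
console.log(result.suggestedTest);  // populated only when AI is enabled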

3. Enable AI with Ollama (Local, Free)

Run a local LLM with Ollama (completely free, no API keys):

# Start the Ollama server, then pull a model
ollama serve &
ollama pull llama3

# Configure environment
export AI_BUG_REPLAYER_PROVIDER=ollama
# Optional: export OLLAMA_MODEL=llama3
# Optional: export OLLAMA_BASE_URL=http://localhost:11434

Now your same code uses local AI with no external API calls or costs.
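
If the analysis unexpectedly falls back to the generic output, first confirm the server is reachable. Ollama's standard API (part of Ollama itself, not this library) exposes a model-listing endpoint you can probe:

# Should list llama3 if the pull succeeded
curl http://localhost:11434/api/tags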

CLI Usage

Install globally:

npm install -g stack-replayer

Read from a file:

stack-replayer --log error.log

Read from stdin:

cat error.log | stack-replayer

Execute the replay script:

stack-replayer --log error.log --run

Specify project root:

stack-replayer --log error.log --root /path/to/project

JSON output:

stack-replayer --log error.log --json > result.json
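
If the JSON output mirrors the BugReplayResult shape documented below (a reasonable assumption, but verify against your version), it composes well with shell tooling such as jq:

stack-replayer --log error.log --json | jq -r '.suggestedFix'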

API Reference

replayBug(errorLog, options?)

Convenience function for one-line bug replay.

Parameters:

  • errorLog: string - The error log or stack trace
  • options?: object
    • llmClient?: LlmClient - Custom LLM client (overrides auto-detection)
    • dryRun?: boolean - If true, don't execute the replay script (default: false)
    • projectRoot?: string - Project root directory
    • metadata?: object - Additional context (nodeVersion, os, etc.)

Returns: Promise<BugReplayResult>

interface BugReplayResult {
  explanation: string;
  reproductionSteps: string[];
  replayScript: string;
  suggestedFix?: string;
  suggestedPatch?: string;
  suggestedTest?: string;
  sandboxResult?: {
    success: boolean;
    reproduced: boolean;
    stdout: string;
    stderr: string;
    exitCode: number | null;
  };
}
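
The optional sandboxResult field makes it possible to branch on whether the replay actually reproduced the error. A minimal sketch using only the fields defined above:

import { replayBug } from "stack-replayer";

const errorLog = new Error("example failure").stack ?? "Error: example failure";
const result = await replayBug(errorLog);

if (result.sandboxResult?.reproduced) {
  // The generated script triggered the same error inside the sandbox
  console.log("Reproduced. Replay script:\n" + result.replayScript);
} else {
  // Fall back to the static analysis
  console.log(result.explanation);
  console.log(result.sandboxResult?.stderr ?? "(sandbox was not run)");
}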

AiBugReplayer Class

For more control, use the class directly:

import { AiBugReplayer, OpenAiLlmClient } from "stack-replayer";

const replayer = new AiBugReplayer({
  llmClient: new OpenAiLlmClient({
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini"
  }),
  dryRun: false
});

const result = await replayer.replay({
  errorLog: errorStack,
  projectRoot: "/path/to/project",
  metadata: {
    nodeVersion: process.version,
    os: process.platform
  }
});

LLM Providers

Built-in Providers

OpenAI

import { OpenAiLlmClient } from "stack-replayer";

const client = new OpenAiLlmClient({
  apiKey: "sk-...",
  model: "gpt-4o-mini", // optional
  baseURL: "https://api.openai.com/v1" // optional
});

Ollama (Local)

import { OllamaLlmClient } from "stack-replayer";

const client = new OllamaLlmClient({
  baseUrl: "http://localhost:11434",
  model: "llama3"
});

Generic HTTP (OpenAI-compatible)

import { HttpLlmClient } from "stack-replayer";

const client = new HttpLlmClient({
  baseUrl: "https://your-api.com/v1/chat/completions",
  apiKey: "your-key",
  model: "your-model"
});

Custom LLM Client

Implement the LlmClient interface:

import { LlmClient, ParsedErrorLog, BugReplayInput } from "stack-replayer";

class MyCustomLlmClient implements LlmClient {
  async generateReplay(parsed: ParsedErrorLog, input: BugReplayInput) {
    // Your custom logic here
    return {
      explanation: "...",
      reproductionSteps: ["..."],
      replayScript: "...",
      suggestedFix: "..."
    };
  }
}
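
Then wire it in through the llmClient option documented above, which overrides provider auto-detection (a sketch, continuing the class above):

import { replayBug } from "stack-replayer";

const errorLog = new Error("boom").stack ?? "Error: boom";

const result = await replayBug(errorLog, {
  llmClient: new MyCustomLlmClient(),
  dryRun: true // skip sandbox execution while iterating on the client
});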

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| AI_BUG_REPLAYER_PROVIDER | LLM provider: openai or ollama | None (no-AI mode) |
| OPENAI_API_KEY | OpenAI API key | - |
| OPENAI_MODEL | OpenAI model to use | gpt-4o-mini |
| OPENAI_BASE_URL | Custom OpenAI endpoint | https://api.openai.com/v1 |
| OLLAMA_BASE_URL | Ollama server URL | http://localhost:11434 |
| OLLAMA_MODEL | Ollama model to use | llama3 |

Examples

Catch and analyze in production

import { replayBug } from "stack-replayer";

process.on('uncaughtException', async (err) => {
  console.error('Uncaught exception:', err);
  
  const analysis = await replayBug(err.stack ?? String(err), {
    projectRoot: process.cwd(),
    metadata: {
      nodeVersion: process.version,
      os: process.platform,
      timestamp: new Date().toISOString()
    }
  });
  
  // Send to your logging service
  await sendToLoggingService({
    error: err,
    analysis: analysis.explanation,
    suggestedFix: analysis.suggestedFix
  });
});

Analyze test failures

import { replayBug } from "stack-replayer";

afterEach(async function() {
  if (this.currentTest?.state === 'failed') {
    const err = this.currentTest.err;
    if (err?.stack) {
      const analysis = await replayBug(err.stack);
      console.log('\n🔍 AI Analysis:');
      console.log(analysis.explanation);
      console.log('\n💡 Suggested Fix:');
      console.log(analysis.suggestedFix);
    }
  }
});

Dry run (skip sandbox execution)

const result = await replayBug(errorLog, { dryRun: true });
// Only get analysis and script, don't execute
console.log(result.replayScript);

How It Works

No-AI Mode (Default)

  1. Parse the error log using regex patterns (a rough sketch follows this list)
  2. Extract error type, message, and stack frames
  3. Identify user code vs. node internals
  4. Generate a basic replay script heuristically
  5. Execute in sandbox (unless dry-run)
  6. Provide generic fix suggestions based on error type
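
The real parser is internal to the library, but as a rough sketch of what steps 1–3 involve, a V8-style stack frame can be decomposed with a regex along these lines (illustrative only, not stack-replayer's actual implementation):

// Matches V8 frames such as "    at Object.<anonymous> (/home/user/app.js:5:13)"
const FRAME_RE = /^\s*at\s+(?:(.+?)\s+\()?(.+?):(\d+):(\d+)\)?\s*$/;

const line = "    at Object.<anonymous> (/home/user/app.js:5:13)";
const match = line.match(FRAME_RE);

if (match) {
  const [, fn, file, lineNo, col] = match;
  // Step 3: frames from node internals or node_modules are not user code
  const isUserCode = !file.startsWith("node:") && !file.includes("node_modules");
  console.log({
    fn: fn ?? "<anonymous>",
    file,
    line: Number(lineNo),
    column: Number(col),
    isUserCode
  });
}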

AI Mode (Optional)

  1. Parse the error log (same as above)
  2. Send it to the LLM with a structured prompt (see the sketch after this list)
  3. Receive enhanced analysis:
    • Root cause explanation
    • Step-by-step reproduction
    • Smart replay script
    • Code patches
    • Test cases
  4. Execute in sandbox (unless dry-run)
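
What "structured prompt" means is up to each client, but a custom LlmClient (see Custom LLM Client above) might assemble one along these lines. The field names on the parsed argument here are hypothetical, not the library's actual ParsedErrorLog shape:

// Hypothetical shape for illustration; inspect ParsedErrorLog for the real fields
interface ParsedLike {
  errorType: string;
  message: string;
  frames: { file: string; line: number }[];
}

function buildPrompt(parsed: ParsedLike): string {
  const top = parsed.frames[0];
  return [
    `Error type: ${parsed.errorType}`,
    `Message: ${parsed.message}`,
    `Top frame: ${top ? `${top.file}:${top.line}` : "(none)"}`,
    "",
    "Explain the root cause, give numbered reproduction steps,",
    "and write a minimal Node.js script that reproduces the error."
  ].join("\n");
}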

Why stack-replayer?

  • Works immediately - No setup, no config, no API keys required
  • Progressive enhancement - Add AI when you want better results
  • Privacy-friendly - Use Ollama for completely local processing
  • Framework agnostic - Works with any Node.js code
  • Production ready - TypeScript, tests, proper error handling
  • Extensible - Bring your own LLM provider

Requirements

  • Node.js 18+
  • TypeScript 5+ (for development)

License

MIT

Contributing

Contributions welcome! Please read our contributing guidelines and submit PRs.

Roadmap

  • [ ] Support for browser error logs
  • [ ] Python error log parsing
  • [ ] More LLM providers (Anthropic, Gemini, etc.)
  • [ ] Better heuristic replay generation
  • [ ] Automatic fix application
  • [ ] Integration with issue trackers

Credits

Built with ❤️ by the open source community.