
🍣 Sushify

"Turn your prompt salad into sushi" πŸ₯— β†’ 🍣

Sushify is a development tool that runs on your local machine, wraps your AI app, and automatically captures and analyzes every LLM API call in your application - catching issues that can cause unpredictable behaviors and bad outputs.

The best part? No code changes needed. No complex configuration files (actually no configuration files at all!). It supports any app (including dockerized) that can be proxied, regardless of programming language or LLM provider.

You can set it up in ~5 minutes.

Watch a quick demo

[Screenshot: Sushify dashboard]

[Screenshot: Sushify analysis]

Who? When? Why? How?

  • Who is Sushify for? Developers building AI applications
  • When? During development. Sushify runs on your localhost.
  • Why? Because we pass tons of free text to LLMs, usually assembled from fragments, sometimes involving conditional logic. System prompts, tool descriptions, output schemas - sometimes thousands of tokens that evolve over time. There is no compiler, no linter. It's all free text. And we haven't even started talking about context management bugs. How many times has your app started misbehaving, and only after a lot of digging did you realize that you (or someone else) messed up some of the instructions? No more!
  • How? Sushify runs a proxy that wraps your app and intercepts network requests and responses to LLMs. Each exchange is sent to OpenAI for analysis using your API key. The analysis results are displayed on a dashboard that runs locally on your machine, highlighting issues and suggesting fixes. (A small sketch of the mechanism follows this list.)
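
To make that concrete, here's a minimal, illustrative sketch: a process launched under Sushify can see the standard proxy variables it sets (the variable names are the ones listed in the Language Support section below; the script itself is hypothetical).

```python
# check_proxy.py - illustrative only. Launch it under Sushify, e.g.
#   sushify start "python check_proxy.py"
# and the proxy/certificate variables Sushify sets should be populated.
import os

for var in ("HTTP_PROXY", "HTTPS_PROXY", "REQUESTS_CA_BUNDLE"):
    print(f"{var} = {os.environ.get(var)}")
```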

Current status: Sushify is in Alpha. It is very new and seeking feedback and contributions.

✨ Features

  • 🌍 Language agnostic, zero code changes - Treats your app like a black box
  • 🐳 Docker support - Seamlessly wraps containerized applications
  • πŸ“Š Real-time deep/cheap analysis - Get sushiness scores and detailed reports instantly
  • 🎯 Smart recommendations - Turn soup-like prompts into sushi
  • πŸ“± Simple dashboard - Inspect all LLM interactions with ease

πŸš€ Quick Start

Prerequisites

  • Node.js version ^20.19 || ^22.12 || >=24
  • Python 3.8+ with pip (for HTTP proxy functionality)
    • Note: Command python3 must be available (even if python points to Python 2.7)
  • OpenAI API Key (optional, for prompt analysis)

    Note: Sushify sends your prompts to OpenAI for analysis (request body, URL, and response); everything else runs locally on your machine.

Install Sushify

npm install -g sushify

One-time Setup

Run the setup command to configure dependencies:

sushify setup

This will automatically:

  • βœ… Check for Python 3.8+ and Node.js requirements
  • βœ… Install mitmproxy (HTTP proxy for traffic capture)
  • βœ… Install HTTPS certificate (one-time setup)
  • βœ… Provide step-by-step guidance if any issues

Note: You will be prompted for your password once to install the HTTPS certificate; after that, Sushify runs without any further authentication. Why the certificate? Sushify needs to intercept HTTPS traffic to analyze AI API calls (OpenAI, Anthropic, etc.). The certificate is only active while Sushify runs and can be removed anytime with sushify cleanup.

Configure Your Environment

Set Your OpenAI API Key (Required for Analysis)

The whole point of Sushify is to analyze your prompts for quality issues. Set your OpenAI API key to enable this:

export OPENAI_API_KEY=sk-your-key-here

Or add it to your shell profile for persistence:

echo 'export OPENAI_API_KEY=sk-your-key-here' >> ~/.zshrc
source ~/.zshrc

Analysis Mode (Optional)

Sushify supports two analysis modes:

  • Deep mode (default): Runs multiple LLM calls per analysis for thorough prompt quality checking
  • Cheap mode: Uses a single LLM call per analysis for faster, more cost-effective analysis

# Switch to cheap mode for faster analysis
export ANALYSIS_MODE=cheap

# Switch back to deep mode (or unset the variable)
export ANALYSIS_MODE=deep
# OR
unset ANALYSIS_MODE

The current analysis mode is displayed in the dashboard. You can switch modes anytime by changing the environment variable and restarting Sushify.

Custom LLM Provider (Optional)

By default, Sushify monitors OpenAI, Anthropic, and Google AI APIs. To monitor a custom provider:

export LLM_PROVIDER_BASE_URL=your-custom-ai-gateway.com

Examples:

# Custom gateway
export LLM_PROVIDER_BASE_URL=api.mygateway.com

# Self-hosted with port
export LLM_PROVIDER_BASE_URL=https://my-llm-server.internal:8080

# Azure OpenAI
export LLM_PROVIDER_BASE_URL=https://your-resource.openai.azure.com

Start Using Sushify

```bash
# Wrap any command that makes LLM API calls
sushify start "python main.py"
sushify start "flask run"
sushify start "npm start"
sushify start "npm run dev"

# Docker applications
sushify start --docker=backend "docker compose up"
```

That's it! Open your browser to the dashboard URL shown in the terminal to see your LLM interactions being captured and analyzed in real-time.

✨ What You Get

With Sushify configured, you'll get:

  • 🍣 Quality scoring for each prompt
  • πŸ” Problem detection in instructions
  • πŸ“Š Real-time feedback in the dashboard
  • 🎯 Smart recommendations to improve your prompts

🌐 Language Support

Note: I am testing more SDKs, popular frameworks, and languages, and am looking for input from the community.

Sushify works by setting standard proxy environment variables (HTTP_PROXY, HTTPS_PROXY) and certificate paths. This works automatically with libraries that respect these environment variables:

βœ… Confirmed Working (tested)

  • Python - requests, urllib3, httpx (via REQUESTS_CA_BUNDLE)
  • Node.js v24+ - Native fetch() (via NODE_EXTRA_CA_CERTS + NODE_USE_ENV_PROXY)
  • Docker containers - Python and Node.js apps in containers
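
For example, a Python app using requests needs no Sushify-specific code at all. A minimal sketch (the script name, model, and prompt are illustrative):

```python
# demo.py - no proxy code anywhere: requests honors the HTTPS_PROXY and
# REQUESTS_CA_BUNDLE variables that Sushify sets, so running
#   sushify start "python demo.py"
# captures and analyzes this call transparently.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```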

🟑 Should Work (proxy environment variable support)

  • Go - net/http package (respects HTTP_PROXY/HTTPS_PROXY)
  • Rust - reqwest with default features (proxy support enabled)
  • Java - JVM with -Djava.net.useSystemProxies=true
  • C#/.NET - HttpClient (respects proxy environment variables)
  • PHP - cURL and Guzzle (via CURL_CA_BUNDLE)
  • Bun / Deno - Should work like Node.js but not tested

⚠️ May Need Code Changes

  • Frameworks that ignore proxy env vars - Custom HTTP clients, some SDKs
  • gRPC clients - Often need explicit proxy configuration
  • WebSocket libraries - May not respect HTTP proxy settings

In most cases, though, no code changes are needed: Sushify configures standard proxy environment variables that most HTTP clients respect automatically.

Node.js < v24: See docs/OLD-NODE-INSTRUCTIONS.md for workarounds.

Google SDK: Google's client uses gRPC by default. For LLM API capture, add transport="rest" to your client configuration.
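
With the google-generativeai package, for instance, that might look like the sketch below (hedged: the package choice and model name are assumptions; other Google clients expose a similar transport option).

```python
# sketch: force REST transport so the SDK talks plain HTTPS, which
# Sushify's proxy can intercept (gRPC traffic would bypass capture).
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"], transport="rest")
model = genai.GenerativeModel("gemini-1.5-flash")
print(model.generate_content("Say hello").text)
```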

🐳 Docker Support

Sushify seamlessly works with containerized applications:

# Automatically configure specific Docker services
sushify start --docker=backend "docker compose up"
sushify start --docker=api-service "docker compose up"

Sushify automatically configures proxy settings and certificate mounting for your containers.

πŸ“‹ Logging & Debugging

Sushify creates detailed logs to help with troubleshooting and bug reports. Each session generates its own logging directory with structured logs.

Log Location

~/.sushify/logs/
β”œβ”€β”€ session-2025-09-04-103045/    # Each session gets its own folder
β”‚   β”œβ”€β”€ interceptor.log           # Python mitmproxy logs
β”‚   └── server.log                # Node.js server/analysis logs
β”œβ”€β”€ session-2025-09-04-110230/    # Previous sessions
└── latest -> session-2025-09-04-103045/  # Symlink to current session

What Gets Logged

  • interceptor.log: Request/response interception, network errors, exchange capturing
  • server.log: Dashboard startup, LLM analysis results, API calls
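
If you need to grab the current session's logs for a bug report, the latest symlink makes that scriptable; a minimal sketch:

```python
# sketch: print the tail of the current session's server log via the
# `latest` symlink described above.
from pathlib import Path

log = Path.home() / ".sushify" / "logs" / "latest" / "server.log"
print(log.read_text()[-2000:])  # last ~2 KB
```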

Automatic Cleanup

Sushify automatically keeps the 10 most recent session directories and removes older ones to prevent disk space issues.

F.A.Q

  1. Q: What's the difference between Sushify and running evals? A: Evals focus on the LLM's output; Sushify, while output-aware, focuses on the inputs.
  2. Q: Why use a proxy rather than X? A: I considered and implemented various approaches to this problem. I might give a talk or write about it one day. Other solutions either require code modifications that are easy to get wrong, or break under the complexity of real-world apps. With that said, I am open to feedback from the community and will course correct if needed.
  3. Q: What is Pragmatic Fish? You're just one person... A: I am one person now but I hope that others will want to join me. Almost all the dev tools for LLM apps are still MIA and Pragmatic Fish is my attempt to start something that addresses these gaps in novel and creative ways.
  4. Q: Why use OpenAI for analysis? A: Because it's easy for the MVP. They have great support for structured outputs and I know their models very well. Ideally a future version would allow using local models and then it will be completely free and super cool.
  5. Q: Can you add feature X? A: Sure, why not. Open an issue and let's talk.

πŸ› οΈ Contributing

Want to contribute to Sushify or modify it for your needs?

πŸ‘¨β€πŸ’» See the Development Guide for:

Contributions and feedback are welcome!! The development guide has everything you need to get started.