
🚧 this is under-construction.gif 🚧

Oneshot (pre-release)

A command-line tool for sending prompts to AI models. Supports OpenAI, Anthropic, OpenRouter, and self-hosted models.

Installation

npm install -g oneshot
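
Oneshot needs an API key for at least one provider before it can send prompts (see Configuration below). A minimal first run might look like this:

export ANTHROPIC_API_KEY=<your-key>
oneshot "Hello, world"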

Features

  • Send prompts to AI models (OpenAI, Anthropic, OpenRouter, self-hosted models)
  • Support for system prompts and variations
  • Configuration via environment variables or config files (~/.config/oneshot/config.json)
  • Model aliases for common AI models
  • Save responses to files
  • Control reasoning effort for o1/o3 models
  • Model Context Protocol (MCP) support for tool usage
  • Flexible prompt construction with multiple content types
  • Execute commands and analyze their output
  • Process meld files for advanced prompt scripting
  • AI-assisted file editing with natural language instructions or explicit diffs

Usage

Basic Usage

Send a prompt directly to an AI model:

oneshot "Your prompt here" [options]

Options:

  • -m, --model <model> - Which model to use (defaults to claude-3-7-sonnet-latest)
  • -s, --system <prompt> - System prompt text
  • -e, --effort <level> - Reasoning effort level (low, med/medium, high)
  • -o, --output <file> - Write response to file
  • --provider <provider> - Specify provider manually (openai, anthropic, openrouter)
  • --silent - Only output model response (or nothing with -o)
  • --verbose - Show detailed output including commands and prompts
  • --tools - Enable tool usage for supported models
  • --yolo - Auto-approve all tool actions without prompting
  • --mcp-server <name> - Use specific MCP server from config
  • --meld - Process .md files as meld files
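
These options can be combined freely; for example, a sketch using the flags above (reasoning effort applies to o1/o3 models):

oneshot "Prove that the square root of 2 is irrational" -m o1 -e high -o proof.md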

Output Levels

Oneshot provides different output levels to control verbosity:

  1. Silent with Output File (--silent with -o):

    • No output to terminal
    • Response written only to the specified file
  2. Silent without Output File (--silent without -o):

    • Only the model's response is output to the terminal
    • No progress indicator
    • No original prompt display
    • No run command output
  3. Default (no flags):

    • Show progress indicator
    • Show model response
    • Suppress run command output
    • Don't show the sent prompt
  4. Verbose (--verbose):

    • Show run command output
    • Show the fully built prompt (including run command output)
    • Show progress indicator
    • Show model response
    • Show detailed debug information
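
For example, the two silent modes compare like this:

# Silent with output file: nothing on the terminal, response in summary.md
oneshot "Summarize the recent changes" -r "git log --oneline" --silent -o summary.md

# Silent without output file: only the model's response is printed
oneshot "Summarize the recent changes" -r "git log --oneline" --silent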

Note: When using prompts with exclamation points (!), use single quotes instead of double quotes to avoid shell interpretation issues:

# This works correctly:
oneshot 'Hello, world!' -m claude-3-7-sonnet-latest

# This may cause the shell to hang:
oneshot "Hello, world!" -m claude-3-7-sonnet-latest

Content Types

Oneshot supports multiple content types that can be combined in any order. Each content type is wrapped in XML tags in the final prompt, making it clear to the model what each part represents, so think of these as named sections of your prompt (see the sketch after this list).

  • -c, --context <content> - Context content
  • -t, --task <content> - Task content
  • -i, --instructions <content> - Instructions content
  • -p, --prompt <content> - Prompt content
  • -f, --file <path> - File content
  • -r, --run <command> - Run command and include output
  • --edit <file_path> - File to edit (can be used multiple times)
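
For instance, a command combining several content types might be assembled roughly as follows (the exact XML tag names are an assumption, shown for illustration only):

oneshot -c "We run Node.js 18" -t "Review this module for bugs" -f src/db.js

# Hypothetical assembled prompt:
# <context>We run Node.js 18</context>
# <task>Review this module for bugs</task>
# <file path="src/db.js">...contents of src/db.js...</file>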

File Editing with --edit

The --edit flag provides AI-assisted editing directly from the command line: with a single command, the model reads your files, analyzes their content, and applies changes based on natural language instructions.

# The simplest way to edit a file - just one command!
oneshot "Refactor to use async/await instead of promises" --edit myfile.js

You can also make multiple edits in one command:

oneshot "Fix bugs in these files" --edit file1.js --edit file2.js

With it, you can:

  • Refactor complex code with a single instruction
  • Fix bugs by simply describing what's wrong
  • Add features without writing a single line of code yourself
  • Standardize patterns across multiple files
  • Make systematic changes that would be tedious to do manually

How It Works

When you use the --edit flag, Oneshot:

  1. Enables tool usage automatically - The AI gets access to read and modify files
  2. Reads the specified file(s) - The AI analyzes the file's content and structure
  3. Applies intelligent changes - Based on your instructions or the main prompt
  4. Shows you a preview of changes - You can review before they're applied (unless using --yolo)
  5. Writes the changes back - Only after your approval

Basic Usage Examples

# Edit a single file with general instructions
oneshot "Fix error handling in this file" --edit path/to/file.js

# Edit multiple files at once
oneshot "Standardize error handling" --edit file1.js --edit file2.js

# Combine with other content types for more context
oneshot --task "Update deprecated API calls" --context "We're using Node.js 18" --edit src/api.js

Real-World Examples

Here are some practical examples of what you can do with the edit feature:

# Modernize legacy code
oneshot "Refactor to use modern JavaScript features like arrow functions, template literals, and destructuring" --edit legacy.js

# Add JSDoc comments
oneshot "Add comprehensive JSDoc comments to all functions" --edit utils.js

# Fix accessibility issues
oneshot "Fix all accessibility issues in this React component and add proper ARIA attributes" --edit component.jsx

# Update API implementation
oneshot "Update this API client to use the new v2 endpoints as described in https://api.example.com/docs" --edit api.js

# Implement a feature
oneshot "Add a dark mode toggle feature that persists user preference in localStorage" --edit app.js

# Performance optimization
oneshot "Optimize this function for better performance by reducing complexity and avoiding unnecessary calculations" --edit heavyProcess.js

You can use --no-tools to disable tool usage even when using --edit, or use --yolo to auto-approve all tool actions without prompting for confirmation.
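
For example:

# Disable tool usage even though --edit is given
oneshot "Explain what this file does" --edit src/app.js --no-tools

# Apply all proposed edits without confirmation prompts
oneshot "Fix the lint errors" --edit src/app.js --yolo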

File Editing Command

In addition to the --edit flag, Oneshot also provides a dedicated edit command for a more focused file editing experience:

oneshot edit <file_path> [options]

Options:

  • -d, --diff <instructions> - Explicit edit instructions or diff to apply
  • -m, --model <model> - Model to use (defaults to your default model)
  • -s, --system <prompt> - System prompt for the edit
  • --output <file> - Save edits to a new file instead of overwriting
  • --tools - Enable tool usage (enabled by default for edit command)
  • --no-tools - Disable tool usage
  • --yolo - Auto-approve all tool actions without prompting
  • --mcp-server <name> - Use specific MCP server from config

Examples:

# Edit a file with default instructions (AI decides what to improve)
oneshot edit src/app.js

# Edit with specific instructions
oneshot edit src/app.js -d "Add input validation to the login function"

# Edit with a structured diff format
oneshot edit src/app.js -d "--- login function
+++ login function with validation
@@ Change the login function to validate email format and password length"

# Edit with a specific model and save to a new file
oneshot edit src/app.js -m claude-3-opus-latest --output src/app.improved.js

# Edit with a custom system prompt
oneshot edit src/app.js -s "You are a security expert. Find and fix any security issues."

Using Meld Files

Oneshot has built-in support for meld, a prompt scripting language that provides powerful features for creating AI prompts:

  1. Files with the extensions .mld or .mld.md are automatically processed with meld.
  2. Files with extension .md can be processed with meld by adding the --meld flag.
  3. Output files follow the naming convention filename-reply.md (with numeric suffixes for duplicates).

Example:

# Process a meld file and send to AI
oneshot prompt.mld -m claude-3-opus-latest

# Process a markdown file with meld
oneshot prompt.md --meld -m o1-mini
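
Per the naming convention above, repeated runs would presumably number their reply files (exact suffix format assumed):

oneshot prompt.mld   # reply written to prompt-reply.md
oneshot prompt.mld   # run again: prompt-reply-1.md, or similar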

Meld allows you to use variables, directives, and scripting in your prompts:

@text greeting = "Hello"
@text name = "World"

${greeting}, ${name}!

@run [ls -la]

See the meld documentation for more details on writing meld files.

Examples

# Send a simple prompt
oneshot "What is the capital of France?"

# Use a file as input
oneshot myfile.md

# Combine multiple content types
oneshot --context "I'm a developer" --task "Explain this code" --file code.js

# Run a command and analyze its output
oneshot --task "Summarize the failing tests" --run "npm test"

# Run multiple commands in sequence
oneshot --run "ls -la" --run "git status" --task "Explain what's going on in this repository"

# Enable tool usage with a specific MCP server
oneshot "Show me the git status" --tools --mcp-server git

# Send prompt and save response to file
oneshot "What is the capital of France?" -o response.txt

# Edit a file with AI assistance
oneshot "Update error handling" --edit src/app.js

# Edit multiple files with specific instructions
oneshot "Standardize API error responses with consistent error codes in src/api.js and update error handling in src/client.js" --edit src/api.js --edit src/client.js

Using Custom Models with --provider

When you need to use a model that isn't recognized by default, you can specify the provider explicitly:

# Use a new OpenAI model that's not yet recognized by Oneshot
oneshot "Tell me about yourself" --model gpt-5-preview --provider openai

# Use a custom Claude model
oneshot "Explain quantum computing" --model claude-3-8-opus --provider anthropic

# Use a model via OpenRouter
oneshot "Write a short story" --model mistral/mistral-small --provider openrouter

The --provider flag allows you to bypass model name validation, which is useful for:

  • Using newly released models before they're officially supported
  • Using proprietary fine-tuned models with custom names
  • Experimenting with models in development

Without the --provider flag, Oneshot tries to determine the provider based on the model name prefix.
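
For example (the inference rule is assumed from the model-name patterns above):

# Provider presumably inferred from the "claude-" prefix
oneshot "Hello" -m claude-3-opus-latest

# Unrecognized custom name: name the provider explicitly
oneshot "Hello" -m my-finetuned-gpt --provider openai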

Configuration

You can configure your API keys and MCP servers in several ways:

  1. Environment variables:
export ANTHROPIC_API_KEY=<your-key>   # For Claude models
export OPENAI_API_KEY=<your-key>      # For GPT models
export OPENROUTER_API_KEY=<your-key>  # For models via OpenRouter
  2. Config file in ~/.config/oneshot/config.json:
{
  "anthropicApiKey": "your-key",
  "openaiApiKey": "your-key",
  "openrouterApiKey": "your-key",
  "modelAliases": {
    "claude": "anthropic:claude-3-7-sonnet-latest",
    "sonnet": "anthropic:claude-3-7-sonnet-latest",
    "opus": "anthropic:claude-3-opus-latest",
    "haiku": "anthropic:claude-3-5-haiku-latest",
    "4o": "openai:chatgpt-4o-latest",
    "o1": "openai:o1",
    "o1-mini": "openai:o1-mini"
  },
  "mcp": {
    "servers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/data"]
      },
      "git": {
        "command": "uvx",
        "args": ["mcp-server-git", "--repository", "./"]
      },
      "fetch": {
        "command": "uvx",
        "args": ["mcp-server-fetch"]
      }
    },
    "defaultServer": "filesystem",
    "toolsAllowed": ["*"],
    "requireToolConfirmation": true
  }
}

Self-Hosted Models Configuration

You can configure self-hosted models to use local or self-hosted LLM servers like Ollama, LM Studio, or any other compatible API server.

Basic Configuration

Add self-hosted models to your config file:

{
  "selfHosted": {
    "local-llama": {
      "url": "http://localhost:8000/v1",
      "provider": "openai",
      "headers": {
        "Custom-Header": "value"
      }
    },
    "local-claude": {
      "url": "http://localhost:8001/v1",
      "provider": "anthropic",
      "authToken": "local-token"
    }
  }
}

Each self-hosted model configuration requires:

  • url: The endpoint URL for your model
  • provider: The API format to use ('openai', 'anthropic', etc.)
  • Optional headers: Custom headers to include with requests
  • Optional authToken: Authentication token if required
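
Once defined, a self-hosted model can presumably be referenced using the self-hosted:<name> form that model aliases use (an assumption based on the alias examples below):

oneshot -m self-hosted:local-llama "Your prompt here"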

Ollama Configuration

To use models from Ollama:

{
  "selfHosted": {
    "ollama-llama3": {
      "url": "http://localhost:11434/v1",
      "provider": "openai"
    },
    "ollama-mistral": {
      "url": "http://localhost:11434/v1",
      "provider": "openai"
    }
  },
  "modelAliases": {
    "llama3": "self-hosted:ollama-llama3",
    "mistral": "self-hosted:ollama-mistral"
  }
}

Use Ollama models with:

# Start Ollama server first
ollama serve

# Use the model with oneshot
oneshot -m llama3 "Your prompt here"

LM Studio Configuration

To use models from LM Studio:

{
  "selfHosted": {
    "lmstudio": {
      "url": "http://localhost:1234/v1",
      "provider": "openai"
    }
  },
  "modelAliases": {
    "local": "self-hosted:lmstudio"
  }
}

Use LM Studio models with:

# Start the LM Studio server first and ensure API server is enabled
# Then use the model with oneshot
oneshot -m local "Your prompt here"

Secret References

You can use 1Password CLI secret references in your config:

{
  "anthropicApiKey": "op://vault-name/anthropic/api-key",
  "openaiApiKey": "op://vault-name/openai/api-key",
  "openrouterApiKey": "op://vault-name/openrouter/api-key"
}

Default Model Aliases

The following model aliases are available by default:

  • claude → anthropic:claude-3-7-sonnet-latest
  • sonnet → anthropic:claude-3-7-sonnet-latest
  • opus → anthropic:claude-3-opus-latest
  • haiku → anthropic:claude-3-5-haiku-latest
  • 4o → openai:chatgpt-4o-latest
  • o1 → openai:o1
  • o1-mini → openai:o1-mini

You can override these or add your own in your configuration file.
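
For example, with the defaults above:

oneshot "Explain the Node.js event loop" -m haiku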

You can also use OpenRouter models (note: this is untested) with the following formats:

  • Provider-specific format: openrouter:openai/gpt-4-turbo, openrouter:anthropic/claude-3-opus, openrouter:meta/llama-3-70b
  • Colon-separated format: openrouter:openai:gpt-4-turbo, openrouter:anthropic:claude-3-opus

These follow the same provider:model pattern used by model aliases, with OpenRouter specified as the top-level provider.
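
A sketch (untested, per the note above):

oneshot "Write a short story" -m openrouter:anthropic/claude-3-opus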

Managing Configuration

View and Set Basic Configuration

View current configuration:

oneshot config

Set API keys:

oneshot config --anthropic <key>
oneshot config --openai <key>

Set model aliases:

oneshot config --alias claude=anthropic:claude-3-latest gpt=openai:chatgpt-4o-latest

Set default model:

oneshot default claude-3-7-sonnet-latest
# or
oneshot --default claude-3-7-sonnet-latest

Configuring Self-Hosted Models (untested, work in progress)

Self-hosted models are configured in your config file (~/.config/oneshot/config.json):

{
  "selfHosted": {
    "local-llama": {
      "url": "http://localhost:8000/v1",
      "provider": "openai"
    },
    "my-local-model": {
      "url": "http://localhost:8001/v1",
      "provider": "anthropic",
      "authToken": "mytoken"
    },
    "custom-model": {
      "url": "http://api.example.com/v1",
      "provider": "openai",
      "headers": {
        "X-Custom": "value"
      }
    }
  }
}

After configuring your self-hosted models, create aliases for easier use:

# Create a model alias for a self-hosted model
oneshot config --alias llama=self-hosted:local-llama
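# Then use the alias like any other model
oneshot -m llama "Your prompt here"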

Configuration and usage examples for popular systems like Ollama and LM Studio are shown in the Self-Hosted Models Configuration section above.

Managing MCP Servers

Oneshot provides comprehensive commands for managing MCP servers, similar to Claude Code's approach:

Add a new MCP server:

# Add a server that needs to be launched
oneshot mcp add filesystem --env API_KEY=secret -- npx @modelcontextprotocol/server-filesystem ./data

# Add a URL-based server connection
oneshot mcp add myserver --url http://localhost:8080 --token mytoken

List all configured servers:

oneshot mcp list

Get details for a specific server:

oneshot mcp get filesystem

Remove a server:

oneshot mcp remove filesystem

Set the default server:

oneshot mcp default filesystem

Import servers from Claude desktop configuration:

oneshot mcp import

You can control whether the configuration is stored globally or locally:

# Add to global config (~/.config/oneshot/config.json)
oneshot mcp add myserver --scope global -- npx my-server

# Add to local config (./.config/oneshot/config.json in current directory)
oneshot mcp add myserver --scope local -- npx my-server

Development

Testing

Oneshot uses Vitest for testing. The project has two test commands:

  • npm test or npm run test:oneshot: Runs only the ESM-compatible tests in tests/oneshot/ directory
  • npm run test:all: Runs all tests, including tests that are still being migrated to ESM

Testing Guidelines

When writing tests, especially when mocking dependencies, follow these guidelines:

  1. Place mocks before importing the modules they mock
  2. Create explicit mock functions rather than mocking entire modules when possible
  3. For complex Commander-based CLI tests, test the core implementation logic directly rather than the Commander structure
  4. Use the vi.mock() method with careful attention to hoisting behavior
  5. Add .js extensions to all imports, including in test files

For CLI command testing, we recommend two main approaches:

  • Direct Implementation Testing: Extract the core business logic from Commander handlers and test it directly
  • Mocking Approach: Focus on mocking only the essential dependencies (fs, etc.) rather than the entire CLI framework

A simplified testing approach for CLI commands:

import { vi, it, expect } from 'vitest';

// vi.mock() is hoisted above imports, so create the mock
// functions with vi.hoisted() to avoid a reference error
const mockFs = vi.hoisted(() => ({
  existsSync: vi.fn(),
  readFileSync: vi.fn(),
  writeFileSync: vi.fn()
}));
vi.mock('fs', () => ({ default: mockFs, ...mockFs }));

it('writes the file', () => {
  // Test the command implementation logic directly
  const result = myCommandFunction('arg1', 'arg2');

  // Assert expected behavior
  expect(mockFs.writeFileSync).toHaveBeenCalled();
});

License

MIT