🚧 this is under-construction.gif 🚧
# Oneshot (pre-release)
A command-line tool for sending prompts to AI models. Supports OpenAI, Anthropic, OpenRouter, and self-hosted models.
## Installation

```sh
npm install -g oneshot
```

## Features
- Send prompts to AI models (OpenAI, Anthropic, OpenRouter, self-hosted models)
- Support for system prompts and variations
- Configuration via environment variables or config files (`~/.config/oneshot/config.json`)
- Model aliases for common AI models
- Save responses to files
- Control reasoning effort for o1/o3 models
- Model Context Protocol (MCP) support for tool usage
- Flexible prompt construction with multiple content types
- Execute commands and analyze their output
- Process meld files for advanced prompt scripting
- AI-assisted file editing with natural language instructions or explicit diffs
## Usage

### Basic Usage
Send a prompt directly to an AI model:
oneshot "Your prompt here" [options]Options:
-m, --model <model>- Which model to use (defaults to claude-3-7-sonnet-latest)-s, --system <prompt>- System prompt text-e, --effort <level>- Reasoning effort level (low, med/medium, high)-o, --output <file>- Write response to file--provider <provider>- Specify provider manually (openai, anthropic, openrouter)--silent- Only output model response (or nothing with -o)--verbose- Show detailed output including commands and prompts--tools- Enable tool usage for supported models--yolo- Auto-approve all tool actions without prompting--mcp-server <n>- Use specific MCP server from config--meld- Process .md files as meld files
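For example, several options can be combined in one invocation (a sketch using the flags above and the default `o1` alias):

```sh
# Ask o1 with high reasoning effort and write the reply to a file
oneshot "Summarize the trade-offs of ESM vs CommonJS" -m o1 -e high -o reply.txt
```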
### Output Levels
Oneshot provides different output levels to control verbosity:
**Silent with Output File** (`--silent` with `-o`):

- No output to the terminal
- Response written only to the specified file

**Silent without Output File** (`--silent` without `-o`):

- Only the model's response is output to the terminal
- No progress indicator
- No original prompt display
- No run command output
**Default** (no flags):

- Show progress indicator
- Show model response
- Suppress run command output
- Don't show the sent prompt
**Verbose** (`--verbose`):

- Show run command output
- Show the fully built prompt (including run command output)
- Show progress indicator
- Show model response
- Show detailed debug information
Note: When using prompts with exclamation points (`!`), use single quotes instead of double quotes to avoid shell history expansion issues:

```sh
# This works correctly:
oneshot 'Hello, world!' -m claude-3-7-sonnet-latest

# This may cause the shell to hang:
oneshot "Hello, world!" -m claude-3-7-sonnet-latest
```
### Content Types
Oneshot supports multiple content types that can be combined in any order. Each content type is wrapped in XML tags in the final prompt, making it clear to the model what each part represents; think of them as sections of your prompt.
- `-c, --context <content>` - Context content
- `-t, --task <content>` - Task content
- `-i, --instructions <content>` - Instructions content
- `-p, --prompt <content>` - Prompt content
- `-f, --file <path>` - File content
- `-r, --run <command>` - Run a command and include its output
- `--edit <file_path>` - File to edit (can be used multiple times)
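For instance, a command combining three content types might assemble a prompt shaped roughly like this (the exact XML tag names are an assumption; the README only states that each section is XML-tagged):

```sh
oneshot -c "We use Express 4" -t "Explain this code" -f server.js

# The prompt sent to the model is assembled from tagged sections, roughly:
#   <context>We use Express 4</context>
#   <task>Explain this code</task>
#   <file path="server.js">...contents of server.js...</file>
```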
### File Editing with `--edit`
The `--edit` flag provides AI-assisted editing directly from the command line. With a single command, an AI model reads your files, understands their content, and applies changes based on natural language instructions.
```sh
# The simplest way to edit a file - just one command!
oneshot "Refactor to use async/await instead of promises" --edit myfile.js
```

You can also make multiple edits in one command:

```sh
oneshot "Fix bugs in these files" --edit file1.js --edit file2.js
```

This feature enables you to:
- Refactor complex code with a single instruction
- Fix bugs by simply describing what's wrong
- Add features without writing a single line of code yourself
- Standardize patterns across multiple files
- Make systematic changes that would be tedious to do manually
#### How It Works
When you use the `--edit` flag, Oneshot:

- Enables tool usage automatically - the AI gets access to read and modify files
- Reads the specified file(s) - the AI analyzes each file's content and structure
- Applies intelligent changes - based on your instructions or the main prompt
- Shows you a preview of changes - you can review them before they're applied (unless using `--yolo`)
- Writes the changes back - only after your approval
#### Basic Usage Examples
```sh
# Edit a single file with general instructions
oneshot "Fix error handling in this file" --edit path/to/file.js

# Edit multiple files at once
oneshot "Standardize error handling" --edit file1.js --edit file2.js

# Combine with other content types for more context
oneshot --task "Update deprecated API calls" --context "We're using Node.js 18" --edit src/api.js
```

#### Real-World Examples
Here are some practical examples of what you can do with the edit feature:
```sh
# Modernize legacy code
oneshot "Refactor to use modern JavaScript features like arrow functions, template literals, and destructuring" --edit legacy.js

# Add JSDoc comments
oneshot "Add comprehensive JSDoc comments to all functions" --edit utils.js

# Fix accessibility issues
oneshot "Fix all accessibility issues in this React component and add proper ARIA attributes" --edit component.jsx

# Update API implementation
oneshot "Update this API client to use the new v2 endpoints as described in https://api.example.com/docs" --edit api.js

# Implement a feature
oneshot "Add a dark mode toggle feature that persists user preference in localStorage" --edit app.js

# Performance optimization
oneshot "Optimize this function for better performance by reducing complexity and avoiding unnecessary calculations" --edit heavyProcess.js
```

You can use `--no-tools` to disable tool usage even when using `--edit`, or `--yolo` to auto-approve all tool actions without prompting for confirmation.
### File Editing Command
In addition to the --edit flag, Oneshot also provides a dedicated edit command for a more focused file editing experience:
```sh
oneshot edit <file_path> [options]
```

Options:

- `-d, --diff <instructions>` - Explicit edit instructions or a diff to apply
- `-m, --model <model>` - Model to use (defaults to your default model)
- `-s, --system <prompt>` - System prompt for the edit
- `--output <file>` - Save edits to a new file instead of overwriting
- `--tools` - Enable tool usage (enabled by default for the edit command)
- `--no-tools` - Disable tool usage
- `--yolo` - Auto-approve all tool actions without prompting
- `--mcp-server <n>` - Use a specific MCP server from config
Examples:
```sh
# Edit a file with default instructions (AI decides what to improve)
oneshot edit src/app.js

# Edit with specific instructions
oneshot edit src/app.js -d "Add input validation to the login function"

# Edit with a structured diff format
oneshot edit src/app.js -d "--- login function
+++ login function with validation
@@ Change the login function to validate email format and password length"

# Edit with a specific model and save to a new file
oneshot edit src/app.js -m claude-3-opus-latest --output src/app.improved.js

# Edit with a custom system prompt
oneshot edit src/app.js -s "You are a security expert. Find and fix any security issues."
```

### Using Meld Files
Oneshot has built-in support for meld, a prompt scripting language that provides powerful features for creating AI prompts:
- Files with extensions `.mld` or `.mld.md` are automatically processed with meld.
- Files with extension `.md` can be processed with meld by adding the `--meld` flag.
- Output files follow the naming convention `filename-reply.md` (with numeric suffixes for duplicates).
Example:
```sh
# Process a meld file and send to AI
oneshot prompt.mld -m claude-3-opus-latest

# Process a markdown file with meld
oneshot prompt.md --meld -m o1-mini
```

Meld allows you to use variables, directives, and scripting in your prompts:
```
@text greeting = "Hello"
@text name = "World"

${greeting}, ${name}!

@run [ls -la]
```

See the meld documentation for more details on writing meld files.
## Examples
```sh
# Send a simple prompt
oneshot "What is the capital of France?"

# Use a file as input
oneshot myfile.md

# Combine multiple content types
oneshot --context "I'm a developer" --task "Explain this code" --file code.js

# Run a command and analyze its output
oneshot --task "Summarize the failing tests" --run "npm test"

# Run multiple commands in sequence
oneshot --run "ls -la" --run "git status" --task "Explain what's going on in this repository"

# Enable tool usage with a specific MCP server
oneshot "Show me the git status" --tools --mcp-server git

# Send a prompt and save the response to a file
oneshot "What is the capital of France?" -o response.txt

# Edit a file with AI assistance
oneshot "Update error handling" --edit src/app.js

# Edit multiple files with specific instructions
oneshot "Standardize API error responses with consistent error codes in src/api.js and update error handling in src/client.js" --edit src/api.js --edit src/client.js
```

### Using Custom Models with `--provider`
When you need to use a model that isn't recognized by default, you can specify the provider explicitly:
```sh
# Use a new OpenAI model that's not yet recognized by Oneshot
oneshot "Tell me about yourself" --model gpt-5-preview --provider openai

# Use a custom Claude model
oneshot "Explain quantum computing" --model claude-3-8-opus --provider anthropic

# Use a model via OpenRouter
oneshot "Write a short story" --model mistral/mistral-small --provider openrouter
```

The `--provider` flag allows you to bypass model name validation, which is useful for:
- Using newly released models before they're officially supported
- Using proprietary fine-tuned models with custom names
- Experimenting with models in development
Without the `--provider` flag, Oneshot tries to determine the provider based on the model name prefix.
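For illustration (the exact prefix rules are an assumption; the README only documents the general mechanism):

```sh
# A "claude-" prefix is presumably routed to Anthropic without --provider
oneshot "Hello" --model claude-3-opus-latest

# An unrecognized name needs --provider to be routed correctly
oneshot "Hello" --model my-custom-model --provider openai
```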
## Configuration

You can configure your API keys and MCP servers in several ways:

1. Environment variables:

```sh
export ANTHROPIC_API_KEY=<your-key>    # For Claude models
export OPENAI_API_KEY=<your-key>       # For GPT models
export OPENROUTER_API_KEY=<your-key>   # For models via OpenRouter
```

2. Config file in `~/.config/oneshot/config.json`:

```json
{
"anthropicApiKey": "your-key",
"openaiApiKey": "your-key",
"openrouterApiKey": "your-key",
"modelAliases": {
"claude": "anthropic:claude-3-7-sonnet-latest",
"sonnet": "anthropic:claude-3-7-sonnet-latest",
"opus": "anthropic:claude-3-opus-latest",
"haiku": "anthropic:claude-3-5-haiku-latest",
"4o": "openai:chatgpt-4o-latest",
"o1": "openai:o1",
"o1-mini": "openai:o1-mini"
},
"mcp": {
"servers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/data"]
},
"git": {
"command": "uvx",
"args": ["mcp-server-git", "--repository", "./"]
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
},
"defaultServer": "filesystem",
"toolsAllowed": ["*"],
"requireToolConfirmation": true
}
}
```
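With a config like the one above in place, its aliases and MCP servers can be used directly:

```sh
# "sonnet" resolves to anthropic:claude-3-7-sonnet-latest via modelAliases
oneshot "Summarize the recent commits" -m sonnet --run "git log --oneline -10"

# Tool usage against the configured git MCP server
oneshot "What changed in this repo?" --tools --mcp-server git
```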
### Self-Hosted Models Configuration

You can configure self-hosted models to use local or self-hosted LLM servers like Ollama, LM Studio, or any other compatible API server.
#### Basic Configuration
Add self-hosted models to your config file:

```json
{
"selfHosted": {
"local-llama": {
"url": "http://localhost:8000/v1",
"provider": "openai",
"headers": {
"Custom-Header": "value"
}
},
"local-claude": {
"url": "http://localhost:8001/v1",
"provider": "anthropic",
"authToken": "local-token"
}
}
}
```

Each self-hosted model configuration requires:

- `url`: The endpoint URL for your model
- `provider`: The API format to use (`openai`, `anthropic`, etc.)
- Optional `headers`: Custom headers to include with requests
- Optional `authToken`: An authentication token, if required
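Once configured, a self-hosted model should be usable like any other; the `self-hosted:` prefix below is inferred from the alias examples that follow rather than stated explicitly, so treat it as a sketch:

```sh
oneshot "Your prompt here" -m self-hosted:local-llama
```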
#### Ollama Configuration
To use models from Ollama:

```json
{
"selfHosted": {
"ollama-llama3": {
"url": "http://localhost:11434/v1",
"provider": "openai"
},
"ollama-mistral": {
"url": "http://localhost:11434/v1",
"provider": "openai"
}
},
"modelAliases": {
"llama3": "self-hosted:ollama-llama3",
"mistral": "self-hosted:ollama-mistral"
}
}
```

Use Ollama models with:

```sh
# Start Ollama server first
ollama serve
# Use the model with oneshot
oneshot -m llama3 "Your prompt here"
```
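If the model isn't downloaded yet, pull it first (standard Ollama usage, independent of Oneshot):

```sh
ollama pull llama3
```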
#### LM Studio Configuration

To use models from LM Studio:

```json
{
"selfHosted": {
"lmstudio": {
"url": "http://localhost:1234/v1",
"provider": "openai"
}
},
"modelAliases": {
"local": "self-hosted:lmstudio"
}
}
```

Use LM Studio models with:

```sh
# Start the LM Studio server first and ensure API server is enabled
# Then use the model with oneshot
oneshot -m local "Your prompt here"
```

### Secret References
You can use 1Password CLI secret references in your config:

```json
{
  "anthropicApiKey": "op://vault-name/anthropic/api-key",
  "openaiApiKey": "op://vault-name/openai/api-key",
  "openrouterApiKey": "op://vault-name/openrouter/api-key"
}
```
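These use the standard 1Password secret reference format; if you have the 1Password CLI installed and are signed in, you can check that a reference resolves:

```sh
op read "op://vault-name/anthropic/api-key"
```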
### Default Model Aliases

The following model aliases are available by default:
- `claude` → `anthropic:claude-3-7-sonnet-latest`
- `sonnet` → `anthropic:claude-3-7-sonnet-latest`
- `opus` → `anthropic:claude-3-opus-latest`
- `haiku` → `anthropic:claude-3-5-haiku-latest`
- `4o` → `openai:chatgpt-4o-latest`
- `o1` → `openai:o1`
- `o1-mini` → `openai:o1-mini`
You can override these or add your own in your configuration file.
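For example, with the `config --alias` command described under Managing Configuration below:

```sh
# Repoint the default "claude" alias at a different model
oneshot config --alias claude=anthropic:claude-3-opus-latest
```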
You can also use OpenRouter models (untested so far) with the following formats:

- Provider-specific format: `openrouter:openai/gpt-4-turbo`, `openrouter:anthropic/claude-3-opus`, `openrouter:meta/llama-3-70b`
- Colon-separated format: `openrouter:openai:gpt-4-turbo`, `openrouter:anthropic:claude-3-opus`

These follow the same `provider:model` pattern used by model aliases, with OpenRouter specified as the top-level provider.
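A sketch of what that would look like (untested, per the note above):

```sh
oneshot "Write a short story" -m openrouter:openai/gpt-4-turbo
```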
### Managing Configuration

#### View and Set Basic Configuration
View current configuration:

```sh
oneshot config
```

Set API keys:

```sh
oneshot config --anthropic <key>
oneshot config --openai <key>
```

Set model aliases:

```sh
oneshot config --alias claude=anthropic:claude-3-latest gpt=openai:chatgpt-4o-latest
```

Set default model:

```sh
oneshot default claude-3-7-sonnet-latest
# or
oneshot --default claude-3-7-sonnet-latest
```

#### Configuring Self-Hosted Models (untested WIP)
Self-hosted models are configured in your config file (`~/.config/oneshot/config.json`):

```json
{
"selfHosted": {
"local-llama": {
"url": "http://localhost:8000/v1",
"provider": "openai"
},
"my-local-model": {
"url": "http://localhost:8001/v1",
"provider": "anthropic",
"authToken": "mytoken"
},
"custom-model": {
"url": "http://api.example.com/v1",
"provider": "openai",
"headers": {
"X-Custom": "value"
}
}
}
}
```

After configuring your self-hosted models, create aliases for easier use:

```sh
# Create a model alias
oneshot config --alias llama=self-hosted:local-llama
```

#### Examples for Popular Self-Hosted Systems
##### Using Ollama
For Ollama models:

```json
{
"selfHosted": {
"ollama-llama3": {
"url": "http://localhost:11434/v1",
"provider": "openai"
},
"ollama-mistral": {
"url": "http://localhost:11434/v1",
"provider": "openai"
}
},
"modelAliases": {
"llama3": "self-hosted:ollama-llama3",
"mistral": "self-hosted:ollama-mistral"
}
}
```

```sh
# Start Ollama server first
ollama serve
# Use the models
oneshot -m llama3 "Your prompt here"
oneshot -m mistral "Your prompt here"
```

##### Using LM Studio
For LM Studio models:

```json
{
"selfHosted": {
"lmstudio": {
"url": "http://localhost:1234/v1",
"provider": "openai"
}
},
"modelAliases": {
"local": "self-hosted:lmstudio"
}
}
```

```sh
# Start LM Studio and enable the API server
# Then use your model (or its alias)
oneshot -m local "Your prompt here"
```

## Managing MCP Servers
Oneshot provides comprehensive commands for managing MCP servers, similar to Claude Code's approach:
Add a new MCP server:

```sh
# Add a server that needs to be launched
oneshot mcp add filesystem --env API_KEY=secret -- npx @modelcontextprotocol/server-filesystem ./data

# Add a URL-based server connection
oneshot mcp add myserver --url http://localhost:8080 --token mytoken
```

List all configured servers:

```sh
oneshot mcp list
```

Get details for a specific server:

```sh
oneshot mcp get filesystem
```

Remove a server:

```sh
oneshot mcp remove filesystem
```

Set the default server:

```sh
oneshot mcp default filesystem
```

Import servers from the Claude desktop configuration:

```sh
oneshot mcp import
```

You can control whether the configuration is stored globally or locally:

```sh
# Add to global config (~/.config/oneshot/config.json)
oneshot mcp add myserver --scope global -- npx my-server

# Add to local config (./.config/oneshot/config.json in current directory)
oneshot mcp add myserver --scope local -- npx my-server
```

## Development
### Testing

Oneshot uses Vitest for testing. The project has two test commands:

- `npm test` or `npm run test:oneshot`: Runs only the ESM-compatible tests in the `tests/oneshot/` directory
- `npm run test:all`: Runs all tests, including tests that are still being migrated to ESM
### Testing Guidelines
When writing tests, especially when mocking dependencies, follow these guidelines:
- Place mocks before importing the modules they mock
- Create explicit mock functions rather than mocking entire modules when possible
- For complex Commander-based CLI tests, test the core implementation logic directly rather than the Commander structure
- Use the `vi.mock()` method with careful attention to hoisting behavior
- Add `.js` extensions to all imports, including in test files
For CLI command testing, we recommend two main approaches:
- Direct Implementation Testing: Extract the core business logic from Commander handlers and test it directly
- Mocking Approach: Focus on mocking only the essential dependencies (fs, etc.) rather than the entire CLI framework
A simplified testing approach for CLI commands:
// Mock dependencies
const mockFs = {
existsSync: vi.fn(),
readFileSync: vi.fn(),
writeFileSync: vi.fn()
};
vi.mock('fs', () => mockFs);
// Test the command implementation logic directly
const result = myCommandFunction('arg1', 'arg2');
// Assert expected behavior
expect(mockFs.writeFileSync).toHaveBeenCalled();License
MIT
