
@meldscience/meld

v0.1.0


pipeable one-shot prompt scripting toolkit


Oneshot

A set of simple command-line tools for working with AI prompts and code context.

Installation

npm install -g @meldscience/oneshot

Tools

meld

Processes a markdown file containing embedded commands, replacing each command with its output.

meld input.meld.md [options]

Input Format

# My Prompt

Here's what the system looks like:

@cmd[tree src]

And here's the relevant code:

@cmd[cat src/main.ts]

Options

  • -o, --outfile <path> - Output file path (default: input.meld.generated.md)
  • --dry-run - Print commands that would be executed without running them

Examples

Process a prompt file:

meld myprompt.meld.md
# Creates myprompt.meld.generated.md

Specify output location:

meld myprompt.meld.md -o output.md

oneshot

Sends a prompt to one or more AI models, with optional prompt variations.

oneshot [model...] prompt.md [options]

The tool sends the prompt to all specified models in parallel. For example:

oneshot gpt-4 claude-3 claude-2 prompt.md

Options

  • -o, --outfile <path> - Save output to file (default: stdout)
  • --system <prompt> - System prompt
  • --system-file <path> - System prompt from file
  • --variations <json> - Variation prompts as JSON array
  • --variations-file <path> - Variation prompts from file (JSON or YAML)
  • --iterations <number> - Number of responses per variation (default: 1)

Examples

Basic usage:

oneshot claude-3 prompt.md

With system prompt:

oneshot gpt-4 prompt.md --system "You are a helpful programming assistant"

With variations from command line:

oneshot claude-3 prompt.md --variations '["Review as architect", "Review as developer"]'

With variations from file:

# roles.yaml
architect: "Review this system as a software architect"
developer: "Review this system as a senior developer"
security: "Review this system focusing on security implications"

oneshot claude-3 prompt.md --variations-file roles.yaml

Save output to file:

oneshot claude-3 prompt.md -o response.md

Multiple iterations:

oneshot claude-3 prompt.md --iterations 3

oneshotcat

Combines meld and oneshot: processes a prompt script and sends the result to an AI model.

oneshotcat [model] input.meld.md [options]

Accepts all oneshot options plus:

  • --expanded-outfile <path> - Save processed prompt script to file

Examples

Basic usage:

oneshotcat claude-3 myprompt.meld.md

Save both prompt and response:

oneshotcat claude-3 myprompt.meld.md --expanded-outfile expanded.md -o response.md

Output Formats

meld Output

Creates a new markdown file with commands replaced by their output:

# My Prompt

Here's what the system looks like:

src
├── main.ts
├── lib
│   └── utils.ts
└── tests
    └── main.test.ts


And here's the relevant code:

```typescript
import { something } from './lib/utils';

export function main() {
  // ...
}
```

oneshot/oneshotcat Output

Outputs responses in markdown format, perfect for reading or piping to another command:

```markdown
# Architect Perspective
Based on the codebase structure...

# Developer Perspective
Looking at the implementation...

# Security Perspective
From a security standpoint...
```

The markdown format makes it easy to:

  1. Read the output directly
  2. Save to a file (-o output.md)
  3. Pipe into another command for meta-analysis
  4. Process with standard Unix text tools
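Because the output is plain markdown with one top-level heading per response, programmatic meta-analysis is straightforward. Here is a minimal TypeScript sketch (not part of the package; the function name is illustrative) that splits such output into per-perspective sections keyed by heading:

```typescript
// Split oneshot-style markdown output into sections keyed by their
// top-level ("# ") headings, e.g. "Architect Perspective" -> body text.
function splitSections(markdown: string): Map<string, string> {
  const sections = new Map<string, string>();
  let current = "";
  for (const line of markdown.split("\n")) {
    if (line.startsWith("# ")) {
      // New section: the heading text becomes the key.
      current = line.slice(2).trim();
      sections.set(current, "");
    } else if (current) {
      // Accumulate body lines under the current heading.
      sections.set(current, sections.get(current) + line + "\n");
    }
  }
  return sections;
}
```

Each section can then be inspected, filtered, or fed into a follow-up prompt independently.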

Configuration

Configuration can be provided via:

  1. Environment variables
  2. .meldrc file (in home directory when installed globally)
  3. Command line arguments

Priority: CLI args > .meldrc > env vars
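The precedence rule above can be sketched as a simple merge, with later sources overriding earlier ones. This is an illustration of the rule, not the package's actual implementation; the type and function names are assumptions:

```typescript
// Partial config shape, mirroring the .meldrc fields shown below.
type Config = {
  defaultModel?: string;
  anthropicApiKey?: string;
  openaiApiKey?: string;
};

// Merge config sources lowest-priority first: object spread lets the
// rightmost source (CLI args) override .meldrc, which overrides env vars.
function resolveConfig(env: Config, rcFile: Config, cliArgs: Config): Config {
  return { ...env, ...rcFile, ...cliArgs };
}
```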

Environment Variables

ANTHROPIC_API_KEY=your_key
OPENAI_API_KEY=your_key
DEFAULT_MODEL=claude-3

.meldrc

{
  "defaultModel": "claude-3",
  "anthropicApiKey": "your_key",
  "openaiApiKey": "your_key"
}

1Password Integration

You can securely store your API keys in 1Password and reference them in your .meldrc:

{
  "defaultModel": "claude-3",
  "anthropicApiKey": "op://vault-name/anthropic/api-key",
  "openaiApiKey": "op://vault-name/openai/api-key"
}

The tool will automatically resolve these references using the 1Password CLI (op) if it's installed.
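`op read <reference>` is the 1Password CLI command for resolving `op://vault/item/field` references to their secret values. A hypothetical sketch of such resolution (the function name and pass-through behavior are assumptions, not the package's code):

```typescript
import { execFileSync } from "node:child_process";

// Resolve a config value: plain strings pass through unchanged, while
// op:// references are handed to the 1Password CLI's `op read` command,
// which prints the referenced secret to stdout.
function resolveSecret(value: string): string {
  if (!value.startsWith("op://")) return value;
  return execFileSync("op", ["read", value], { encoding: "utf8" }).trim();
}
```

This keeps raw keys out of `.meldrc` entirely; only the reference strings are stored on disk.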

Configuration Helper

Use the meld-config command to manage your configuration:

# Show current configuration
meld-config --show

# Set API keys
meld-config --anthropic-key <your-key>
meld-config --openai-key <your-key>

# Set default model
meld-config --default-model claude-3

Multiple Models with Variations

You can combine multiple models with variations to get a rich set of perspectives:

oneshot gpt-4 claude-3 prompt.md --variations-file roles.yaml

This will get each role's perspective from each model, with output like:

# GPT-4: Architect Perspective
[response...]

# GPT-4: Developer Perspective
[response...]

# Claude 3: Architect Perspective
[response...]

# Claude 3: Developer Perspective
[response...]

Command Chaining

The tools are designed to work well with Unix pipes and support chaining for meta-analysis:

# Get multiple AI perspectives then analyze them together
oneshot claude-3 myprompt.md --variations-file roles.yaml | oneshot claude-3 secondprompt.md

Where secondprompt.md might contain something like:

Review these three responses from different perspectives. Identify:
- Common themes
- Points of dissonance
- A recommended path that balances these perspectives pragmatically, keeping in mind this is for a single developer/server rather than enterprise software.

---
[Previous responses will be inserted here]

You can also use immediate prompts for quick chains:

oneshot claude-3 myprompt.md --variations-file roles.yaml | oneshot claude-3 "analyze these perspectives and recommend a path forward"

Module Usage

All tools can also be used programmatically:

import { PromptScript, Oneshot, Oneshotcat } from 'ai-prompt-tools';

// Process a prompt script
const meld = new PromptScript({
  inputFile: 'prompt.meld.md',
  outputFile: 'output.md'
});
const expanded = await meld.process();

// Send to AI
const oneshot = new Oneshot({
  model: 'claude-3',
  promptFile: 'prompt.md',
  variations: ['perspective 1', 'perspective 2']
});
const responses = await oneshot.process();

// Combined usage
const cat = new Oneshotcat({
  model: 'claude-3',
  promptFile: 'prompt.meld.md',
  variations: ['perspective 1', 'perspective 2']
});
const results = await cat.process();