@riotprompt/riotprompt

v0.0.17

Get it together, and organize your prompts.

RiotPrompt

A powerful, flexible prompt building library and CLI tool for AI applications with zero hardcoded assumptions.

Features

  • Structured Prompts: Treat prompts as code with specialized sections for Persona, Instructions, Context, and more.
  • Advanced Prompt Strategies: First-class support for Constraints, Tone, Few-Shot Examples, Reasoning Steps, Response Format, and Safeguards.
  • Model Alignment: Automatically adapts prompt structure to match the specific expectations of each model provider (a rough sketch of this idea follows the feature list):
    • Anthropic (Claude): Places Personas, Roles, Tone, and Constraints into the system parameter. Additionally, converts schema definitions into forced Tool Use calls, extracting structured results to match OpenAI's output format.
    • OpenAI: Maps generic roles to the appropriate system or developer (for O-series) messages.
    • Gemini: Structurally adapts components into System Instructions and content parts. For structured outputs, it automatically transforms JSON schemas into Gemini's responseSchema format, ensuring strict adherence to the defined structure.
  • CLI Tool: Scaffold, manage, process, and execute prompts directly from the terminal.
  • Model Agnostic: Format prompts for different models (GPT-4, Claude, Gemini, etc.) automatically.
  • Execution Engine: Run prompts against OpenAI, Anthropic, or Gemini APIs directly.
  • Portable: Serialize prompts to JSON or XML for easy exchange between systems.
  • Type-Safe: Full TypeScript support with excellent IntelliSense.
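
To make the Model Alignment idea concrete, here is a rough, hand-written sketch of how the same prompt sections could map onto two providers' request shapes. The payload objects below are illustrations of the concept only, not riotprompt's actual output.

// Illustration only: hand-written payload shapes, not riotprompt output
const sections = {
  persona: 'You are a data expert.',
  instructions: 'Summarize the attached report.',
  context: 'Q3 revenue grew 12% year over year.',
};

// Anthropic: persona-style sections go into the top-level `system` parameter
const anthropicPayload = {
  model: 'claude-3-opus-20240229',
  system: sections.persona,
  messages: [
    { role: 'user', content: `${sections.instructions}\n\n${sections.context}` },
  ],
};

// OpenAI: the persona becomes a `system` message inside the messages array
const openaiPayload = {
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: sections.persona },
    { role: 'user', content: `${sections.instructions}\n\n${sections.context}` },
  ],
};

console.log(anthropicPayload, openaiPayload);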

Installation

npm install riotprompt

CLI Usage

RiotPrompt comes with a command-line interface to help you organize, process, and execute prompts.

1. Create a Prompt

Scaffold a new prompt directory structure:

# Create a new prompt in 'my-prompt' directory
npx riotprompt create my-prompt --persona "You are a data expert."

# Import an existing prompt from JSON or XML
npx riotprompt create my-prompt --import existing-prompt.json

This creates a structured directory:

my-prompt/
├── persona.md          # System prompt / Persona definition
├── instructions.md     # Main task instructions
└── context/            # Directory for reference files (data.json, docs.md)
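
Anything you want the model to reference can then be dropped into that structure as plain files, for example (the file names here are hypothetical):

# Add reference material to the context directory (hypothetical files)
cp data/report-q3.json my-prompt/context/

# Refine the task instructions
echo "Focus on year-over-year changes." > my-prompt/instructions.md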

2. Process a Prompt

Compile a prompt directory (or file) into a formatted payload for an LLM, or export it to other formats.

# Format for GPT-4 (output to console)
npx riotprompt process my-prompt -m gpt-4

# Export to JSON (useful for API integrations)
npx riotprompt process my-prompt --format json --output prompt.json

# Export to XML
npx riotprompt process my-prompt --format xml --output prompt.xml
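
As one example of the "API integrations" angle, here is a sketch of consuming the exported JSON from your own code and sending it to OpenAI's Chat Completions endpoint. It assumes the export contains a chat-style messages array; inspect the actual prompt.json before wiring this up, since the exact export shape isn't documented here.

// Sketch: send an exported prompt to OpenAI's Chat Completions API.
// Assumes prompt.json holds a chat-style `messages` array; verify the real shape first.
import { readFile } from 'node:fs/promises';

const { messages } = JSON.parse(await readFile('prompt.json', 'utf8'));

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({ model: 'gpt-4o', messages }),
});

const data = await response.json();
console.log(data.choices[0].message.content);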

3. Execute a Prompt

Run the prompt directly against an LLM provider.

Provider Setup & API Keys

RiotPrompt supports multiple LLM providers. You'll need to obtain an API key for the provider you wish to use and set it as an environment variable (recommended) or pass it via the -k flag.

Google Gemini

  1. Get API Key: Visit Google AI Studio, sign in, and click "Get API key".
  2. Set Environment Variable: GEMINI_API_KEY
  3. Available Models: Check the Gemini models documentation. Common models include gemini-1.5-pro and gemini-1.5-flash.

OpenAI

  1. Get API Key: Go to the OpenAI Platform, sign up or log in, and create a new secret key.
  2. Set Environment Variable: OPENAI_API_KEY
  3. Available Models: See OpenAI Models. Common models: gpt-4o, gpt-4-turbo, gpt-3.5-turbo.

Anthropic (Claude)

  1. Get API Key: Access the Anthropic Console and generate an API key.
  2. Set Environment Variable: ANTHROPIC_API_KEY
  3. Available Models: View Claude Models. Common models: claude-3-opus-20240229, claude-3-sonnet-20240229.

Example .env file:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AIza...

Commands:

# Run with default model (usually gpt-4)
npx riotprompt execute my-prompt

# Run with specific model
npx riotprompt execute my-prompt -m claude-3-opus

# Run with explicit API key (overrides env)
npx riotprompt execute my-prompt -m gpt-4 -k sk-proj-...

# Control parameters
npx riotprompt execute my-prompt -t 0.7 --max-tokens 1000

Configuration

You can configure defaults using a riotprompt.yaml file in your project root:

defaultModel: "gpt-4"
promptsDir: "./prompts"
outputDir: "./output"

Library Usage

You can also use RiotPrompt programmatically in your application.

import { cook, registerTemplates } from 'riotprompt';
import { z } from 'zod';

// Advanced prompt creation
const prompt = await cook({
  basePath: __dirname,
  persona: { content: 'You are a helpful AI assistant' },
  // ...
  // Structured Output with Zod
  schema: z.object({
    summary: z.string(),
    tags: z.array(z.string()),
    confidence: z.number().min(0).max(1)
  })
});

// Register and use templates
registerTemplates({
  'analysis': {
    persona: { content: 'You are an expert analyst' },
    instructions: [{ content: 'Provide detailed analysis' }],
  },
});

// Placeholder for whatever content you want analyzed
const dataToAnalyze = 'Paste or load the content to analyze here';

const analysisPrompt = await cook({
  basePath: __dirname,
  template: 'analysis',
  content: [{ content: dataToAnalyze, title: 'Data' }],
});
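
Because the cook example above attaches a Zod schema for structured output, a natural follow-up is validating whatever the model returns against that same schema. A minimal sketch, assuming you already have the model's reply as a JSON string (the modelReply value below is hypothetical):

import { z } from 'zod';

// The same schema passed to cook() above
const resultSchema = z.object({
  summary: z.string(),
  tags: z.array(z.string()),
  confidence: z.number().min(0).max(1),
});

// Hypothetical raw reply from whichever provider executed the prompt
const modelReply = '{"summary":"Revenue grew 12%","tags":["finance"],"confidence":0.9}';

const parsed = resultSchema.safeParse(JSON.parse(modelReply));
if (parsed.success) {
  console.log(parsed.data.summary, parsed.data.confidence);
} else {
  console.error('Model output did not match the schema:', parsed.error.issues);
}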

Documentation

Full documentation is available at https://tobrien.github.io/riotprompt/.

You can also explore the guides included in the source repository.

Philosophy

RiotPrompt is designed to be completely generic and unopinionated. Unlike other prompt libraries that assume specific use cases, RiotPrompt provides the building blocks for any prompt-based application while maintaining type safety and a good developer experience.

Architecture

  • Cook Function: Core prompt creation engine
  • Template System: Reusable configuration patterns
  • Content Processing: Flexible content handling (files, directories, inline)
  • Override System: Hierarchical customization
  • Type Safety: Full TypeScript support throughout

Contributing

Contributions are welcome! Please see our Contributing Guide for details.

License

Apache-2.0 License - see LICENSE for details.


Build better prompts, faster.