

PromptForge

Adaptive Prompt Intelligence & Orchestration SDK


PromptForge is a production-ready TypeScript SDK and CLI toolkit that helps developers design, optimize, version, evaluate, and serve prompts for Large Language Models (LLMs) intelligently. It combines prompt engineering best practices with reinforcement learning feedback loops and embedding-based optimization.

🎯 Vision

PromptForge acts as the core prompt brain for any AI application — managing the full lifecycle of a prompt from creation → versioning → optimization → evaluation → deployment. It becomes an intelligent layer between your app and the LLMs, learning over time which prompts work best for each task and dynamically improving them based on usage and outcomes.

Think of it as: LangChain + PromptLayer + ML feedback system, unified into one SDK.

✨ Core Features

🔄 Prompt Versioning & Registry

  • Maintain a centralized registry of all prompts with complete metadata
  • Semantic diff for prompt versions (compare embeddings)
  • Version pinning for reproducibility
  • Full audit trail with ownership and tags

📝 Prompt Templates with Variables

  • Variable interpolation with {{user_input}}, {{context}}, etc.
  • Dynamic runtime context injection
  • JSON/YAML-based template storage
  • System prompts and few-shot examples

📊 Prompt Performance Tracking

  • Each execution logged with metadata: model, latency, tokens, cost, feedback
  • Store all prompt results for analytics
  • Aggregate metrics and percentile calculations (see the sketch after this list)
  • Per-provider performance benchmarking
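
The aggregate metrics above are plain math over the logged execution records. Below is a minimal sketch of a nearest-rank percentile helper, assuming a record shape like the metrics the SDK returns (model, latencyMs, cost); the sample values are made up and this is not the SDK's internal implementation:

interface ExecutionRecord {
  model: string;
  latencyMs: number;
  cost: number;
}

// Sample rows standing in for entries pulled from the execution log.
const executions: ExecutionRecord[] = [
  { model: 'gpt-4o-mini', latencyMs: 420, cost: 0.0008 },
  { model: 'gpt-4o-mini', latencyMs: 610, cost: 0.0011 },
  { model: 'claude-3-haiku', latencyMs: 380, cost: 0.0006 },
];

// Nearest-rank percentile over a numeric series.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, Math.min(sorted.length - 1, rank))];
}

const latencies = executions.map((e) => e.latencyMs);
console.log('p50 latency (ms):', percentile(latencies, 50));
console.log('p95 latency (ms):', percentile(latencies, 95));
console.log('total cost (USD):', executions.reduce((sum, e) => sum + e.cost, 0));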

🔁 Feedback & Reinforcement System

  • Collect user or system feedback (thumbs up/down, quality scores, reward signals)
  • Use reinforcement learning principles to update prompt scoring
  • Auto-promote top-performing prompts and deprecate weak ones
  • Composite scoring (sketched after this list): Score = α * UserFeedback + β * PerformanceMetric + γ * CostEfficiency
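
The composite score is easy to reproduce outside the SDK. Here is a minimal sketch of the formula above; the α/β/γ weights and the normalization of each term to the 0 to 1 range are illustrative choices, not PromptForge defaults:

interface PromptSignals {
  userFeedback: number;      // normalized 0 to 1 (e.g. share of thumbs-up)
  performanceMetric: number; // normalized 0 to 1 (e.g. evaluation accuracy)
  costEfficiency: number;    // normalized 0 to 1 (higher means cheaper per useful output)
}

// Score = α * UserFeedback + β * PerformanceMetric + γ * CostEfficiency
// The default weights here are illustrative, not PromptForge defaults.
function compositeScore(s: PromptSignals, alpha = 0.5, beta = 0.3, gamma = 0.2): number {
  return alpha * s.userFeedback + beta * s.performanceMetric + gamma * s.costEfficiency;
}

console.log(compositeScore({ userFeedback: 0.9, performanceMetric: 0.82, costEfficiency: 0.7 }));
// 0.836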

🧠 Embedding-Based Optimization

  • Generate embeddings for all prompts and responses
  • Semantic similarity and clustering
  • Suggest prompt rewrites using LLM self-evaluation
  • Identify redundant or overlapping prompts (see the sketch after this list)
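
Redundancy detection boils down to pairwise cosine similarity over prompt embeddings. The provider-agnostic sketch below assumes the embeddings come from whatever embedding model you configure, and the 0.95 threshold is illustrative:

interface EmbeddedPrompt {
  name: string;
  embedding: number[];
}

// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Flag prompt pairs whose embeddings are nearly identical.
function findRedundantPairs(prompts: EmbeddedPrompt[], threshold = 0.95): [string, string][] {
  const pairs: [string, string][] = [];
  for (let i = 0; i < prompts.length; i++) {
    for (let j = i + 1; j < prompts.length; j++) {
      if (cosineSimilarity(prompts[i].embedding, prompts[j].embedding) >= threshold) {
        pairs.push([prompts[i].name, prompts[j].name]);
      }
    }
  }
  return pairs;
}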

🔌 Cross-Model Compatibility

  • Supported Providers: OpenAI, Anthropic, Google (Gemini), Mistral, Ollama
  • Automatic Fallback: If one provider fails, automatically fall back to a backup provider (see the sketch after this list)
  • Cost Optimization: Choose models based on quality/cost tradeoffs
  • Local Models: Full support for self-hosted models via Ollama
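
Fallback is configured once on the client via fallbackProviders (see the SDK example below) and applied automatically. Purely as an illustration of the idea, a hand-rolled version might look like the following; the provider order, the omission of an explicit model, and the error handling are assumptions, not documented behavior:

import { PromptForge, LLMProvider } from 'promptforge';

// Illustrative only: in practice, fallbackProviders on the PromptForge client does this for you.
async function executeWithFallback(
  forge: PromptForge,
  promptName: string,
  input: Record<string, unknown>,
) {
  const providers = [LLMProvider.OPENAI, LLMProvider.ANTHROPIC, LLMProvider.GOOGLE];
  for (const provider of providers) {
    try {
      // Assumes llmConfig accepts a provider without an explicit model; adjust if it does not.
      return await forge.executePrompt({ promptName, input, llmConfig: { provider } });
    } catch (error) {
      console.warn(`Provider ${provider} failed, trying the next one`, error);
    }
  }
  throw new Error(`All providers failed for prompt "${promptName}"`);
}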

🎯 Prompt Evaluation Harness

  • Benchmark prompts on predefined datasets or user-defined examples
  • Metrics: accuracy, consistency, coherence, relevance, semantic similarity
  • Auto-report leaderboard of best-performing prompt+model pairs
  • A/B testing between prompt versions

⚡ Semantic Prompt Caching

  • Cache previous results using embeddings (semantic caching)
  • Reuse LLM responses when a query is similar enough (saves cost & latency)
  • Configurable similarity threshold and TTL (see the sketch after this list)
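
Conceptually, the cache keys on an embedding of the incoming query instead of its exact text. Here is a minimal sketch of the lookup step, reusing the cosineSimilarity helper from the embedding sketch above; the 0.92 threshold and one-hour TTL are illustrative, not PromptForge defaults:

interface CacheEntry {
  embedding: number[];
  response: string;
  createdAt: number; // epoch milliseconds
}

// Return a cached response if a stored query is similar enough and still within the TTL.
function semanticLookup(
  queryEmbedding: number[],
  entries: CacheEntry[],
  similarityThreshold = 0.92,
  ttlMs = 60 * 60 * 1000,
): string | null {
  const now = Date.now();
  for (const entry of entries) {
    const fresh = now - entry.createdAt < ttlMs;
    if (fresh && cosineSimilarity(queryEmbedding, entry.embedding) >= similarityThreshold) {
      return entry.response;
    }
  }
  return null;
}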

🔐 Security & Compliance

  • Secure key management
  • PII filtering and redaction (see the sketch after this list)
  • Complete audit logs for traceability
  • Role-based access control ready
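
PII filtering typically runs on inputs before they ever reach a provider. Below is a deliberately tiny, regex-based sketch of the idea; real redaction needs broader patterns and locale awareness, and this is not PromptForge's implementation:

// Very simplified PII redaction applied to prompt inputs before execution.
function redactPII(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[REDACTED_EMAIL]')
    .replace(/\+?\d[\d\s().-]{7,}\d/g, '[REDACTED_PHONE]');
}

console.log(redactPII('Contact jane.doe@example.com or +1 (555) 123-4567 for details.'));
// Contact [REDACTED_EMAIL] or [REDACTED_PHONE] for details.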

🚀 Quick Start

Installation

# Install globally
npm install -g promptforge

# Or use npx
npx promptforge init my-project

Initialize Project

forge init my-project
cd my-project

# Configure API keys in .env
code .env

Create Your First Prompt

# prompts/greeting.yaml
name: greeting
description: A friendly greeting prompt
variables:
  - name
  - language
content: |
  Hello {{name}}! Welcome to PromptForge.
  Please greet me in {{language}}.
systemPrompt: You are a friendly assistant that provides warm greetings.

Push Prompt to Registry

forge push greeting --template ./prompts/greeting.yaml

Execute Prompt

forge execute greeting --input '{"name": "Alice", "language": "Spanish"}'

List All Prompts

forge list
forge stats

📚 SDK Usage

Basic Example

import { PromptForge, LLMProvider, FeedbackType } from 'promptforge';

// Initialize
const forge = new PromptForge({
  projectName: 'my-app',
  database: {
    type: 'postgresql',
    url: process.env.DATABASE_URL,
  },
  defaultProvider: LLMProvider.OPENAI,
  fallbackProviders: [LLMProvider.ANTHROPIC, LLMProvider.GOOGLE],
});

// Create a prompt
const prompt = await forge.createPrompt({
  name: 'summarize',
  content: 'Summarize the following text: {{text}}',
  owner: 'team-ai',
  tags: ['summarization', 'production'],
  template: {
    variables: ['text'],
  },
});

// Execute prompt
const result = await forge.executePrompt({
  promptName: 'summarize',
  input: {
    text: 'Long article content...',
  },
  llmConfig: {
    provider: LLMProvider.OPENAI,
    model: 'gpt-4o-mini',
    temperature: 0.3,
  },
});

console.log(result.output);
console.log('Cost:', result.metrics.cost);
console.log('Latency:', result.metrics.latencyMs);

// Track feedback
await forge.trackFeedback({
  executionId: result.id,
  promptId: prompt.id,
  type: FeedbackType.THUMBS_UP,
  score: 0.9,
  comment: 'Great summary!',
});

Evaluation Example

// Evaluate prompt against a dataset
const evaluation = await forge.evaluatePrompt({
  promptId: prompt.id,
  examples: [
    {
      input: { text: 'Sample article 1...' },
      expectedOutput: 'Expected summary 1',
    },
    {
      input: { text: 'Sample article 2...' },
      expectedOutput: 'Expected summary 2',
    },
  ],
});

console.log('Overall Score:', evaluation.overallScore);
console.log('Accuracy:', evaluation.metrics.accuracy);
console.log('Consistency:', evaluation.metrics.consistency);

Template Engine Example

import { TemplateEngine } from 'promptforge';

const engine = new TemplateEngine();

const template = {
  id: '...',
  promptId: '...',
  content: 'Translate "{{text}}" to {{language}}',
  variables: ['text', 'language'],
  format: 'text',
};

const rendered = engine.render(template, {
  text: 'Hello world',
  language: 'French',
});

console.log(rendered); // Translate "Hello world" to French

🏗️ Architecture

promptforge/
├── src/
│   ├── core/
│   │   ├── forge.ts           # Main SDK class
│   │   ├── registry.ts        # Prompt versioning & storage
│   │   ├── template-engine.ts # Template parsing & rendering
│   │   ├── metrics.ts         # Performance tracking
│   │   ├── llm-adapters.ts    # Multi-provider integration
│   │   ├── cache.ts           # Semantic caching
│   │   ├── feedback.ts        # Feedback & scoring
│   │   └── evaluation.ts      # Evaluation engine
│   ├── cli/
│   │   └── index.ts           # CLI commands
│   ├── utils/
│   │   ├── logger.ts          # Logging utility
│   │   └── config-loader.ts   # Configuration management
│   └── types.ts               # TypeScript types & schemas
├── tests/
├── examples/
├── docs/
└── migrations/

📖 CLI Commands

| Command | Description |
|---------|-------------|
| forge init [name] | Initialize a new PromptForge project |
| forge push <name> | Create or update a prompt |
| forge list | List all prompts in registry |
| forge execute <name> | Execute a prompt with inputs |
| forge eval <name> | Evaluate prompt performance |
| forge optimize | Optimize prompts based on feedback |
| forge stats | Show registry statistics |

🎯 Use Cases

  • Customer Support Bots: Version and optimize support prompts based on satisfaction scores
  • Content Generation: A/B test different prompt variations for blog posts or marketing copy
  • RAG Systems: Manage and version retrieval-augmented generation prompts
  • Multi-Agent Systems: Coordinate prompts across multiple AI agents
  • Enterprise LLM Ops: Centralized prompt management for teams with audit trails

🔧 Configuration

Environment Variables

# LLM Provider API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
MISTRAL_API_KEY=...

# Database
DATABASE_URL=postgresql://localhost:5432/promptforge
REDIS_URL=redis://localhost:6379

# Vector Database (Pinecone)
PINECONE_API_KEY=...
PINECONE_ENVIRONMENT=us-west1-gcp
PINECONE_INDEX_NAME=promptforge-embeddings

# Features
ENABLE_SEMANTIC_CACHE=true
ENABLE_AUTO_OPTIMIZATION=true

Configuration File (promptforge.json)

{
  "projectName": "my-app",
  "version": "1.0.0",
  "defaultProvider": "openai",
  "fallbackProviders": ["anthropic", "google"],
  "optimization": {
    "enabled": true,
    "autoPromote": false,
    "scoreThreshold": 0.8
  },
  "telemetry": {
    "enabled": true
  }
}
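
The SDK loads this file through its own config loader (src/utils/config-loader.ts in the layout above), so you normally never parse it yourself. If you do need the values in a script, a plain Node read works; the PromptForgeConfig interface below simply mirrors the JSON fields and is an assumption, not a type exported by the package:

import { readFileSync } from 'node:fs';

// Shape mirrors promptforge.json above; assumed for illustration, not exported by the SDK.
interface PromptForgeConfig {
  projectName: string;
  version: string;
  defaultProvider: string;
  fallbackProviders: string[];
  optimization: { enabled: boolean; autoPromote: boolean; scoreThreshold: number };
  telemetry: { enabled: boolean };
}

const config: PromptForgeConfig = JSON.parse(readFileSync('promptforge.json', 'utf8'));
console.log(`${config.projectName}: default provider is ${config.defaultProvider}`);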

🧪 Testing

npm test
npm run test:coverage

📦 Building

npm run build
npm run watch

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

📄 License

MIT © Yash Gupta

🔗 Links

  • GitHub: https://github.com/gyash1512/PromptForge
  • npm: https://www.npmjs.com/package/promptforge
  • Documentation: Full API Docs
  • Examples: Example Projects

🙏 Acknowledgments

Built with ❤️ by Yash Gupta

Inspired by LangChain, PromptLayer, and the amazing AI community.


Ready to forge better prompts? ⚒️

npx promptforge init my-project