
pit-manager

v0.1.34

Published

Centralized prompt management system for Human Behavior AI agents

Readme

PIT - Prompt Intelligence Tracker


The simplest way to track, version, and optimize your AI prompts

Quick Start · Simplified API · Documentation · Examples

Overview

PIT provides dead-simple prompt tracking with automatic versioning, cost analytics, and chain execution tracking. Built for production AI applications that need visibility into prompt performance.

Installation for Human Behavior Workers

Note: Currently, only Human Behavior workers have access to the Python CLI, which is the recommended version. The TypeScript CLI via npx pit is functional but limited.

Step 1: Clone and Install the Python CLI

# Clone the repository
git clone git@github.com:humanbehavior-gh/pit.git

# Navigate to the directory
cd pit

# Install the Python CLI in development mode
pip install -e .

# Verify installation
pit help
# or
pit docs

Step 2: Configure Environment

Create a .env file in your project directory with your API keys:

# LLM Provider Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...

# PIT Configuration (for online mode)
PIT_REPO_KEY=your-repo-key
PIT_SUPABASE_URL=your-supabase-url
PIT_SUPABASE_KEY=your-supabase-key

Step 3: Install TypeScript Package

# If pit-manager is already in your package.json
pnpm install

# Otherwise, install it directly
pnpm install pit-manager

Step 4: Initialize Your Repository

# Initialize with online backend (recommended)
pit init --online

# This creates a prompts/ folder in your project

Step 5: Add Your First Prompt

Create a prompt template in the prompts/ folder:

# prompts/assistant.md
---
version: 1.0.0
description: General assistant prompt
---

You are a {{role}} assistant specialized in {{domain}}.

Task: {{task}}

Please be {{tone}} in your response.
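Variables are positional: the n-th value you pass fills the n-th `{{placeholder}}` in the template. A minimal sketch of that substitution logic (an illustration of the behavior, not pit-manager's actual implementation):

```typescript
// Illustrative sketch of positional {{placeholder}} substitution.
// This is NOT pit-manager's internal code, just the observable behavior.
function renderTemplate(template: string, variables: string[]): string {
  let i = 0;
  // Each {{word}} is replaced by the next variable, in order.
  return template.replace(/\{\{\s*\w+\s*\}\}/g, () =>
    i < variables.length ? variables[i++] : ""
  );
}

const template =
  "You are a {{role}} assistant specialized in {{domain}}.\n" +
  "Task: {{task}}\n" +
  "Please be {{tone}} in your response.";

const rendered = renderTemplate(template, [
  "helpful AI",
  "data analysis",
  "analyze sales",
  "concise",
]);

console.log(rendered);
// First line reads: "You are a helpful AI assistant specialized in data analysis."
```

Note that the placeholder names themselves are only documentation; ordering is what matters.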

Using the Simplified API with TypeScript

The simplified API lets you track and version your prompts with just a few lines of code.

Basic Usage

import { prompts, model } from 'pit-manager';

// Load and render a prompt template
const prompt = prompts("assistant.md", [
  "helpful AI",      // replaces {{role}}
  "data analysis",   // replaces {{domain}}
  "analyze sales",   // replaces {{task}}
  "concise"         // replaces {{tone}}
]);

// Execute with automatic tracking
const response = await model.complete(
  "gpt-4",           // model name
  prompt,            // rendered prompt
  "analysis-task"    // tag for tracking
);

console.log(response.content);

That's it! Every execution is automatically:

  • Tracked with timing and token usage
  • Versioned in your local repository
  • Synced to the online backend (if configured)
  • Linked in chains when you pass responses between calls

Automatic Chaining

Create multi-step workflows by passing responses between calls:

// Step 1: Analyze data
const analysis = await model.complete(
  "gpt-4",
  prompts("analyze.md", [data]),
  "analyze"
);

// Step 2: Generate summary (automatically chains!)
const summary = await model.complete(
  "claude-3-opus",
  analysis,  // Pass the previous response
  "summarize"
);

// Step 3: Translate (chain continues)
const translation = await model.complete(
  "gemini-pro",
  summary,
  "translate"
);

Structured Output

Get typed responses using native provider capabilities:

// Define your output structure
interface Analysis {
  sentiment: 'positive' | 'negative' | 'neutral';
  confidence: number;
  keywords: string[];
}

// Get structured response
const result = await model.complete(
  "gpt-4",
  "Analyze: PIT is amazing for tracking prompts!",
  "sentiment",
  { 
    schema: {
      type: "object",
      properties: {
        sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
        confidence: { type: "number" },
        keywords: { type: "array", items: { type: "string" } }
      },
      required: ["sentiment", "confidence", "keywords"]
    }
  }
);

// TypeScript knows the shape!
console.log(result.content.sentiment);   // "positive"
console.log(result.content.confidence);  // 0.95
console.log(result.content.keywords);    // ["PIT", "amazing", "tracking", "prompts"]

Multimodal Support

Handle images and other media:

// Analyze an image
const imageAnalysis = await model.complete(
  "gpt-4-vision",
  {
    text: "What's in this image?",
    images: ["path/to/image.png"]
  },
  "image-analysis"
);

// Process Base64 encoded images
const base64Analysis = await model.complete(
  "claude-3-opus",
  {
    text: "Describe this chart",
    images: [`data:image/png;base64,${base64String}`]
  },
  "chart-analysis"
);

API Reference

prompts(template, variables)

Load and render a prompt template:

const prompt = prompts("template.md", ["var1", "var2", "var3"]);
  • template: Name of the template file in the prompts/ folder
  • variables: Array of values that replace the {{placeholders}}, in order

model.complete(model, prompt, tag, options?)

Execute a model with automatic tracking:

const response = await model.complete(
  model: string,           // "gpt-4", "claude-3", "gemini-pro", etc.
  prompt: string | object, // Prompt text or multimodal content
  tag: string,            // Tag for tracking and analytics
  options?: {             // Optional parameters
    schema?: object,      // JSON schema for structured output
    temperature?: number,
    maxTokens?: number,
    // ... other provider-specific options
  }
);

Returns a ModelResponse object:

{
  content: string | object,  // The response content
  model: string,             // Model used
  promptHash: string,        // SHA-256 of prompt
  executionId: string,       // Unique execution ID
  metadata: {
    tag: string,
    provider: string,
    chainId?: string,        // Present if part of a chain
    tokens: {
      prompt: number,
      completion: number,
      total: number
    },
    latencyMs: number,
    structured: boolean
  }
}
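The shape above can be expressed directly as a TypeScript interface, which is convenient when passing responses around your own code. The interface mirrors the documented fields; the `isChained` helper is an illustrative addition, not part of the library:

```typescript
// TypeScript mirror of the ModelResponse shape documented above.
interface ModelResponse {
  content: string | object;  // the response content
  model: string;             // model used
  promptHash: string;        // SHA-256 of prompt
  executionId: string;       // unique execution ID
  metadata: {
    tag: string;
    provider: string;
    chainId?: string;        // present only when part of a chain
    tokens: { prompt: number; completion: number; total: number };
    latencyMs: number;
    structured: boolean;
  };
}

// Illustrative helper (not a library export): was this execution
// part of a multi-step chain?
function isChained(r: ModelResponse): boolean {
  return r.metadata.chainId !== undefined;
}
```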

Viewing Your Data

CLI Commands

# View execution history
pit log

# Show execution analytics
pit analytics summary

# Launch interactive dashboard
pit dashboard

# View token usage
pit analytics tokens --days 7

# Track costs by model
pit analytics cost --group-by model

Web Dashboard

# Start the web dashboard
pit dashboard --web

# Access at http://localhost:3000

Complete Example: Research Pipeline

import { prompts, model } from 'pit-manager';

async function researchPipeline(topic: string) {
  // Step 1: Generate research questions
  const questions = await model.complete(
    "gpt-4",
    prompts("research/questions.md", [topic]),
    "generate-questions",
    {
      schema: {
        type: "object",
        properties: {
          questions: { 
            type: "array", 
            items: { type: "string" } 
          }
        }
      }
    }
  );

  // Step 2: Research each question (parallel execution)
  const research = await Promise.all(
    questions.content.questions.map(q =>
      model.complete(
        "claude-3-opus",
        prompts("research/investigate.md", [q]),
        "research"
      )
    )
  );

  // Step 3: Synthesize findings
  const synthesis = await model.complete(
    "gpt-4",
    research.map(r => r.content).join("\n\n"),
    "synthesize"
  );

  // Step 4: Generate final report
  const report = await model.complete(
    "gpt-4",
    synthesis,
    "final-report",
    {
      schema: {
        type: "object",
        properties: {
          title: { type: "string" },
          summary: { type: "string" },
          findings: { 
            type: "array",
            items: {
              type: "object",
              properties: {
                finding: { type: "string" },
                confidence: { type: "string" },
                evidence: { type: "string" }
              }
            }
          },
          recommendations: {
            type: "array",
            items: { type: "string" }
          }
        }
      }
    }
  );

  return report.content;
}

// Run the pipeline
const findings = await researchPipeline("AI safety");
console.log(findings);

Advanced Features

Branching for Experiments

# Create a new branch for experimentation
pit branch experiment/new-prompts
pit checkout experiment/new-prompts

# Edit your prompts and test
# ... make changes ...

# Merge back when satisfied
pit checkout main
pit merge experiment/new-prompts

Template Management

# List all templates
pit templates list

# Show template details
pit templates show assistant.md

# Compare template versions
pit diff prompts/assistant.md HEAD~1

Cost Optimization

# Analyze costs by tag
pit analytics cost --group-by tag --days 30

# Find expensive prompts
pit analytics expensive --limit 10

# Compare model costs
pit analytics compare gpt-4 claude-3-opus

Repository Structure

After initialization, your project will have:

your-project/
├── prompts/              # Your prompt templates
│   ├── assistant.md
│   ├── analyzer.md
│   └── summarizer.md
├── .pit/                 # PIT repository (auto-managed)
│   ├── config.json      # Repository configuration
│   ├── HEAD             # Current branch reference
│   └── objects/         # Content-addressed storage
├── .env                 # Your API keys
└── package.json         # Your project config

Best Practices

  1. Use descriptive tags: Tags are your primary way to filter and analyze executions
  2. Version your prompts: Commit prompt changes with meaningful messages
  3. Chain related calls: Pass responses between calls to maintain context
  4. Use structured output: Get typed, validated responses when possible
  5. Monitor costs: Regularly check analytics to optimize spending
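For point 5, the token counts in each response's metadata are enough to roll up a spend estimate yourself. A hedged sketch — the per-token rates below are made-up example numbers, not real provider pricing, and `estimateCostUSD` is not a library function:

```typescript
// Illustrative cost estimate from token usage. The rates here are
// HYPOTHETICAL example numbers, not actual provider pricing.
const EXAMPLE_RATES: Record<string, { prompt: number; completion: number }> = {
  // USD per 1K tokens (made up for illustration)
  "gpt-4": { prompt: 0.03, completion: 0.06 },
  "claude-3-opus": { prompt: 0.015, completion: 0.075 },
};

function estimateCostUSD(
  model: string,
  promptTokens: number,
  completionTokens: number
): number {
  const rate = EXAMPLE_RATES[model];
  if (!rate) throw new Error(`No example rate for model: ${model}`);
  return (
    (promptTokens / 1000) * rate.prompt +
    (completionTokens / 1000) * rate.completion
  );
}
```

In practice you would feed this the `metadata.tokens` fields from each response, or skip the hand-rolling entirely and use `pit analytics cost`.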

Environment Variables

Required for online mode:

# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...

# PIT Backend
PIT_REPO_KEY=your-repo-key
PIT_SUPABASE_URL=https://your-project.supabase.co
PIT_SUPABASE_KEY=your-supabase-anon-key

Testing

Running Unit Tests

# Run all unit tests
npm test

# Run specific test suite
npm test -- --testPathPattern=storage
npm test -- --testPathPattern=chains
npm test -- --testPathPattern=versioning

Running End-to-End Tests

The complete end-to-end test validates the entire PIT system including:

  • Repository initialization
  • Prompt template management
  • Model execution with chain tracking
  • Storage and versioning operations
  • Database persistence

# Run the complete end-to-end test
./test-e2e-complete.sh

# The test will:
# 1. Create a temporary test directory
# 2. Initialize a PIT repository
# 3. Test prompt templates and model execution
# 4. Verify chain tracking and storage
# 5. Clean up after completion

For integration testing with real LLM providers:

# Set your API keys first
export OPENAI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"

# Run integration tests
tsx test/integration/test_simplified_api.ts
tsx test/integration/test-typescript-workflow.ts

Troubleshooting

Common Issues

"pit: command not found"

# Ensure you installed with pip install -e .
# Check your PATH includes Python scripts
echo $PATH | grep -i python

"Cannot find module 'pit-manager'"

# Ensure you ran pnpm install
pnpm install pit-manager

"No prompts folder found"

# Initialize your repository
pit init --online

Support

License

MIT License - see LICENSE for details.