pit-manager
v0.1.34
Centralized prompt management system for Human Behavior AI agents
PIT - Prompt Intelligence Tracker
The simplest way to track, version, and optimize your AI prompts
Quick Start • Simplified API • Documentation • Examples
Overview
PIT provides dead-simple prompt tracking with automatic versioning, cost analytics, and chain execution tracking. Built for production AI applications that need visibility into prompt performance.
Installation for Human Behavior Workers
Note: Currently, only Human Behavior workers have access to the Python CLI, which is the recommended version. The TypeScript CLI via npx pit is functional but limited.
Step 1: Clone and Install the Python CLI
# Clone the repository
git clone git@github.com:humanbehavior-gh/pit.git
# Navigate to the directory
cd pit
# Install the Python CLI in development mode
pip install -e .
# Verify installation
pit help
# or
pit docs
Step 2: Configure Environment
Create a .env file in your project directory with your API keys:
# LLM Provider Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
# PIT Configuration (for online mode)
PIT_REPO_KEY=your-repo-key
PIT_SUPABASE_URL=your-supabase-url
PIT_SUPABASE_KEY=your-supabase-key
Step 3: Install TypeScript Package
# If pit-manager is already in your package.json
pnpm install
# Otherwise, install it directly
pnpm install pit-manager
Step 4: Initialize Your Repository
# Initialize with online backend (recommended)
pit init --online
# This creates a prompts/ folder in your project
Step 5: Add Your First Prompt
Create a prompt template in the prompts/ folder:
# prompts/assistant.md
---
version: 1.0.0
description: General assistant prompt
---
You are a {{role}} assistant specialized in {{domain}}.
Task: {{task}}
Please be {{tone}} in your response.
Using the Simplified API with TypeScript
The simplified API lets you track and version your prompts with just a few lines of code.
Basic Usage
import { prompts, model } from 'pit-manager';
// Load and render a prompt template
const prompt = prompts("assistant.md", [
"helpful AI", // replaces {{role}}
"data analysis", // replaces {{domain}}
"analyze sales", // replaces {{task}}
"concise" // replaces {{tone}}
]);
// Execute with automatic tracking
const response = await model.complete(
"gpt-4", // model name
prompt, // rendered prompt
"analysis-task" // tag for tracking
);
console.log(response.content);
That's it! Every execution is automatically:
- Tracked with timing and token usage
- Versioned in your local repository
- Synced to the online backend (if configured)
- Linked in chains when you pass responses between calls
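Each execution is keyed by a hash of the rendered prompt (the promptHash field documented in the API reference below). As a rough illustration of how such content addressing could work — this is a sketch using Node's built-in crypto module, not PIT's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch: identical rendered prompts map to the same key,
// so repeated executions of the same prompt can be grouped in analytics.
function promptHash(renderedPrompt: string): string {
  return createHash("sha256").update(renderedPrompt, "utf8").digest("hex");
}

const a = promptHash("You are a helpful AI assistant.");
const b = promptHash("You are a helpful AI assistant.");
console.log(a === b); // true: same prompt, same hash
```

Because the key depends only on content, any edit to a template produces a new hash, which is what makes automatic versioning possible without manual bookkeeping.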
Automatic Chaining
Create multi-step workflows by passing responses between calls:
// Step 1: Analyze data
const analysis = await model.complete(
"gpt-4",
prompts("analyze.md", [data]),
"analyze"
);
// Step 2: Generate summary (automatically chains!)
const summary = await model.complete(
"claude-3-opus",
analysis, // Pass the previous response
"summarize"
);
// Step 3: Translate (chain continues)
const translation = await model.complete(
"gemini-pro",
summary,
"translate"
);
Structured Output
Get typed responses using native provider capabilities:
// Define your output structure
interface Analysis {
sentiment: 'positive' | 'negative' | 'neutral';
confidence: number;
keywords: string[];
}
// Get structured response
const result = await model.complete(
"gpt-4",
"Analyze: PIT is amazing for tracking prompts!",
"sentiment",
{
schema: {
type: "object",
properties: {
sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
confidence: { type: "number" },
keywords: { type: "array", items: { type: "string" } }
},
required: ["sentiment", "confidence", "keywords"]
}
}
);
// TypeScript knows the shape!
console.log(result.content.sentiment); // "positive"
console.log(result.content.confidence); // 0.95
console.log(result.content.keywords); // ["PIT", "amazing", "tracking", "prompts"]
Multimodal Support
Handle images and other media:
// Analyze an image
const imageAnalysis = await model.complete(
"gpt-4-vision",
{
text: "What's in this image?",
images: ["path/to/image.png"]
},
"image-analysis"
);
// Process Base64 encoded images
const base64Analysis = await model.complete(
"claude-3-opus",
{
text: "Describe this chart",
images: [`data:image/png;base64,${base64String}`]
},
"chart-analysis"
);
API Reference
prompts(template, variables)
Load and render a prompt template:
const prompt = prompts("template.md", ["var1", "var2", "var3"]);
- template: Name of the template file in the prompts/ folder
- variables: Array of values that replace the {{placeholders}} in order
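PIT's renderer is not published in this README, but the positional substitution it describes can be sketched as follows (illustrative only — the function name and the exact placeholder-matching rules are assumptions):

```typescript
// Sketch of positional placeholder substitution: each value in the array
// replaces the next {{placeholder}} encountered in the template, in order.
function renderTemplate(template: string, variables: string[]): string {
  let i = 0;
  return template.replace(/\{\{\s*\w+\s*\}\}/g, (match) =>
    i < variables.length ? variables[i++] : match
  );
}

const rendered = renderTemplate(
  "You are a {{role}} assistant specialized in {{domain}}.",
  ["helpful AI", "data analysis"]
);
console.log(rendered); // You are a helpful AI assistant specialized in data analysis.
```

Note that substitution is by position, not by name: the placeholder names in the template are documentation for humans, while the order of the array determines which value lands where.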
model.complete(model, prompt, tag, options?)
Execute a model with automatic tracking:
const response = await model.complete(
model: string, // "gpt-4", "claude-3", "gemini-pro", etc.
prompt: string | object, // Prompt text or multimodal content
tag: string, // Tag for tracking and analytics
options?: { // Optional parameters
schema?: object, // JSON schema for structured output
temperature?: number,
maxTokens?: number,
// ... other provider-specific options
}
);
Returns a ModelResponse object:
{
content: string | object, // The response content
model: string, // Model used
promptHash: string, // SHA-256 of prompt
executionId: string, // Unique execution ID
metadata: {
tag: string,
provider: string,
chainId?: string, // Present if part of a chain
tokens: {
prompt: number,
completion: number,
total: number
},
latencyMs: number,
structured: boolean
}
}
Viewing Your Data
CLI Commands
# View execution history
pit log
# Show execution analytics
pit analytics summary
# Launch interactive dashboard
pit dashboard
# View token usage
pit analytics tokens --days 7
# Track costs by model
pit analytics cost --group-by model
Web Dashboard
# Start the web dashboard
pit dashboard --web
# Access at http://localhost:3000
Complete Example: Research Pipeline
import { prompts, model } from 'pit-manager';
async function researchPipeline(topic: string) {
// Step 1: Generate research questions
const questions = await model.complete(
"gpt-4",
prompts("research/questions.md", [topic]),
"generate-questions",
{
schema: {
type: "object",
properties: {
questions: {
type: "array",
items: { type: "string" }
}
}
}
}
);
// Step 2: Research each question (parallel execution)
const research = await Promise.all(
questions.content.questions.map(q =>
model.complete(
"claude-3-opus",
prompts("research/investigate.md", [q]),
"research"
)
)
);
// Step 3: Synthesize findings
const synthesis = await model.complete(
"gpt-4",
research.map(r => r.content).join("\n\n"),
"synthesize"
);
// Step 4: Generate final report
const report = await model.complete(
"gpt-4",
synthesis,
"final-report",
{
schema: {
type: "object",
properties: {
title: { type: "string" },
summary: { type: "string" },
findings: {
type: "array",
items: {
type: "object",
properties: {
finding: { type: "string" },
confidence: { type: "string" },
evidence: { type: "string" }
}
}
},
recommendations: {
type: "array",
items: { type: "string" }
}
}
}
}
);
return report.content;
}
// Run the pipeline
const findings = await researchPipeline("AI safety");
console.log(findings);
Advanced Features
Branching for Experiments
# Create a new branch for experimentation
pit branch experiment/new-prompts
pit checkout experiment/new-prompts
# Edit your prompts and test
# ... make changes ...
# Merge back when satisfied
pit checkout main
pit merge experiment/new-prompts
Template Management
# List all templates
pit templates list
# Show template details
pit templates show assistant.md
# Compare template versions
pit diff prompts/assistant.md HEAD~1
Cost Optimization
# Analyze costs by tag
pit analytics cost --group-by tag --days 30
# Find expensive prompts
pit analytics expensive --limit 10
# Compare model costs
pit analytics compare gpt-4 claude-3-opus
Repository Structure
After initialization, your project will have:
your-project/
├── prompts/              # Your prompt templates
│   ├── assistant.md
│   ├── analyzer.md
│   └── summarizer.md
├── .pit/                 # PIT repository (auto-managed)
│   ├── config.json       # Repository configuration
│   ├── HEAD              # Current branch reference
│   └── objects/          # Content-addressed storage
├── .env                  # Your API keys
└── package.json          # Your project config
Best Practices
- Use descriptive tags: Tags are your primary way to filter and analyze executions
- Version your prompts: Commit prompt changes with meaningful messages
- Chain related calls: Pass responses between calls to maintain context
- Use structured output: Get typed, validated responses when possible
- Monitor costs: Regularly check analytics to optimize spending
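To make the cost-monitoring advice concrete, a back-of-the-envelope estimate can be computed from the token counts in each response. The per-1K-token prices below are placeholder assumptions for illustration, not PIT's actual rates, and the token-usage shape matches the ModelResponse metadata documented above:

```typescript
// Hypothetical per-1K-token prices in USD (assumptions, not real rates).
const PRICE_PER_1K: Record<string, { prompt: number; completion: number }> = {
  "gpt-4": { prompt: 0.03, completion: 0.06 },
  "claude-3-opus": { prompt: 0.015, completion: 0.075 },
};

interface TokenUsage { prompt: number; completion: number; total: number }

// Estimate the USD cost of one execution from its token usage.
function estimateCost(model: string, tokens: TokenUsage): number {
  const price = PRICE_PER_1K[model];
  if (!price) return 0; // unknown model: no estimate available
  return (tokens.prompt / 1000) * price.prompt +
         (tokens.completion / 1000) * price.completion;
}

console.log(estimateCost("gpt-4", { prompt: 1000, completion: 500, total: 1500 }));
```

Summing such estimates grouped by tag or model reproduces, in miniature, what pit analytics cost reports from the tracked execution history.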
Environment Variables
Required for online mode:
# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
# PIT Backend
PIT_REPO_KEY=your-repo-key
PIT_SUPABASE_URL=https://your-project.supabase.co
PIT_SUPABASE_KEY=your-supabase-anon-key
Testing
Running Unit Tests
# Run all unit tests
npm test
# Run specific test suite
npm test -- --testPathPattern=storage
npm test -- --testPathPattern=chains
npm test -- --testPathPattern=versioning
Running End-to-End Tests
The complete end-to-end test validates the entire PIT system including:
- Repository initialization
- Prompt template management
- Model execution with chain tracking
- Storage and versioning operations
- Database persistence
# Run the complete end-to-end test
./test-e2e-complete.sh
# The test will:
# 1. Create a temporary test directory
# 2. Initialize a PIT repository
# 3. Test prompt templates and model execution
# 4. Verify chain tracking and storage
# 5. Clean up after completion
For integration testing with real LLM providers:
# Set your API keys first
export OPENAI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
# Run integration tests
tsx test/integration/test_simplified_api.ts
tsx test/integration/test-typescript-workflow.ts
Troubleshooting
Common Issues
"pit: command not found"
# Ensure you installed with pip install -e .
# Check your PATH includes Python scripts
echo $PATH | grep -i python
"Cannot find module 'pit-manager'"
# Ensure you ran pnpm install
pnpm install pit-manager
"No prompts folder found"
# Initialize your repository
pit init --online
Support
License
MIT License - see LICENSE for details.
