PromptMetrics SDK
The platform built for prompt engineers
PromptMetrics is a platform for managing, versioning, and monitoring your LLM prompts. Track every prompt execution, analyze costs, and debug AI workflows with built-in tracing.
This repo contains the official TypeScript/JavaScript SDK for PromptMetrics.
Table of Contents
- Installation
- Quick Start
- Configuration
- Core Features
- Tracing & Observability
- API Reference
- Examples
- Support
Installation
npm install @promptmetrics/sdk
Or with yarn:
yarn add @promptmetrics/sdk
Quick Start
import { PromptMetrics } from "@promptmetrics/sdk";
// Initialize the client
const pm = new PromptMetrics({
apiKey: process.env.PROMPTMETRICS_API_KEY, // pm_xxxxx
});
// Run a template directly
const result = await pm.templates.run("customer-support", {
variables: {
customer_name: "John Doe",
issue: "billing problem",
},
});
console.log(result.response_object);
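All SDK calls are asynchronous and reject on network or API failures. A minimal defensive sketch (the exact shape of the thrown error is an assumption; check the API docs for the concrete error type):
try {
  const result = await pm.templates.run("customer-support", {
    variables: { customer_name: "John Doe", issue: "billing problem" },
  });
  console.log(result.response_object);
} catch (err) {
  // Auth failures, timeouts, and API errors all surface here
  console.error("Template run failed:", err);
}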
Configuration
Basic Configuration
const pm = new PromptMetrics({
apiKey: "pm_xxxxx", // Required: Your workspace API key
});
Advanced Configuration
const pm = new PromptMetrics({
apiKey: "pm_xxxxx", // Required: Your workspace API key
timeout: 30000, // Optional: Request timeout in ms (default: 30000)
maxRetries: 3, // Optional: Max retry attempts (default: 3)
debug: false, // Optional: Enable debug logging (default: false)
});
Environment Variables
# Optional (can also be passed in config)
PROMPTMETRICS_API_KEY=pm_xxxxx
Core Features
Templates
Templates are containers for your prompt versions. Each template can have multiple versions with different configurations.
Get a Template
// By name
const template = await pm.templates.get("customer-support");
// By ID
const template = await pm.templates.get("template_id_123");Get Specific Version
// Get production version (default)
const template = await pm.templates.get("customer-support");
// Get specific environment
const template = await pm.templates.get("customer-support", {
env_label: "staging",
});
// Get specific version number
const template = await pm.templates.get("customer-support", {
version: 3,
});
Run a Template
// Run template directly (uses production version with canary support)
const result = await pm.templates.run("customer-support", {
variables: {
customer_name: "John Doe",
issue: "billing problem",
},
});
// Run with environment label
const result = await pm.templates.run("customer-support", {
env_label: "staging",
variables: { customer_name: "John" },
});
// Run with model override
const result = await pm.templates.run("customer-support", {
variables: { customer_name: "John" },
model: "gpt-4",
pm_tags: ["production", "customer-support"],
});
List Templates
const result = await pm.templates.list({
page: 1,
limit: 20,
search: "support", // Optional: Search by name
});
console.log(result.templates);
console.log(result.pagination);
Versions
Versions represent specific configurations of a template (messages, model, parameters).
Get a Version
const version = await pm.versions.get("version_id_123");
Run a Version
// Basic usage
const result = await pm.versions.run("version_id_123", {
variables: {
customer_name: "John Doe",
issue: "billing problem",
},
});
// With custom parameters
const result = await pm.versions.run("version_id_123", {
variables: { input: "Hello" },
parameters: {
temperature: 0.7,
max_tokens: 500,
},
});
// With custom model
const result = await pm.versions.run("version_id_123", {
variables: { input: "Hello" },
model: "gpt-4",
});
Update Version Metadata
await pm.versions.update("version_id_123", {
metadata: {
department: "customer_support",
priority: "high",
},
});
Update Environment Label
// Promote to production
await pm.versions.update("version_id_123", {
env_label: "production",
});
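A typical promotion flow combines templates.get with versions.update; a sketch assuming the returned template exposes its versions array (as used in the Examples section below):
// Look up the current staging version and promote it to production
const staging = await pm.templates.get("customer-support", { env_label: "staging" });
await pm.versions.update(staging.versions[0]._id, { env_label: "production" });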
Prompt Logs
Prompt logs capture every execution of a template version, including inputs, outputs, costs, and performance metrics.
List Logs
const result = await pm.logs.list({
page: 1,
limit: 50,
template_id: "template_123", // Optional: Filter by template
status: "SUCCESS", // Optional: Filter by status
start_date: "2025-01-01T00:00:00Z", // Optional: Date range
end_date: "2025-01-31T23:59:59Z",
});
console.log(result.logs);
console.log(result.pagination);
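To export a full date range, keep requesting pages until one comes back shorter than the requested limit; this sketch avoids assuming anything about the pagination object's fields:
let page = 1;
const limit = 50;
while (true) {
  const { logs } = await pm.logs.list({ page, limit, status: "SUCCESS" });
  console.log(`Page ${page}: ${logs.length} logs`);
  if (logs.length < limit) break; // A short page means we've reached the end
  page += 1;
}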
LLM Providers
Get information about available LLM providers and models.
List Providers
const providers = await pm.providers.list();
providers.forEach((provider) => {
console.log(provider.name); // e.g., "OpenAI"
console.log(provider.models); // Available models
});
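Provider metadata pairs naturally with the model override on templates.run; a sketch assuming provider.models is an array of model identifier strings:
// Run a template with the first model advertised by OpenAI
const providers = await pm.providers.list();
const openai = providers.find((p) => p.name === "OpenAI");
if (openai && openai.models.length > 0) {
  await pm.templates.run("customer-support", {
    variables: { customer_name: "John" },
    model: openai.models[0],
  });
}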
Tracing & Observability
PromptMetrics traces your custom functions and LLM calls so you can monitor and debug AI workflows end to end.
@traceable Decorator
Automatically track function execution with the @traceable decorator.
Basic Usage
import { PromptMetrics } from "@promptmetrics/sdk";
const pm = new PromptMetrics({ apiKey: "pm_xxxxx" });
class DataProcessor {
@pm.traceable({ name: "process_data" })
async processData(input: string) {
// Your logic here
return processedData;
}
}
const processor = new DataProcessor();
await processor.processData("test");
// ✅ Automatically tracked and sent to PromptMetrics
With Metadata
class CustomerService {
@pm.traceable({
name: "handle_support_request",
metadata: {
service: "customer_support",
priority: "high",
},
})
async handleRequest(customerId: string, message: string) {
// Function logic
return response;
}
}
With Tags
class PaymentProcessor {
@pm.traceable({
name: "process_payment",
tags: ["payment", "critical", "production"],
})
async processPayment(amount: number) {
// Payment logic
return result;
}
}
With Grouping
class ConversationHandler {
@pm.traceable({
name: "handle_conversation_turn",
})
async handleTurn(message: string, conversationId: string) {
// All nested calls automatically grouped
const enriched = await this.enrichMessage(message);
const response = await this.generateResponse(enriched);
return response;
}
@pm.traceable({ name: "enrich_message" })
async enrichMessage(message: string) {
// Automatically linked to parent trace
return enrichedMessage;
}
@pm.traceable({ name: "generate_response" })
async generateResponse(message: string) {
// Automatically linked to parent trace
return response;
}
}
Decorator Options
interface TraceableOptions {
name?: string; // Function name (default: method name)
type?: "CUSTOM" | "LLM"; // Function type (default: "CUSTOM")
metadata?: Record<string, unknown>; // Static metadata
tags?: string[]; // Tags for categorization
disabled?: boolean; // Disable tracing (for performance)
}
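A sketch exercising the less common options; how the platform renders type: "LLM" is an assumption here:
class EmbeddingService {
  @pm.traceable({
    name: "embed_query",
    type: "LLM", // Categorized as an LLM call rather than a custom function
  })
  async embedQuery(query: string) {
    // Embedding logic
  }
  @pm.traceable({ name: "tokenize", disabled: true }) // Skip tracing on this hot path
  async tokenize(text: string) {
    // Tokenization logic
  }
}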
Manual Tracking
Track metadata, scores, and groups dynamically within traced functions.
Track Metadata
@pm.traceable({ name: "process_data" })
async processData(data: string) {
// Process data
const result = processRawData(data);
// Add metadata dynamically
await pm.track.metadata({
records_processed: result.length,
data_source: "crm_system",
processing_version: "2.0",
});
return result;
}
Track Scores
@pm.traceable({ name: "validate_response" })
async validateResponse(response: string) {
// Run quality checks
const coherence = checkCoherence(response);
const relevance = checkRelevance(response);
const safety = checkSafety(response);
// Track scores
await pm.track.score({ criteria: "coherence", value: coherence });
await pm.track.score({ criteria: "relevance", value: relevance });
await pm.track.score({ criteria: "safety", value: safety });
return { passed: coherence > 0.8 && relevance > 0.8 && safety > 0.9 };
}
Track Groups
@pm.traceable({ name: "handle_conversation" })
async handleConversation(conversationId: string) {
// Set group dynamically (overrides decorator)
pm.track.group({
group_id: conversationId,
group_type: "conversation",
});
// All nested calls inherit this group
const result = await this.processMessage();
return result;
}
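Grouped traces can be retrieved together later via pm.traces.getGroup (see the API Reference below):
// Fetch every trace recorded under this conversation
const conversationTraces = await pm.traces.getGroup("conv_123");
console.log(`Spans in conversation: ${conversationTraces.length}`);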
Batch Operations
Efficiently send multiple traces at once (useful for offline processing or historical data import).
Batch Create Traces
const traces = [
{
trace_id: "trace_123",
span_id: "span_abc",
function_name: "step_1",
start_time: new Date("2025-12-08T10:00:00Z"),
end_time: new Date("2025-12-08T10:00:01Z"),
duration_ms: 1000,
status: "SUCCESS",
metadata: { step: 1 },
},
{
trace_id: "trace_123",
span_id: "span_def",
function_name: "step_2",
start_time: new Date("2025-12-08T10:00:01Z"),
end_time: new Date("2025-12-08T10:00:03Z"),
duration_ms: 2000,
status: "SUCCESS",
metadata: { step: 2 },
},
// ... up to 100 traces
];
const result = await pm.traces.createBatch(traces);
console.log(`Created: ${result.summary.successful}`);
console.log(`Failed: ${result.summary.failed}`);
// Handle errors
result.errors.forEach((error) => {
console.error(`Trace ${error.index} failed: ${error.error}`);
});
Use Cases for Batch Operations
1. Historical Data Import
// Import traces from legacy system
async function importLegacyTraces() {
const legacyTraces = await fetchFromLegacyDB();
// Convert to PromptMetrics format
const traces = legacyTraces.map((legacy) => ({
trace_id: legacy.id,
span_id: legacy.span,
function_name: legacy.operation,
start_time: new Date(legacy.started_at),
end_time: new Date(legacy.ended_at),
duration_ms: legacy.duration,
status: legacy.success ? "SUCCESS" : "ERROR",
metadata: legacy.context,
}));
// Import in batches of 100
for (let i = 0; i < traces.length; i += 100) {
const batch = traces.slice(i, i + 100);
await pm.traces.createBatch(batch);
}
}
2. Offline Processing
// Buffer traces and send periodically
class TraceBuffer {
private buffer: CreateTraceOptions[] = [];
add(trace: CreateTraceOptions) {
this.buffer.push(trace);
if (this.buffer.length >= 50) {
this.flush();
}
}
async flush() {
if (this.buffer.length === 0) return;
const toSend = this.buffer.splice(0, 100);
await pm.traces.createBatch(toSend);
}
}
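The buffer above only flushes once 50 traces accumulate; pairing it with a timer covers the "send periodically" part (the interval and error handling are illustrative):
const buffer = new TraceBuffer();
setInterval(() => {
  buffer.flush().catch((err) => console.error("Trace flush failed:", err));
}, 10_000); // Flush at most every 10 seconds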
LLM Request Correlation
When you call pm.templates.run() or pm.versions.run() inside a @traceable function, the LLM request is automatically linked to your trace.
Automatic Correlation
class AIService {
@pm.traceable({ name: "generate_support_response" })
async generateResponse(customerMessage: string) {
// Option 1: Run template directly (auto-linked to trace!)
const result = await pm.templates.run("support-template", {
variables: {
customer_message: customerMessage,
context: "support",
},
});
// Option 2: Run specific version (also auto-linked!)
const result2 = await pm.versions.run("version_123", {
variables: { customer_message: customerMessage },
});
// The prompt_log will have:
// - trace_id: current trace ID
// - span_id: current span ID
// - group_id: current group ID (if set)
return result;
}
}
Complete Workflow Example
class CustomerSupportWorkflow {
@pm.traceable({
name: "handle_support_request",
})
async handleRequest(message: string, conversationId: string) {
// Set group for entire workflow
pm.track.group({
group_id: conversationId,
group_type: "conversation",
});
// Step 1: Custom function (traced)
const enriched = await this.enrichCustomerData(message);
// Step 2: LLM call using template.run() (auto-linked!)
const response = await pm.templates.run("support-template", {
variables: { message: enriched.text },
});
// Step 3: Custom function (traced)
const validation = await this.validateResponse(response);
return validation;
}
@pm.traceable({ name: "enrich_customer_data" })
async enrichCustomerData(message: string) {
// Custom logic
return enrichedData;
}
@pm.traceable({ name: "validate_response" })
async validateResponse(response: any) {
// Quality checks
await pm.track.score({ criteria: "coherence", value: 0.92 });
return validation;
}
}
Result: Complete end-to-end visibility of your AI workflow:
- Custom functions tracked as traces
- LLM calls tracked as prompt logs
- All linked by trace_id, span_id, and group_id
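To inspect a finished workflow, read the trace_id from its prompt log (the correlation fields noted above) and fetch the span tree:
// Assumes the first page of logs contains the run you're interested in
const { logs } = await pm.logs.list({ page: 1, limit: 1 });
const tree = await pm.traces.getTrace(logs[0].trace_id);
console.log(JSON.stringify(tree, null, 2));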
API Reference
Client Initialization
new PromptMetrics(config: PromptMetricsConfig)
Templates
pm.templates.get(identifier: string, options?: GetTemplateOptions): Promise<Template>
pm.templates.list(options?: ListPromptsOptions): Promise<ListPromptsResponse>
Versions
pm.versions.get(versionId: string): Promise<TemplateVersion>
pm.versions.run(versionId: string, options?: RunVersionOptions): Promise<PromptLog>
pm.versions.update(versionId: string, options: UpdateVersionOptions): Promise<TemplateVersion>
Logs
pm.logs.list(options?: ListLogsOptions): Promise<{ logs: PromptLog[]; pagination: PaginationMeta }>
Providers
pm.providers.list(): Promise<ProviderWithModels[]>
Traces
pm.traces.create(options: CreateTraceOptions): Promise<Trace>
pm.traces.createBatch(traces: CreateTraceOptions[]): Promise<BatchCreateResult>
pm.traces.getBySpanId(spanId: string): Promise<Trace>
pm.traces.getTrace(traceId: string): Promise<TraceTreeNode[]>
pm.traces.getGroup(groupId: string): Promise<Trace[]>
pm.traces.list(options?: ListTracesOptions): Promise<TraceListResponse>
pm.traces.addScore(spanId: string, options: AddTraceScoreOptions): Promise<Trace>
pm.traces.updateMetadata(spanId: string, options: UpdateTraceMetadataOptions): Promise<Trace>
pm.traces.getAnalytics(options: { start_date: string; end_date: string }): Promise<TraceAnalytics>
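Scores and metadata can also be attached to an existing span after the fact; a sketch assuming the options mirror the pm.track shapes shown earlier:
// Annotate an already-recorded span by its span ID
await pm.traces.addScore("span_abc", { criteria: "accuracy", value: 0.95 });
await pm.traces.updateMetadata("span_abc", { metadata: { reviewed: true } });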
Tracking
pm.track.metadata(metadata: Record<string, unknown>): Promise<void>
pm.track.score(options: { criteria: string; value: number }): Promise<void>
pm.track.group(options: { group_id: string; group_type: string }): Promise<void>
Decorator
@pm.traceable(options?: TraceableOptions)
Examples
Basic Template Execution
import { PromptMetrics } from "@promptmetrics/sdk";
const pm = new PromptMetrics({ apiKey: process.env.PROMPTMETRICS_API_KEY });
async function main() {
// Get template
const template = await pm.templates.get("greeting");
// Run version
const result = await pm.versions.run(template.versions[0]._id, {
variables: { name: "Alice" },
});
console.log(result.response_object.choices[0].message.content);
}
main();
Traced AI Workflow
import { PromptMetrics } from "@promptmetrics/sdk";
const pm = new PromptMetrics({ apiKey: process.env.PROMPTMETRICS_API_KEY });
class AIWorkflow {
@pm.traceable({
name: "process_document",
tags: ["document", "processing"],
})
async processDocument(document: string) {
// Step 1: Extract entities
const entities = await this.extractEntities(document);
// Step 2: Summarize
const summary = await this.summarize(document);
// Step 3: Generate insights
const insights = await this.generateInsights(entities, summary);
return { entities, summary, insights };
}
@pm.traceable({ name: "extract_entities" })
async extractEntities(text: string) {
const result = await pm.versions.run("entity-extraction", {
variables: { text },
});
await pm.track.metadata({ entity_count: result.entities.length });
return result.entities;
}
@pm.traceable({ name: "summarize" })
async summarize(text: string) {
const result = await pm.versions.run("summarization", {
variables: { text },
});
await pm.track.score({ criteria: "conciseness", value: 0.85 });
return result.summary;
}
@pm.traceable({ name: "generate_insights" })
async generateInsights(entities: any[], summary: string) {
const result = await pm.versions.run("insight-generation", {
variables: { entities: JSON.stringify(entities), summary },
});
return result.insights;
}
}
const workflow = new AIWorkflow();
await workflow.processDocument("Long document text...");
Conversation Tracking
import { PromptMetrics } from "@promptmetrics/sdk";
const pm = new PromptMetrics({ apiKey: process.env.PROMPTMETRICS_API_KEY });
class ChatBot {
@pm.traceable({
name: "handle_message",
})
async handleMessage(message: string, conversationId: string) {
// Set conversation group
pm.track.group({
group_id: conversationId,
group_type: "conversation",
});
// Generate response
const result = await pm.versions.run("chatbot-template", {
variables: { message },
});
// Track quality
await pm.track.score({ criteria: "helpfulness", value: 0.9 });
return result.response_object.choices[0].message.content;
}
}
const bot = new ChatBot();
await bot.handleMessage("Hello!", "conv_123");
await bot.handleMessage("How are you?", "conv_123");
// Both messages grouped under conversation "conv_123"
Support
- Documentation: docs.promptmetrics.com
- Email: [email protected]
License
MIT
