@jsoma/piece-opik
v0.1.0
Opik piece for ActivePieces - LLM observability, evaluation, and prompt management
Opik Piece for ActivePieces
Integrate Opik LLM observability and evaluation platform into your ActivePieces workflows to track, monitor, and improve AI-powered automations.
Overview
This piece enables you to:
- Track LLM workflows with detailed traces and spans
- Manage prompts with versioning from Opik's prompt library
- Apply guardrails for PII detection and topic moderation
- Log feedback scores to measure quality and performance
- Debug issues with comprehensive observability
Perfect for newsrooms and content teams building AI workflows that need visibility, quality control, and continuous improvement.
Installation
Add this piece to your ActivePieces instance through the admin panel or by installing the npm package:
```shell
npm install @jsoma/piece-opik
```
Configuration
Authentication
The Opik piece supports both:
- Opik Cloud: Use your API key from Comet.com
- Self-hosted Opik: Connect to your local instance
Configuration fields:
- Opik URL: Your Opik instance URL
  - Cloud: https://www.comet.com/opik/api
  - Self-hosted: http://localhost:5173/api
- API Key: Required for Opik Cloud (optional for self-hosted)
- Workspace: Optional workspace name
Available Actions
🔍 Tracing Actions
Start Trace
Begin tracking a workflow execution.
Inputs:
- name: Descriptive trace name
- input: Input data (JSON)
- metadata: Additional context (JSON)
- tags: Categories for organization
- thread_id: Group related traces
- project_name: Opik project
Output: trace_id for use in subsequent actions
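As a sketch, the inputs above can be assembled into one object before the action runs. The `StartTraceInput` interface and `buildStartTraceInput` helper are illustrative only, not part of the piece's API:

```typescript
// Illustrative shape of the Start Trace inputs listed above.
interface StartTraceInput {
  name: string;                       // descriptive trace name
  input?: Record<string, unknown>;    // input data (JSON)
  metadata?: Record<string, unknown>; // additional context (JSON)
  tags?: string[];                    // categories for organization
  thread_id?: string;                 // group related traces
  project_name?: string;              // Opik project
}

// Hypothetical helper that fills only the fields you need.
function buildStartTraceInput(
  name: string,
  overrides: Partial<Omit<StartTraceInput, "name">> = {}
): StartTraceInput {
  return { name, ...overrides };
}

const traceInput = buildStartTraceInput("tip_processing", {
  tags: ["newsroom", "tips"],
  project_name: "tip-triage",
});
```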
End Trace
Complete a trace with final results.
Inputs:
- trace_id: ID from Start Trace
- output: Final output (JSON)
- metadata: Additional metadata
- error: Error message if failed
Log Span
Track individual steps within a trace.
Inputs:
- trace_id: Parent trace ID
- name: Step name
- type: LLM/Tool/Agent/General
- input/output: Step data
- metadata: Additional context
- Token usage for LLM calls
Output: span_id for nested spans or feedback
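For LLM-type spans, token usage can be logged alongside input/output. A hedged sketch of such a span input (field names follow the list above; the `usage` shape is an assumption to verify against your Opik version):

```typescript
// Sketch of a Log Span input for an LLM call, including token usage.
// trace_id would come from a prior Start Trace step.
const spanInput = {
  trace_id: "trace-123",
  name: "evaluation",
  type: "llm",
  input: { prompt: "Evaluate this tip for relevance..." },
  output: { verdict: "relevant" },
  metadata: { model: "example-llm" },
  usage: {
    prompt_tokens: 120,
    completion_tokens: 35,
    total_tokens: 155, // should equal prompt + completion tokens
  },
};
```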
📝 Prompt Management
Get Prompt
Retrieve prompt templates from Opik.
Inputs:
- name: Prompt name
- version: Specific version (optional, latest by default)
Output:
- template: The prompt template
- metadata: Associated metadata
- version: Version number
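Once retrieved, the template still needs its placeholders filled before the LLM step. A minimal sketch, assuming mustache-style {{placeholder}} syntax (check the convention your own templates use):

```typescript
// Substitute {{name}} placeholders in a prompt template.
// Unknown placeholders are left untouched so gaps stay visible.
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in vars ? vars[key] : match
  );
}

const template = "Evaluate this tip for {{beat}} relevance: {{tip}}";
const rendered = renderPrompt(template, {
  beat: "politics",
  tip: "City hall budget leak",
});
// rendered === "Evaluate this tip for politics relevance: City hall budget leak"
```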
🛡️ Quality Control
Check Guardrails
Screen content for issues.
Types:
- PII Detection: Find and redact sensitive information
- Topic Moderation: Ensure content stays on-topic
Inputs:
- text: Content to check
- type: PII or Topic
- Configuration for allowed/disallowed topics or PII entities
Output:
- passed: Whether the check passed
- violations: List of issues found
- redacted_text: Cleaned version (for PII)
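Downstream steps typically branch on `passed` and, for PII checks, substitute the redacted text. A sketch built on the output shape listed above (`chooseSafeText` is an illustrative helper, not part of the piece):

```typescript
// Hypothetical shape of a Check Guardrails result, per the outputs above.
interface GuardrailResult {
  passed: boolean;
  violations: string[];
  redacted_text?: string; // present for PII checks
}

// Prefer the redacted version whenever the check flagged something.
function chooseSafeText(original: string, result: GuardrailResult): string {
  return result.passed ? original : result.redacted_text ?? original;
}

const piiResult: GuardrailResult = {
  passed: false,
  violations: ["EMAIL"],
  redacted_text: "Contact me at [REDACTED]",
};
const safeText = chooseSafeText("Contact me at jane@example.com", piiResult);
```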
Log Feedback
Score traces or spans for quality metrics.
Inputs:
- entity_type: Trace or Span
- entity_id: ID to score
- name: Metric name (accuracy, relevance, etc.)
- value: Numeric score
- reason: Optional explanation
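The example workflow below logs tip_quality on a 0-10 scale; raw model scores may fall outside it. A small illustrative clamp (the 0-10 range is this README's example convention, not an Opik requirement):

```typescript
// Clamp a raw score into the 0-10 range used by the example workflow.
function clampScore(value: number, min = 0, max = 10): number {
  return Math.min(max, Math.max(min, value));
}

const feedbackInput = {
  entity_type: "trace",
  entity_id: "trace-123",
  name: "tip_quality",
  value: clampScore(11.4), // clamped to 10
  reason: "Highly relevant, well-sourced tip",
};
```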
Example Workflow: Newsroom Tip Processing
1. Trigger: New tip submission via form
2. Opik: Start Trace
- name: "tip_processing"
- metadata: {source: "web_form", date: "2024-01-15"}
3. Opik: Check Guardrails (PII)
- type: "pii"
- text: [tip content]
4. Opik: Get Prompt
- name: "tip_evaluator"
5. LLM: Evaluate tip relevance
- prompt: [from step 4]
- input: [redacted tip from step 3]
6. Opik: Log Span
- name: "evaluation"
- type: "llm"
- input/output: [LLM data]
7. Branch: If relevant
a. Opik: Get Prompt ("follow_up_questions")
b. LLM: Generate questions
c. Opik: Log Span ("question_generation")
d. Opik: Get Prompt ("beat_categorizer")
e. LLM: Categorize beat
f. Opik: Check Guardrails (Topic - valid beats)
g. Google Sheets: Add row
8. Opik: Log Feedback
- name: "tip_quality"
- value: [0-10 based on relevance]
9. Opik: End Trace
- output: {processed: true, beat: "politics"}
Use Cases
Newsroom Workflows
- Tip evaluation: Assess reader submissions
- Content moderation: Screen for PII and off-topic content
- Beat categorization: Route stories to appropriate teams
- Quality tracking: Monitor prompt effectiveness
General AI Workflows
- Customer support: Track chatbot interactions
- Content generation: Monitor article/report creation
- Data extraction: Trace document processing
- API monitoring: Track LLM API usage and costs
Best Practices
- Always use Start/End Trace pairs to capture complete workflow execution
- Log Spans for key steps to identify bottlenecks and failures
- Use meaningful names for traces and spans for easier debugging
- Apply guardrails early in workflows to prevent downstream issues
- Version your prompts in Opik for A/B testing and rollback
- Log feedback scores to track quality over time
- Use thread_id to group related workflows (e.g., all tips from same source)
- Include metadata for filtering and analysis in Opik dashboard
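The thread_id grouping suggested above can be derived mechanically from a prefix and a date; this helper and its naming format are just one illustrative convention:

```typescript
// Build a thread_id like "daily_tips_2024_01_15" from a prefix and a date.
function dailyThreadId(prefix: string, date: Date): string {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  const d = String(date.getUTCDate()).padStart(2, "0");
  return `${prefix}_${y}_${m}_${d}`;
}

const threadId = dailyThreadId("daily_tips", new Date(Date.UTC(2024, 0, 15)));
// threadId === "daily_tips_2024_01_15"
```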
Debugging
Common issues and solutions:
Authentication Failed
- Verify API key is correct for Opik Cloud
- Check URL format (include the /api suffix)
- Ensure self-hosted instance is running
Trace Not Appearing
- Confirm trace was ended with End Trace action
- Check project_name matches your Opik project
- Verify network connectivity to Opik server
Guardrails Not Working
- Ensure guardrails are enabled in your Opik instance
- Check configuration for topic lists or PII entities
- Verify text input is not empty
Advanced Features
Nested Spans
Create hierarchical traces by using parent_span_id:
1. Log Span (name: "research_phase") → span_id: "abc123"
2. Log Span (name: "web_search", parent_span_id: "abc123")
3. Log Span (name: "summarization", parent_span_id: "abc123")
Custom Metadata
Add searchable metadata to traces and spans:
```json
{
  "department": "politics",
  "reporter": "jane.doe",
  "priority": "high",
  "word_count": 500
}
```
Batch Processing
Process multiple items with thread grouping:
```
thread_id: "daily_tips_2024_01_15"
// All tips from the same day share thread_id
```
Development
Building from Source
```shell
npm install
npm run build
```
Testing
```shell
npm test
```
Contributing
Contributions welcome! Please submit PRs with:
- New actions for Opik features
- Bug fixes
- Documentation improvements
- Example workflows
License
MIT
