miharu-ai
v0.3.0
Open-source LLMOps (LLM operations monitoring) framework for TypeScript
🌸 miharuAI
Zero-configuration LLMOps monitoring for TypeScript applications
miharuAI is an open-source framework that automatically monitors your LLM API calls with zero configuration. Simply install and import - no code changes required.
✨ Features
- 🚀 Zero Configuration - Automatic OpenAI SDK monkey patching (sketched below)
- 📊 Comprehensive Monitoring - Token usage, costs, latency, and errors
- ⚡ High Performance - Less than 5% latency overhead
- 🔍 Advanced Analytics - Statistical analysis, anomaly detection, regression alerts
- 💾 Flexible Storage - SQLite, Supabase, or custom adapters
- 📈 Session Management - Concurrent session tracking with analytics
- 🛡️ Production Ready - Memory-efficient, fault-tolerant design
- 📋 Export Capabilities - JSON, CSV, analytics reports
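The zero-configuration behaviour comes from patching the OpenAI SDK the moment miharu-ai is imported. The snippet below is only a minimal sketch of that general technique, not miharuAI's actual implementation: it wraps chat.completions.create so every call is timed and its token usage captured.
import OpenAI from 'openai'
// Purely illustrative monkey patch (not miharuAI's real code):
// wrap the SDK's create() so each call is timed and its usage recorded.
const completions = OpenAI.Chat.Completions.prototype as any
const originalCreate = completions.create
completions.create = async function (...args: any[]) {
  const start = Date.now()
  const response = await originalCreate.apply(this, args)
  console.log(`latency: ${Date.now() - start}ms, tokens: ${response?.usage?.total_tokens}`)
  return response
}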
🚀 Quick Start
Installation
npm install miharu-ai
# or
yarn add miharu-ai
# or
pnpm add miharu-ai
Basic Usage (Zero Configuration)
// Simply import before using OpenAI - that's it!
import 'miharu-ai'
import OpenAI from 'openai'
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
})
// Your existing OpenAI code works unchanged
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello, world!' }]
})
console.log(response.choices[0].message.content)
// miharuAI automatically tracks this call in the background ✨
That's it! miharuAI will automatically:
- 📊 Track token usage and costs
- ⏱️ Monitor latency and performance
- 🔍 Detect errors and anomalies
- 💾 Store data locally (SQLite by default)
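Conceptually, every intercepted call is persisted as a structured record. The interface below is only an illustration of the kind of fields involved; the names are hypothetical and not miharuAI's actual storage schema.
// Hypothetical shape of a tracked call record (illustrative only,
// not miharuAI's actual schema)
interface TrackedCall {
  id: string
  sessionId?: string
  model: string            // e.g. 'gpt-4'
  promptTokens: number
  completionTokens: number
  totalTokens: number
  costUsd: number          // estimated from provider pricing
  latencyMs: number
  error?: string           // set when the call failed
  timestamp: number        // Unix epoch milliseconds
}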
🛠️ Configuration
Basic Configuration
import { MiharuAI } from 'miharu-ai'
// Initialize with custom configuration
const miharu = new MiharuAI({
  // Storage configuration
  storage: {
    type: 'sqlite',
    options: {
      filename: './miharu-data.db'
    }
  },
  // Performance monitoring
  analytics: {
    enabled: true,
    reportingInterval: 60000 // 1 minute
  },
  // Session management
  sessions: {
    enabled: true,
    timeout: 30 * 60 * 1000 // 30 minutes
  }
})
// Your OpenAI usage remains the same
import OpenAI from 'openai'
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
Supabase Storage
import { MiharuAI } from 'miharu-ai'
const miharu = new MiharuAI({
  storage: {
    type: 'supabase',
    options: {
      url: process.env.SUPABASE_URL,
      key: process.env.SUPABASE_ANON_KEY,
      tableName: 'llm_calls'
    }
  }
})
📊 Accessing Analytics
Real-time Metrics
import { MiharuAI } from 'miharu-ai'
const miharu = new MiharuAI()
// Get current performance metrics
const metrics = miharu.getMetrics()
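// p95Latency: 95% of calls completed at or below this latency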
console.log(`
📊 Performance Metrics:
• Total Calls: ${metrics.totalCalls}
• Average Latency: ${metrics.averageLatency}ms
• P95 Latency: ${metrics.p95Latency}ms
• Error Rate: ${metrics.errorRate}%
• Total Cost: $${metrics.totalCost.toFixed(4)}
• Total Tokens: ${metrics.totalTokens}
`)
Session Analytics
// Get session insights
const sessionId = 'user-session-123'
const analytics = miharu.getSessionAnalytics(sessionId)
console.log(`
🎯 Session Quality Score: ${analytics.qualityScore}/100
⚡ Session Efficiency: ${analytics.efficiency}%
🛡️ Reliability: ${analytics.reliability}%
💰 Total Cost: $${analytics.totalCost.toFixed(4)}
📞 Total Calls: ${analytics.totalCalls}
`)
Export Data
// Export session data
const exportResult = await miharu.exportSessions({
  format: 'json',
  timeRange: {
    start: Date.now() - 24 * 60 * 60 * 1000, // Last 24 hours
    end: Date.now()
  }
})
console.log(`Exported ${exportResult.totalRecords} sessions to ${exportResult.files[0]}`)
// Export analytics report
const reportResult = await miharu.exportAnalyticsReport()
console.log(`Analytics report saved to ${reportResult.files[0]}`)
🔍 Advanced Features
Performance Monitoring
import { MiharuAI } from 'miharu-ai'
const miharu = new MiharuAI({
  analytics: {
    enabled: true,
    regressionDetection: true,
    gcMonitoring: true
  }
})
// Get performance alerts
const alerts = miharu.getPerformanceAlerts()
alerts.forEach(alert => {
  if (alert.severity === 'critical') {
    console.log(`🚨 CRITICAL: ${alert.description}`)
    console.log(`💡 Recommendations: ${alert.recommendations.join(', ')}`)
  }
})
Session Management
// Start a tracked session
const sessionId = await miharu.startSession('user-123', {
  tags: ['production', 'chat-bot'],
  metadata: { userId: 'user-123', feature: 'chat' }
})
// Your OpenAI calls are automatically tracked under this session
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
})
// End the session
await miharu.endSession(sessionId)
Custom Event Tracking
// Track custom events
miharu.trackEvent('user_login', {
  userId: 'user-123',
  timestamp: Date.now()
})
// Track business metrics
miharu.trackMetric('user_satisfaction', 4.5, {
  sessionId: 'session-123',
  feature: 'chat-completion'
})
📈 Monitoring Dashboard Data
Real-time Performance
// Get live performance statistics
const stats = miharu.getRealtimeStats()
console.log(`
🚀 System Performance:
• Active Sessions: ${stats.activeSessions}
• Requests/min: ${stats.requestsPerMinute}
• Avg Response Time: ${stats.averageLatency}ms
• Memory Usage: ${stats.memoryUsageMB}MB
• Error Rate: ${stats.errorRate}%
`)
Historical Analysis
// Analyze trends over time
const trends = miharu.analyzeTrends({
  timeRange: {
    start: Date.now() - 7 * 24 * 60 * 60 * 1000, // Last 7 days
    end: Date.now()
  }
})
console.log(`
📊 7-Day Trends:
• Latency: ${trends.latency} (${trends.latencyChange > 0 ? '📈' : '📉'})
• Cost: ${trends.cost} (${trends.costChange > 0 ? '📈' : '📉'})
• Quality: ${trends.quality} (${trends.qualityChange > 0 ? '📈' : '📉'})
`)
🛡️ Production Best Practices
Error Handling
import { MiharuAI } from 'miharu-ai'
const miharu = new MiharuAI({
  errorHandling: {
    retryAttempts: 3,
    retryDelay: 1000,
    failureThreshold: 0.1 // 10% error rate threshold
  }
})
// miharuAI automatically handles transient failures
// and provides detailed error analytics
Memory Management
const miharu = new MiharuAI({
  performance: {
    maxMemoryMB: 50,
    batchSize: 100,
    flushInterval: 30000, // 30 seconds
    compressionEnabled: true
  }
})
Data Retention
const miharu = new MiharuAI({
  storage: {
    retentionDays: 30, // Keep data for 30 days
    cleanupInterval: 24 * 60 * 60 * 1000, // Daily cleanup
    maxStorageSize: 1024 * 1024 * 1024 // 1GB limit
  }
})
🧪 Testing Support
import { MiharuAI } from 'miharu-ai'
// Disable monitoring in tests
const miharu = new MiharuAI({
  enabled: process.env.NODE_ENV !== 'test',
  storage: {
    type: process.env.NODE_ENV === 'test' ? 'memory' : 'sqlite'
  }
})
📋 Migration from Other Tools
From Manual Tracking
// Before miharuAI (manual tracking)
const startTime = Date.now()
const response = await openai.chat.completions.create({...})
const endTime = Date.now()
await logCall({
  latency: endTime - startTime,
  tokens: response.usage.total_tokens,
  cost: calculateCost(response.usage)
})
// After miharuAI (automatic)
import 'miharu-ai'
const response = await openai.chat.completions.create({...})
// Everything tracked automatically! ✨
🤝 Contributing
We welcome contributions! Please see our Contributing Guide for details.
📄 License
MIT © miharuAI
🔗 Links
Made with ❤️ for the LLM community
miharuAI helps you build better LLM applications by providing the insights you need to optimize performance, control costs, and ensure reliability.
