# @haiec/openai
OpenAI SDK wrapper with automatic usage tracking and cost calculation for HAIEC AI Inventory.
## Features
- ✅ Automatic Usage Tracking - Logs every API call to HAIEC
- ✅ Cost Calculation - Calculates costs based on model pricing
- ✅ Low Overhead - <10ms additional latency
- ✅ Error Handling - Gracefully handles tracking failures
- ✅ Drop-in Replacement - Works exactly like the official OpenAI SDK
## Installation

```bash
npm install @haiec/openai openai
```

## Usage
### Basic Example
```typescript
import TrackedOpenAI from '@haiec/openai';

const client = new TrackedOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  haiecApiKey: process.env.HAIEC_API_KEY!,
});

const response = await client.chat.completions.create({
  model: 'gpt-4-turbo',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
```

### Configuration
```typescript
const client = new TrackedOpenAI({
  apiKey: 'sk-...',          // Your OpenAI API key
  haiecApiKey: 'haiec_...',  // Your HAIEC API key
  haiecEndpoint: 'https://haiec.com/api/v1/inventory/usage/log', // Optional: custom endpoint
  organizationId: 'org-...', // Optional: OpenAI organization ID
});
```

### Self-Hosted HAIEC
```typescript
const client = new TrackedOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  haiecApiKey: process.env.HAIEC_API_KEY!,
  haiecEndpoint: 'https://your-haiec-instance.com/api/v1/inventory/usage/log',
});
```

## Supported Models & Pricing
| Model | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) |
|-------|----------------------------|-----------------------------|
| gpt-4-turbo | $10.00 | $30.00 |
| gpt-4 | $30.00 | $60.00 |
| gpt-4-32k | $60.00 | $120.00 |
| gpt-3.5-turbo | $0.50 | $1.50 |
| gpt-3.5-turbo-16k | $3.00 | $4.00 |
Pricing is automatically updated based on OpenAI's latest rates.
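As a rough illustration of how the rates in the table translate into a per-request cost, the sketch below computes `tokens / 1,000,000 × rate` for each side of the call. The `estimateCost` helper and `PRICING_PER_1M` map are hypothetical names for this example; they are not exports of `@haiec/openai`.

```typescript
// Illustrative cost calculation based on the pricing table above.
// Not part of the @haiec/openai API.
const PRICING_PER_1M = {
  'gpt-4-turbo': { input: 10.0, output: 30.0 },
  'gpt-3.5-turbo': { input: 0.5, output: 1.5 },
} as const;

function estimateCost(
  model: keyof typeof PRICING_PER_1M,
  promptTokens: number,
  completionTokens: number,
): number {
  const rates = PRICING_PER_1M[model];
  return (
    (promptTokens / 1_000_000) * rates.input +
    (completionTokens / 1_000_000) * rates.output
  );
}

// 1,200 prompt tokens + 300 completion tokens on gpt-4-turbo:
// (1200 / 1e6) * $10 + (300 / 1e6) * $30 ≈ $0.021
console.log(estimateCost('gpt-4-turbo', 1200, 300));
```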
## What Gets Tracked
For each API call, the following data is logged to HAIEC (an illustrative record is sketched after this list):
- Provider: `openai`
- Model: e.g., `gpt-4-turbo`
- Endpoint: e.g., `chat.completions`
- Request Tokens: Number of input tokens
- Response Tokens: Number of output tokens
- Cost: Calculated cost in USD
- Latency: Response time in milliseconds
- Status Code: HTTP status code
- Error Message: If the request failed
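For illustration, a single logged record could look like the following. The field names and shape below are assumptions made for this sketch; they are not the documented wire format of the HAIEC usage endpoint.

```typescript
// Hypothetical shape of one usage record, covering the fields listed above.
interface UsageRecord {
  provider: 'openai';
  model: string;          // e.g. 'gpt-4-turbo'
  endpoint: string;       // e.g. 'chat.completions'
  requestTokens: number;  // input tokens
  responseTokens: number; // output tokens
  costUsd: number;        // calculated cost in USD
  latencyMs: number;      // response time in milliseconds
  statusCode: number;     // HTTP status code
  errorMessage?: string;  // present only if the request failed
}

const example: UsageRecord = {
  provider: 'openai',
  model: 'gpt-4-turbo',
  endpoint: 'chat.completions',
  requestTokens: 12,
  responseTokens: 48,
  costUsd: 0.00156, // (12 / 1e6) * $10 + (48 / 1e6) * $30
  latencyMs: 840,
  statusCode: 200,
};
```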
## Performance
- Overhead: <10ms per request
- Async Logging: Usage is logged asynchronously to avoid blocking (see the sketch after this list)
- Error Handling: Tracking failures don't affect your API calls
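A minimal sketch of the fire-and-forget pattern described above, assuming the usage payload is POSTed to the HAIEC endpoint without being awaited. This is illustrative only, not the package's internal code, and the Bearer-token header is an assumption.

```typescript
// Fire-and-forget usage logging: the POST is not awaited, so the caller
// never waits on HAIEC, and failures are only written to the console.
function logUsage(endpoint: string, haiecApiKey: string, payload: unknown): void {
  fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Assumed auth scheme for illustration.
      Authorization: `Bearer ${haiecApiKey}`,
    },
    body: JSON.stringify(payload),
  }).catch((err) => {
    // A tracking failure never propagates to the OpenAI call.
    console.error('HAIEC usage logging failed:', err);
  });
}
```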
## Error Handling
If usage tracking fails, the error is logged to console but your OpenAI API call continues normally:
```typescript
try {
  const response = await client.chat.completions.create({...});
  // Your response is returned even if tracking fails
} catch (error) {
  // Only OpenAI API errors are thrown
}
```

## API Compatibility
This wrapper maintains 100% compatibility with the official OpenAI SDK. All methods and parameters work exactly the same.
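For example, streaming uses the same call shape as the official OpenAI SDK (v4). The snippet below assumes the tracked client passes streaming through unchanged; how streamed usage is recorded is not covered above.

```typescript
// Standard OpenAI SDK v4 streaming call, issued through the tracked client.
const stream = await client.chat.completions.create({
  model: 'gpt-4-turbo',
  messages: [{ role: 'user', content: 'Write a haiku about logging.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```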
## Getting Your HAIEC API Key
1. Sign up at haiec.com
2. Navigate to Settings > API Keys
3. Click "Create API Key"
4. Copy your key and add it to your environment variables (see the example below)
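One common way to wire up step 4, assuming you keep the keys in a local `.env` file and load it with the `dotenv` package (which is not a dependency of `@haiec/openai`):

```typescript
// Assumes a .env file containing OPENAI_API_KEY and HAIEC_API_KEY,
// loaded via the dotenv package.
import 'dotenv/config';
import TrackedOpenAI from '@haiec/openai';

const client = new TrackedOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  haiecApiKey: process.env.HAIEC_API_KEY!,
});
```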
## Support
- Documentation: docs.haiec.com
- Issues: GitHub Issues
- Email: [email protected]
## License
MIT
