Revenium OpenAI Middleware for Node.js
Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI
A professional-grade Node.js middleware that integrates seamlessly with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. It offers native TypeScript support with zero type casting required and covers the Chat Completions, Embeddings, and Responses APIs.
Go-aligned API for consistent cross-language development!
Features
- Go-Aligned API - Same `Initialize()`/`GetClient()` pattern as the Go implementation
- Seamless Integration - Native TypeScript support, no type casting required
- Optional Metadata - Track users, organizations, and business context (all fields optional)
- Multiple API Support - Chat Completions, Embeddings, and Responses API
- Azure OpenAI Support - Full Azure OpenAI integration with automatic detection
- Type Safety - Complete TypeScript support with IntelliSense
- Streaming Support - Handles regular and streaming requests seamlessly
- Fire-and-Forget - Never blocks your application flow
- Automatic .env Loading - Loads environment variables automatically
Getting Started
1. Create Project Directory
```bash
# Create project directory and navigate to it
mkdir my-openai-project
cd my-openai-project

# Initialize npm project
npm init -y

# Install packages
npm install @revenium/openai openai dotenv tsx
npm install --save-dev typescript @types/node
```

2. Configure Environment Variables
Create a .env file in your project root. See .env.example for all available configuration options.
Minimum required configuration:
```
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.ai
OPENAI_API_KEY=sk_your_openai_api_key_here
```

NOTE: Replace the placeholder values with your actual API keys.
3. Run Your First Example
For complete examples and usage patterns, see examples/README.md.
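For orientation, here is a minimal sketch of a first script, using the `Initialize()`/`GetClient()` pattern and the chat-completion call shape shown later in this README; treat it as a sketch, with the canonical versions in examples/README.md:

```typescript
// index.ts - minimal sketch; run with: npx tsx index.ts
import { Initialize, GetClient } from "@revenium/openai";

async function main() {
  // Reads REVENIUM_METERING_API_KEY, OPENAI_API_KEY, etc. from .env
  Initialize();

  // Wrapped OpenAI client; every request is tracked automatically
  const client = GetClient();

  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello!" }],
  });

  console.log(response.choices[0].message.content);
}

main().catch(console.error);
```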
Requirements
- Node.js 16+
- OpenAI package v5.0.0 or later
- TypeScript 5.0+ (for TypeScript projects)
What Gets Tracked
The middleware automatically captures comprehensive usage data:
Usage Metrics
- Token Counts - Input tokens, output tokens, total tokens
- Model Information - Model name, provider (OpenAI/Azure), API version
- Request Timing - Request duration, response time
- Cost Calculation - Estimated costs based on current pricing
Business Context (Optional)
- User Tracking - Subscriber ID, email, credentials
- Organization Data - Organization ID, subscription ID, product ID
- Task Classification - Task type, agent identifier, trace ID
- Quality Metrics - Response quality scores, task identifiers
Technical Details
- API Endpoints - Chat completions, embeddings, responses API
- Request Types - Streaming vs non-streaming
- Error Tracking - Failed requests, error types, retry attempts
- Environment Info - Development vs production usage
API Overview
The middleware provides a Go-aligned API with the following main functions:
- `Initialize(config?)` - Initialize the middleware (from environment or explicit config)
- `GetClient()` - Get the global Revenium client instance
- `Configure(config)` - Alias for `Initialize()` for programmatic configuration
- `IsInitialized()` - Check if the middleware is initialized
- `Reset()` - Reset the global client (useful for testing)
For complete API documentation and usage examples, see examples/README.md.
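A sketch of how these fit together, for example in a test harness, using only the functions and config fields shown in this README:

```typescript
import { Configure, IsInitialized, Reset, GetClient } from "@revenium/openai";

// Programmatic configuration instead of .env-based Initialize()
Configure({ reveniumApiKey: "hak_your-api-key" });

if (IsInitialized()) {
  const client = GetClient();
  // ...make tracked OpenAI calls with `client`...
}

// Drop the global client between test cases
Reset();
```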
Tool Metering
Track execution of custom tools and external API calls with automatic timing, error handling, and metadata collection.
Quick Example
```typescript
import { meterTool, setToolContext } from '@revenium/openai';

setToolContext({
  agent: 'my-agent',
  traceId: 'session-123'
});

const result = await meterTool('weather-api', async () => {
  return await fetch('https://api.example.com/weather');
}, {
  operation: 'get_forecast',
  outputFields: ['temperature', 'humidity']
});
```

Functions
meterTool(toolId, fn, metadata?)
Wraps a function with automatic metering. Captures duration, success/failure, and errors. Returns function result unchanged.
reportToolCall(toolId, report)
Manually report a tool call that was already executed. Useful when wrapping is not possible.
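As a hedged illustration only - the report field names below (`durationMs`, `success`) are hypothetical placeholders, not the library's confirmed shape; check the package types for the actual report object:

```typescript
import { reportToolCall } from '@revenium/openai';

// Hypothetical report fields, for illustration only
await reportToolCall('legacy-search', {
  durationMs: 420,
  success: true,
});
```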
Context Management
- `setToolContext(ctx)` - Set context for all subsequent tool calls
- `getToolContext()` - Get the current context
- `clearToolContext()` - Clear the context
- `runWithToolContext(ctx, fn)` - Run a function with scoped context (see the sketch below)
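A sketch of scoped context, assuming `runWithToolContext` awaits an async callback as scoped-context helpers typically do; the endpoint is a placeholder:

```typescript
import { runWithToolContext, meterTool } from '@revenium/openai';

// Context applies only inside the callback and reverts afterwards -
// useful when one process serves several sessions.
await runWithToolContext(
  { agent: 'support-bot', traceId: 'session-456' },
  async () => {
    const res = await meterTool('crm-lookup', () =>
      fetch('https://api.example.com/customers/42')
    );
    console.log(res.status); // meterTool returns the fetch Response unchanged
  }
);
```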
Metadata Options
| Field | Description |
|-------|-------------|
| operation | Tool operation name (e.g., "search", "scrape") |
| outputFields | Array of field names to auto-extract from result |
| usageMetadata | Custom metrics (e.g., tokens, results count) |
| agent, traceId, etc. | Context fields (inherited from setToolContext) |
Metadata Fields
The middleware supports the following optional metadata fields for tracking:
| Field | Type | Description |
| ----------------------- | ------ | ------------------------------------------------------------ |
| traceId | string | Unique identifier for session or conversation tracking |
| taskType | string | Type of AI task being performed (e.g., "chat", "embedding") |
| agent | string | AI agent or bot identifier |
| organizationName | string | Organization or company name (used for lookup/auto-creation) |
| productName | string | Your product or feature name (used for lookup/auto-creation) |
| subscriptionId | string | Subscription plan identifier |
| responseQualityScore | number | Custom quality rating (0.0-1.0) |
| subscriber.id | string | Unique user identifier |
| subscriber.email | string | User email address |
| subscriber.credential | object | Authentication credential (name and value fields) |
All metadata fields are optional. For complete metadata documentation and usage examples, see:
- examples/README.md - All usage examples
- Revenium API Reference - Complete API documentation
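As a sketch, these fields are passed per request through the `usageMetadata` option (the same option used in the Prompt Capture section below); the values here are placeholders:

```typescript
// `client` comes from GetClient() after Initialize()
const response = await client.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Summarize my last order." }],
  },
  {
    usageMetadata: {
      traceId: "session-123",
      taskType: "chat",
      agent: "support-bot",
      organizationName: "Acme Corp",
      subscriber: { id: "user-42", email: "user@example.com" },
      responseQualityScore: 0.9,
    },
  },
);
```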
Trace Visualization Fields
The middleware automatically captures trace visualization fields for distributed tracing and analytics:
| Field | Type | Description | Environment Variable |
| --------------------- | ------ | ------------------------------------------------------------------------------- | ---------------------------------- |
| environment | string | Deployment environment (production, staging, development) | REVENIUM_ENVIRONMENT, NODE_ENV |
| operationType | string | Operation classification (CHAT, EMBED, etc.) - automatically detected | N/A (auto-detected) |
| operationSubtype | string | Additional detail (function_call, etc.) - automatically detected | N/A (auto-detected) |
| retryNumber | number | Retry attempt number (0 for first attempt, 1+ for retries) | REVENIUM_RETRY_NUMBER |
| parentTransactionId | string | Parent transaction reference for distributed tracing | REVENIUM_PARENT_TRANSACTION_ID |
| transactionName | string | Human-friendly operation label | REVENIUM_TRANSACTION_NAME |
| region | string | Cloud region (us-east-1, etc.) - auto-detected from AWS/Azure/GCP | AWS_REGION, REVENIUM_REGION |
| credentialAlias | string | Human-readable credential name | REVENIUM_CREDENTIAL_ALIAS |
| traceType | string | Categorical identifier (alphanumeric, hyphens, underscores only, max 128 chars) | REVENIUM_TRACE_TYPE |
| traceName | string | Human-readable label for trace instances (max 256 chars) | REVENIUM_TRACE_NAME |
All trace visualization fields are optional. The middleware will automatically detect and populate these fields when possible.
Example Configuration
```
REVENIUM_ENVIRONMENT=production
REVENIUM_REGION=us-east-1
REVENIUM_CREDENTIAL_ALIAS=OpenAI Production Key
REVENIUM_TRACE_TYPE=customer_support
REVENIUM_TRACE_NAME=Support Ticket #12345
REVENIUM_PARENT_TRANSACTION_ID=parent-txn-123
REVENIUM_TRANSACTION_NAME=Answer Customer Question
REVENIUM_RETRY_NUMBER=0
```

Terminal Summary Output
The middleware can optionally print a cost/metrics summary to the terminal after each API request. This is useful during development to see token usage and estimated costs without checking the dashboard.
Enabling Terminal Summary
Set the following environment variables:
```
# Use 'true' or 'human' for human-readable output, 'json' for JSON output
REVENIUM_PRINT_SUMMARY=true
REVENIUM_TEAM_ID=your-team-id-here
```

Or configure programmatically:

```typescript
Initialize({
  reveniumApiKey: "hak_your-api-key",
  printSummary: true, // or 'human' or 'json'
  teamId: "your-team-id",
});
```

Output Formats
Human-Readable Format (default)
Set `REVENIUM_PRINT_SUMMARY=true` or `REVENIUM_PRINT_SUMMARY=human`:

```
============================================================
📊 REVENIUM USAGE SUMMARY
============================================================
🤖 Model: gpt-4o-mini
🏢 Provider: OpenAI
⏱️ Duration: 1.23s
💬 Token Usage:
📥 Input Tokens: 150
📤 Output Tokens: 250
📊 Total Tokens: 400
💰 Cost: $0.000450
============================================================
```

JSON Format
Set `REVENIUM_PRINT_SUMMARY=json` for machine-readable output:

```json
{
  "model": "gpt-4o-mini",
  "provider": "OpenAI",
  "durationSeconds": 1.23,
  "inputTokenCount": 150,
  "outputTokenCount": 250,
  "totalTokenCount": 400,
  "cost": 0.00045,
  "traceId": "abc-123"
}
```

The JSON output includes all the same fields as the human-readable format and is ideal for log parsing, automation, and integration with other tools.
Note: The `teamId` is required to display cost information. If it is not provided, the summary will show token usage but the `cost` field will be null with a `costStatus` of "unavailable". When `teamId` is set but the cost hasn't been aggregated yet, the `cost` field will be null with a `costStatus` of "pending". You can find your team ID in the Revenium web application.
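For instance, a log pipeline might pick cost and token counts out of each JSON line. This is a sketch; the interface mirrors the fields shown above, with `costStatus` treated as optional per the note, and `logCost` is a hypothetical helper:

```typescript
// Sketch: parse one JSON summary line emitted with REVENIUM_PRINT_SUMMARY=json
interface UsageSummary {
  model: string;
  provider: string;
  durationSeconds: number;
  inputTokenCount: number;
  outputTokenCount: number;
  totalTokenCount: number;
  cost: number | null;
  costStatus?: "unavailable" | "pending"; // present when cost is null (see note above)
  traceId?: string;
}

function logCost(line: string): void {
  const summary: UsageSummary = JSON.parse(line);
  if (summary.cost !== null) {
    console.log(`${summary.model}: $${summary.cost.toFixed(6)}`);
  } else {
    console.log(`${summary.model}: cost ${summary.costStatus ?? "unknown"}`);
  }
}
```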
Prompt Capture
The middleware can capture prompts and responses for analysis. This feature is disabled by default for privacy and performance.
Configuration
Enable prompt capture globally via environment variable:
```
REVENIUM_CAPTURE_PROMPTS=true
REVENIUM_MAX_PROMPT_SIZE=50000  # Optional: default is 50000 characters
```

Or enable per-request via metadata:

```typescript
const response = await client.chat.completions.create(
  {
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  },
  {
    usageMetadata: { capturePrompts: true },
  },
);
```

Security
Captured prompts are automatically sanitized to remove sensitive credentials:
- API keys (OpenAI, Anthropic, Perplexity)
- AWS access keys
- GitHub tokens
- JWT tokens
- Bearer tokens
- Passwords and secrets
Prompts exceeding `maxPromptSize` are truncated and marked with `promptsTruncated: true`.
Configuration Options
Environment Variables
For a complete list of all available environment variables with examples, see .env.example.
Examples
The package includes comprehensive examples in the examples/ directory.
Getting Started
```bash
npm run example:getting-started
```

OpenAI Examples
| Example | Command | Description |
| ------------------------------- | ----------------------------------- | --------------------------------- |
| openai/basic.ts | npm run example:openai-basic | Chat completions and embeddings |
| openai/metadata.ts | npm run example:openai-metadata | All metadata fields demonstration |
| openai/streaming.ts | npm run example:openai-stream | Streaming chat completions |
| openai/responses-basic.ts | npm run example:openai-res-basic | Responses API usage |
| openai/responses-embed.ts | npm run example:openai-res-embed | Embeddings with Responses API |
| openai/responses-streaming.ts | npm run example:openai-res-stream | Streaming Responses API |
Azure OpenAI Examples
| Example | Command | Description |
| --------------------------- | ---------------------------------- | ----------------------------- |
| azure/basic.ts | npm run example:azure-basic | Azure chat completions |
| azure/stream.ts | npm run example:azure-stream | Azure streaming |
| azure/responses-basic.ts | npm run example:azure-res-basic | Azure Responses API |
| azure/responses-stream.ts | npm run example:azure-res-stream | Azure Responses API streaming |
For complete example documentation, setup instructions, and usage patterns, see examples/README.md.
How It Works
- Initialize: Call `Initialize()` to set up the middleware with your configuration
- Get Client: Call `GetClient()` to get a wrapped OpenAI client instance
- Make Requests: Use the client normally - all requests are automatically tracked
- Async Tracking: Usage data is sent to Revenium in the background (fire-and-forget)
- Transparent Response: Original OpenAI responses are returned unchanged
The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.
Supported APIs:
- Chat Completions API (`client.chat().completions().create()`)
- Embeddings API (`client.embeddings().create()`)
- Responses API (`client.responses().create()` and `client.responses().createStreaming()`)
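For instance, a streaming chat completion might look like this sketch, using the Go-aligned accessor style listed above; the `stream` flag and chunk shape follow the standard OpenAI SDK, since responses are passed through unchanged:

```typescript
// `client` comes from GetClient() after Initialize()
const stream = await client.chat().completions().create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Tell me a short story." }],
  stream: true,
});

// Chunks arrive exactly as from the OpenAI SDK; tracking happens in the background
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```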
Troubleshooting
Common Issues
No tracking data appears:
- Verify environment variables are set correctly in `.env`
- Enable debug logging by setting `REVENIUM_DEBUG=true` in `.env`
- Check the console for `[Revenium]` log messages
- Verify your `REVENIUM_METERING_API_KEY` is valid
Client not initialized error:
- Make sure you call `Initialize()` before `GetClient()`
- Check that your `.env` file is in the project root
- Verify `REVENIUM_METERING_API_KEY` is set
Azure OpenAI not working:
- Verify all Azure environment variables are set (see `.env.example`)
- Check that `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` are correct
- Ensure you're using a valid deployment name in the `model` parameter
Debug Mode
Enable detailed logging by adding to your .env:
```
REVENIUM_DEBUG=true
```

Getting Help
If issues persist:
- Enable debug logging (`REVENIUM_DEBUG=true`)
- Check the `examples/` directory for working examples
- Review `examples/README.md` for detailed setup instructions
- Contact [email protected] with debug logs
Supported Models
This middleware works with any OpenAI model. For the complete model list, see the OpenAI Models Documentation.
API Support Matrix
The following table shows what has been tested and verified with working examples:
| Feature | Chat Completions | Embeddings | Responses API |
| ----------------- | ---------------- | ---------- | ------------- |
| OpenAI Basic | Yes | Yes | Yes |
| OpenAI Streaming | Yes | No | Yes |
| Azure Basic | Yes | No | Yes |
| Azure Streaming | Yes | No | Yes |
| Metadata Tracking | Yes | Yes | Yes |
| Token Counting | Yes | Yes | Yes |
Note: "Yes" = Tested with working examples in examples/ directory
Documentation
For detailed documentation, visit docs.revenium.io
Contributing
See CONTRIBUTING.md
Testing
The middleware includes comprehensive automated tests that fail the build when something is wrong.
Run All Tests
Run unit, integration, and performance tests:
```bash
npm test
```

Run Tests with Coverage

```bash
npm run test:coverage
```

Run Tests in Watch Mode

```bash
npm run test:watch
```

Test Requirements
All tests are designed to:
- ✅ Fail the build when something is wrong (`process.exit(1)`)
- ✅ Pass when everything works correctly (`process.exit(0)`)
- ✅ Provide clear error messages
- ✅ Test trace field validation, environment detection, and region detection
Code of Conduct
See CODE_OF_CONDUCT.md
Security
See SECURITY.md
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
For issues, feature requests, or contributions:
- Website: www.revenium.ai
- GitHub Repository: revenium/revenium-middleware-openai-node
- Issues: Report bugs or request features
- Documentation: docs.revenium.io
- Email: [email protected]
Built by Revenium
