# mie-fintel
A "Universal Connector" for MIE and Bluehive applications. This package wraps OpenTelemetry to provide a zero-configuration, lazy-loaded API for Metrics, Traces, Logs, and Context propagation.
Goal: Simplify the most common telemetry tasks (counting things, timing functions, tagging logs) while still exposing full access to the raw OpenTelemetry API when needed, so that metrics, traces, and logs can be shipped to an OTel Collector.
The diagram below shows one example of how this package can be deployed:
```mermaid
graph LR
    App["Node.js App"]
    SDK["mie-fintel<br/>(OTel SDK)"]
    Collector["OTel Collector"]
    PMM["PMM Server<br/>(or any other similar service)"]
    App --> SDK
    SDK --"OTLP (gRPC/HTTP)<br/>mTLS supported"--> Collector
    Collector --"Remote Write<br/>Port 8080"--> PMM
    style App fill:#255b9c,stroke:#333
    style SDK fill:#5c1b5c,stroke:#333
    style Collector fill:#6b0d0d,stroke:#333
    style PMM fill:#357d1a,stroke:#333
```

## 📦 Installation
Since this package is hosted privately on GitHub, install it using the git URL:
```bash
npm install github:mieweb/mie-fintel
```

## 🚀 Quick Start
### 1. Instrumentation (App Entry Point)
Initialize the SDK before any other imports in your application (e.g., instrumentation.js or server.js).
```js
// instrumentation.js
const { init } = require('mie-fintel');

// --- Option A: gRPC (Default) ---
// Recommended for performance and efficiency.
// Uses port 4317 by default.
init({
  serviceName: 'my-service-name',
  collectorUrl: 'http://localhost:4317',
  transport: 'grpc',
});

// --- Option B: HTTP ---
// Use this if firewalls or proxies block gRPC/HTTP2 traffic.
// Uses port 4318 by default.
/*
init({
  serviceName: 'my-service-name',
  collectorUrl: 'http://localhost:4318',
  transport: 'http',
});
*/

// --- Option C: gRPC with mTLS ---
// Use this for secure, authenticated connections to the collector.
/*
init({
  serviceName: 'my-service-name',
  collectorUrl: 'collector.example.com:4317',
  transport: 'grpc',
  tls: {
    certPath: './certs/client.crt',
    keyPath: './certs/client.key',
    caPath: './certs/ca.pem'
  }
});
*/
```

### 2. Usage (Anywhere in your code)
You don't need to create meters, request tracers, or pass instances around. Just import and use.
```js
const { fintel, fintrace, finlogs, fincontext } = require('mie-fintel');
```

## 📊 Metrics Module (fintel)
Metrics provide quantitative measurements of your application's behavior over time. The metrics module automatically handles the creation and caching of OpenTelemetry instruments, so you can focus on recording data rather than managing infrastructure.
### `counter(name, value, attributes, options)`
Counters are monotonically increasing values that measure the total count or sum of events. They never decrease (except when the process restarts). Use counters to track totals like request counts, error counts, bytes processed, or total revenue.
When to use: Tracking cumulative values that only go up - total requests, completed tasks, errors encountered, or monetary amounts.
```js
// Simple increment
fintel.counter('app.requests.total');

// Increment with value and attributes
fintel.counter('app.order.cost', 50.0, { currency: 'USD', plan: 'pro' });

// With description metadata (recommended)
fintel.counter('app.connection.errors', 1, {}, { description: 'Total connection failures' });
```

### `histogram(name, value, attributes, options)`
Histograms record the distribution of values over time, tracking both individual measurements and statistical aggregates (sum, count, min, max, percentiles). They're essential for understanding performance characteristics and variability in your application.
When to use: Measuring durations (request latency, processing time), sizes (payload size, database result set), or any metric where you need to understand the distribution of values, not just totals. Always specify the unit option (e.g., 'seconds', 'bytes', 'milliseconds') for proper visualization in monitoring tools.
```js
// Record duration with unit configuration
const duration = 45.5;
fintel.histogram(
  'ivr.call.duration',
  duration,
  { call_type: 'inbound', region: 'us-east-1' },
  { description: 'Duration of the IVR session', unit: 'seconds' }
);
```

### `upDown(name, value, attributes, options)`
UpDownCounters track values that can both increase and decrease. Unlike regular counters, they represent a current state or level that fluctuates over time.
When to use: Tracking current levels like active connections, items in a queue, concurrent requests, memory usage, or any resource that can both increase and decrease. Call with positive values to increment, negative values to decrement.
```js
// Increment
fintel.upDown('app.job.queue', 1, { priority: 'high' });

// Decrement
fintel.upDown('app.job.queue', -1, { priority: 'high' });
```

### `observableGauge(name, callback, options)`
Observable gauges are periodically sampled by the OpenTelemetry SDK rather than being directly recorded by your code. You provide a callback function that returns the current value, and the SDK calls it automatically at collection intervals.
When to use: Monitoring values that are expensive to calculate or that you want sampled at regular intervals rather than on every change - like process memory, CPU usage, connection pool size, or external resource states. The callback runs asynchronously on the SDK's schedule.
```js
fintel.observableGauge(
  'process.uptime',
  (observable) => {
    // The callback receives an ObservableResult to set the value
    observable.observe(process.uptime(), { host: 'server-01' });
  },
  { description: 'Process uptime in seconds', unit: 'seconds' }
);
```

### `historicalGauge(name, value, unit, timestampISO, attributes, collectorUrl, serviceName)`
Pushes a metric data point with a specific historical timestamp directly to the OTLP collector. This bypasses the standard SDK batching and sends an immediate HTTP OTLP request (requires an HTTP-enabled collector endpoint).
When to use: Backfilling accumulated data, migration scripts, or when the event time is in the past and strict timestamp accuracy is required.
```js
await fintel.historicalGauge(
  'legacy.job.duration',
  124.5,
  'seconds',
  '2024-01-01T12:00:00.000Z',
  { job_id: '99' },
  'http://localhost:4318',
  'my-service'
);
```

## 🕵️ Trace Module (fintrace)
Tracing provides end-to-end visibility into request flows by tracking the execution path through your application and across services. Traces are composed of "spans" - individual units of work that record timing, metadata, and relationships between operations.
### `fn(name, callback)`
This is a convenience wrapper that handles the complete span lifecycle automatically. It creates a span, executes your async function within that span's context, records any errors that occur, and ensures the span is properly closed - even if exceptions are thrown.
When to use: Wrapping any significant operation you want to monitor - database queries, API calls, business logic functions, or processing steps. The span automatically captures timing, errors, and can be enriched with custom attributes via span.setAttribute(). Nested fintrace.fn() calls automatically create parent-child span relationships.
```js
const user = await fintrace.fn('db.getUser', async (span) => {
  span.setAttribute('user.id', '12345');
  // ... do your DB work ...
  return await db.query('SELECT * ...');
});
// Span automatically ends here.
```

### `getTracer()`
Returns the underlying OpenTelemetry tracer instance for advanced use cases where you need fine-grained control over span creation and lifecycle.
When to use: When fintrace.fn() doesn't provide enough flexibility - for example, when you need to pass spans across different scopes, create spans that outlive a single function call, or use advanced OpenTelemetry features. For most cases, fintrace.fn() is recommended as it handles cleanup and error recording automatically.
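For illustration, a manually managed span might look like the sketch below, assuming `getTracer()` returns a standard OpenTelemetry `Tracer` so the usual `startSpan`/`end` API applies (the span names and attributes are hypothetical):

```js
const { fintrace } = require('mie-fintel');

// Sketch only: manual span lifecycle with the raw tracer.
const tracer = fintrace.getTracer();
const span = tracer.startSpan('batch.import');
try {
  span.setAttribute('batch.size', 500);
  // ... work that may outlive a single function call ...
} catch (err) {
  span.recordException(err);
  throw err;
} finally {
  span.end(); // with the raw tracer, ending the span is your responsibility
}
```

Unlike `fintrace.fn()`, nothing here ends the span for you; forgetting `span.end()` leaks an open span.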
## 📝 Logs Module (finlogs)
Structured logging provides human-readable messages enriched with machine-parseable metadata. Logs emitted through this module are automatically correlated with active traces, enabling you to see logs in context with the request flow that generated them.
### `emit(body, severity, attributes)`
Emits a log record with a message, severity level, and optional structured attributes. Logs are automatically associated with the current trace context, creating a unified view of events and execution flow.
When to use: Recording significant events, debugging information, warnings, or errors throughout your application. Use attributes to add structured context (user IDs, request IDs, error codes) that can be queried and filtered in your logging platform.
```js
finlogs.emit('Payment processed successfully', 'INFO', { orderId: '998877' });
finlogs.emit('Connection timeout', 'ERROR', { retries: 3 });
```

Available severities: `DEBUG`, `INFO`, `WARN`, `ERROR`, `FATAL`.
## 🎒 Context Module (fincontext)
Context propagation allows you to pass metadata through your application without explicit function parameters. Using OpenTelemetry's baggage system, you can attach key-value pairs that automatically flow through asynchronous operations, function calls, and even across service boundaries.
### `withBaggage(key, value, callback)`
Executes the provided callback function with the specified key-value pair available in the execution context. The baggage is automatically available to all code that runs within (or is called from) the callback, including async operations.
When to use: Passing request-scoped data like user IDs, tenant IDs, transaction IDs, or feature flags through your call stack without threading them through every function parameter. Particularly useful for cross-cutting concerns that many layers of your application need access to.
```js
fincontext.withBaggage('transaction_id', 'tx-555', async () => {
  // Inside this function (and any functions it calls), the baggage is set.
  await processOrder();
});
```

### `getBaggage(key)`
Retrieves a value from the current execution context that was previously set with withBaggage(). Returns the value if found, or undefined if the key doesn't exist in the current context.
When to use: Accessing context data that was set higher in the call stack. This allows deeply nested functions or utility modules to access request-scoped information without requiring it to be passed through every intermediate function call.
```js
function processOrder() {
  const txId = fincontext.getBaggage('transaction_id'); // Returns 'tx-555'
  console.log(`Processing ${txId}`);
}
```

## ⚙️ Configuration
The `init` function takes a configuration object:
| Option | Default | Description |
| :--- | :--- | :--- |
| `serviceName` | `unknown-service` | Name of your service in monitoring tools. |
| `collectorUrl` | `http://localhost:4317` | OTLP endpoint of your collector. |
| `transport` | `grpc` | Transport protocol: `'grpc'` (port 4317) or `'http'` (port 4318). |
| `insecure` | `false` | Set `true` to disable TLS entirely (gRPC only). |
| `tls` | `null` | TLS configuration object. See TLS / mTLS Authentication below. |
| `exportIntervalMillis` | `60000` | How often (in ms) the SDK exports accumulated metrics to the collector. |
```js
init({
  serviceName: 'bluehive-ivr',
  collectorUrl: process.env.OTEL_URL,
});
```

### Environment Variables
The SDK supports standard OpenTelemetry environment variables as fallbacks. The resolution order is: explicit config > environment variable > default value.
| Environment Variable | Maps To | Example |
| :--- | :--- | :--- |
| `OTEL_SERVICE_NAME` | `serviceName` | `my-service` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | `collectorUrl` | `http://collector:4317` |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | `transport` | `grpc`, `http/protobuf`, `http/json` |
| `OTEL_EXPORTER_OTLP_INSECURE` | `insecure` | `true` or `false` |
| `OTEL_EXPORTER_OTLP_CERTIFICATE` | `tls.caPath` | `/certs/ca.pem` |
| `OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE` | `tls.certPath` | `/certs/client.crt` |
| `OTEL_EXPORTER_OTLP_CLIENT_KEY` | `tls.keyPath` | `/certs/client.key` |
| `OTEL_METRIC_EXPORT_INTERVAL` | `exportIntervalMillis` | `30000` |
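The resolution order described above (explicit config > environment variable > default) can be sketched as a small standalone helper. `resolve` is hypothetical, used here only to illustrate the precedence; it is not part of the SDK's API:

```javascript
// Hypothetical helper illustrating the precedence; not part of mie-fintel.
function resolve(explicit, envName, fallback) {
  if (explicit !== undefined) return explicit;                          // 1. explicit config wins
  if (process.env[envName] !== undefined) return process.env[envName];  // 2. then the env var
  return fallback;                                                      // 3. then the default
}

process.env.OTEL_SERVICE_NAME = 'env-service';
console.log(resolve('my-service', 'OTEL_SERVICE_NAME', 'unknown-service')); // 'my-service'
console.log(resolve(undefined, 'OTEL_SERVICE_NAME', 'unknown-service'));    // 'env-service'
delete process.env.OTEL_SERVICE_NAME;
console.log(resolve(undefined, 'OTEL_SERVICE_NAME', 'unknown-service'));    // 'unknown-service'
```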
This allows zero-code configuration in containerized environments:
```bash
# Docker / Kubernetes — no code changes needed
OTEL_SERVICE_NAME=bluehive-ivr
OTEL_EXPORTER_OTLP_ENDPOINT=https://collector.prod:4317
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
```

```js
// The SDK picks up env vars automatically
init();
```

## 🔐 TLS / mTLS Authentication
The SDK supports four security modes for connecting to an OTel collector, for both gRPC and HTTP transports.
### Mode 1: Insecure (no encryption)
No TLS at all. Only use for local development.
```js
init({
  serviceName: 'my-service',
  collectorUrl: 'http://localhost:4317',
  transport: 'grpc',
  insecure: true,
});
```

### Mode 2: System CAs (default)
Uses the operating system's trusted certificate authorities. This is the default when no tls or insecure option is provided.
```js
init({
  serviceName: 'my-service',
  collectorUrl: 'https://collector.example.com:4317',
  transport: 'grpc',
});
```

### Mode 3: One-way TLS (custom CA)
The client verifies the server using a custom CA certificate, but the server does not verify the client. Use this when your collector uses a self-signed or private CA certificate.
```js
init({
  serviceName: 'my-service',
  collectorUrl: 'https://collector.example.com:4317',
  transport: 'grpc',
  tls: {
    caPath: '/path/to/ca.pem',
  },
});
```

### Mode 4: mTLS (mutual TLS)
Both the client and server authenticate each other. The client presents a certificate signed by a CA that the server trusts, and the server presents a certificate signed by a CA that the client trusts. This is the most secure option and is recommended for production.
```js
init({
  serviceName: 'my-service',
  collectorUrl: 'collector.example.com:4317',
  transport: 'grpc',
  tls: {
    certPath: '/path/to/client.crt', // Client certificate
    keyPath: '/path/to/client.key',  // Client private key
    caPath: '/path/to/ca.pem',       // CA certificate to verify the server
  },
});
```

The `tls` object accepts:
| Property | Required | Description |
| :--- | :--- | :--- |
| `caPath` | Yes | Path to the CA certificate (public key of the Certificate Authority) used to verify the server's identity. |
| `certPath` | For mTLS | Path to the client certificate presented to the server. |
| `keyPath` | For mTLS | Path to the client private key. |
> Note: All three properties (`caPath`, `certPath`, `keyPath`) must be provided for mTLS. If only `caPath` is provided, the SDK falls back to one-way TLS. Certificates are loaded once at startup via `fs.readFileSync` — a restart is required to pick up new certificates.
### mTLS Example with Environment Variables
A typical pattern is to control TLS via environment variables:
```js
// instrumentation.js
const { init } = require('mie-fintel');
const path = require('path');

const useMTLS = process.env.OTEL_USE_MTLS === 'true';

const config = {
  serviceName: 'my-service',
  collectorUrl: process.env.OTEL_COLLECTOR_URL || 'http://localhost:4317',
  transport: process.env.OTEL_TRANSPORT || 'grpc',
};

if (useMTLS) {
  config.tls = {
    certPath: path.join(__dirname, 'certs', 'client.crt'),
    keyPath: path.join(__dirname, 'certs', 'client.key'),
    caPath: path.join(__dirname, 'certs', 'ca.pem'),
  };
}

init(config);
```

```bash
# .env
OTEL_USE_MTLS=true
OTEL_TRANSPORT=grpc
OTEL_COLLECTOR_URL=collector.example.com:4317
```

### Transport & Port Considerations
| Transport | Default Port | Collector URL Format |
| :--- | :--- | :--- |
| gRPC | 4317 | `collector.example.com:4317` (no scheme) |
| HTTP | 4318 | `https://collector.example.com:4318` (include `https://`) |
Important: If a reverse proxy (e.g., nginx) sits in front of the collector, ensure it is configured for TCP passthrough (not TLS termination) so that mTLS handshakes reach the collector directly. HTTP/HTTPS proxy modes will terminate TLS at the proxy, breaking client certificate validation.
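As an illustration only (the hostnames and ports below are assumptions, not part of this package), a TCP-passthrough block using nginx's `stream` module might look like this. Because no `ssl_*` directives appear, nginx forwards raw bytes and the mTLS handshake reaches the collector untouched:

```nginx
# nginx.conf sketch: TCP passthrough, NOT TLS termination.
stream {
  server {
    listen 4317;                      # public OTLP gRPC port
    proxy_pass otel-collector:4317;   # assumed upstream host:port
  }
}
```

By contrast, an `http { server { listen 443 ssl; ... } }` block would terminate TLS at nginx and the client certificate would never reach the collector.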
## 📝 Complete Example
Here's a simple working example showing all the key features:
### Directory Structure
```
my-app/
├── package.json
├── instrumentation.js
└── server.js
```

### instrumentation.js
Initialize the SDK first:
```js
// instrumentation.js
const { init } = require('mie-fintel');

init({
  serviceName: 'my-app',
  collectorUrl: process.env.OTEL_COLLECTOR_URL || 'http://localhost:4317',
});
```

### server.js
Your application with all telemetry features:
```js
// server.js
// IMPORTANT: Load instrumentation first!
require('./instrumentation');

const express = require('express');
const { fintel, fintrace, finlogs, fincontext } = require('mie-fintel');

const app = express();
app.use(express.json());

app.post('/api/process', async (req, res) => {
  // Use fintrace.fn to automatically create a span
  const result = await fintrace.fn('process.request', async (span) => {
    // Add attributes to the trace
    span.setAttribute('user.id', req.body.userId);

    // Count requests
    fintel.counter('requests.total', 1, { endpoint: '/api/process' });

    // Use fincontext to pass data through async operations
    return await fincontext.withBaggage('request_id', req.body.id, async () => {
      // Simulate some work and measure duration
      const startTime = Date.now();
      await doSomeWork(req.body);
      const duration = (Date.now() - startTime) / 1000;

      // Record duration as a histogram
      fintel.histogram(
        'request.duration',
        duration,
        { endpoint: '/api/process' },
        { unit: 'seconds', description: 'Request processing time' }
      );

      // Log success
      finlogs.emit('Request processed', 'INFO', { userId: req.body.userId });

      return { success: true };
    });
  });

  res.json(result);
});

async function doSomeWork(data) {
  // This function can access baggage from parent context
  const requestId = fincontext.getBaggage('request_id');
  console.log(`Processing request ${requestId}`);

  // Simulate async work
  await new Promise((resolve) => setTimeout(resolve, 100));
}

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
```

### package.json
```json
{
  "name": "my-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node -r ./instrumentation.js server.js"
  },
  "dependencies": {
    "express": "^4.18.0",
    "mie-fintel": "github:mieweb/mie-fintel"
  }
}
```

That's it! This example demonstrates:

- ✅ Tracing with `fintrace.fn()`
- ✅ Metrics: counters and histograms
- ✅ Structured logging
- ✅ Context propagation with baggage
## 🐳 Local Testing Setup with Docker

(Optional: only needed if you want to recreate the local setup shown in the flow diagram.)
For local development and testing, you can run the full observability stack using Docker. This section shows how to set up PMM Server (for viewing data) and the OpenTelemetry Collector (for receiving and forwarding telemetry).
### PMM Server Setup (optional)
PMM Server provides the dashboard where you can visualize your metrics and traces. We map port 8080 locally to avoid conflicts with other services.
```bash
docker run -d \
  -p 8080:80 \
  -p 8443:443 \
  --name pmm-server \
  --restart always \
  percona/pmm-server:2
```

Note: Wait approximately 60 seconds for PMM Server to initialize. Once ready, visit http://localhost:8080 and log in with:

- Username: `admin`
- Password: `admin`
### OpenTelemetry Collector Setup (optional)
The OTel Collector receives telemetry data from your application and forwards it to PMM Server using Prometheus Remote Write protocol.
#### Configuration File
This repository includes a pre-configured collector configuration at `otel/otel-collector-config.yaml`. This configuration file:
- Receives OTLP data on ports 4317 (gRPC) and 4318 (HTTP)
- Forwards metrics to PMM Server using Prometheus Remote Write
- Exposes health check and metrics endpoints
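As a rough sketch of what such a configuration contains (the file shipped in `otel/otel-collector-config.yaml` is authoritative; in particular, the remote-write endpoint below is an assumption you must adjust for your PMM Server):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  prometheusremotewrite:
    # Assumption: replace with your PMM Server's remote-write endpoint
    endpoint: http://pmm-server/prometheus/api/v1/write

extensions:
  health_check:

service:
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```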
#### Start the Collector
Run the OpenTelemetry Collector from your project root, mounting the configuration file:
```bash
docker run --name otel-collector --restart always -d \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 8888:8888 \
  -v $(pwd)/otel/otel-collector-config.yaml:/etc/otel-collector-config.yaml \
  otel/opentelemetry-collector-contrib:latest \
  --config=/etc/otel-collector-config.yaml
```

Ports:

- `4317` - OTLP gRPC (what your app uses)
- `4318` - OTLP HTTP
- `8888` - Collector metrics and health checks
### Verify the Setup
1. Check that both containers are running:

   ```bash
   docker ps
   ```

   You should see both `pmm-server` and `otel-collector` running.

2. Test the collector endpoint:

   ```bash
   curl http://localhost:8888/metrics
   ```

   This should return Prometheus-formatted metrics from the collector itself.

3. Run your application with the instrumentation configured to point to `http://localhost:4317`.

4. View metrics in PMM: navigate to http://localhost:8080 and explore your application's metrics in the dashboard.
### Cleanup
To stop and remove the containers when you're done:
```bash
docker stop pmm-server otel-collector
docker rm pmm-server otel-collector
```

## 🧪 Testing with the MIE OpenSource OTel Collector
For quick testing without setting up your own collector, you can use the shared MIE OpenSource OTel Collector:
```js
// instrumentation.js
const { init } = require('mie-fintel');

init({
  serviceName: 'bluehive-ivr-local',
  collectorUrl:
    process.env.OTEL_COLLECTOR_URL || 'https://otel-collector-recieving.os.mieweb.org',
  transport: 'http',
  insecure: true,
});
```

This collector is already configured to forward metrics to a PMM Server via Prometheus Remote Write.
Where to see results:
| Dashboard | URL | Description |
| ----------------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------- |
| PMM Server | https://pmm-test.os.mieweb.org/ | Full metrics dashboard (login: admin / admin) |
| Collector Metrics | https://otel-metrics-test.os.mieweb.org/metrics | Raw Prometheus metrics from the collector itself |
Below is an example screenshot of the PMM Explore page showing some OTel metrics as they arrived, all produced by the PMM Server and OTel Collector test setup described above:
