# @depths/client
Minimal, opinionated telemetry client for TypeScript that emits OTLP-compatible JSON to the Depths Python telemetry server (FastAPI, v0.2.0). No OpenTelemetry SDK dependency; small surface; works in browsers, Expo (Go-compatible), and Node.
- Traces with spans (scoped helpers and handles)
- Logs auto-linked to the active span
- Metrics: counter, gauge, histogram (client-side binning with bounds + LRU)
- Simple batching and periodic flush; best-effort flush on app close
- OTLP JSON on the wire, so it plays nicely with other OTLP tooling (spec: traces, metrics, logs) ([OpenTelemetry][1])
We propagate context using the W3C Trace Context `traceparent` header. If you instrument your outgoing calls or continue incoming headers, traces remain stitched across services. ([W3C][2])
## Install
```bash
npm install @depths/client
```

## Quick start
### Browser / Expo
```ts
import { DepthsClient } from '@depths/client';
const depths = new DepthsClient({
baseUrl: 'https://telemetry.myapp.com', // the endpoint where your depths telemetry server is running
projectId: 'proj_123',
serviceName: 'web',
// optional:
// serviceVersion: '1.0.0',
// resource: { env: 'prod' },
// flushIntervalMs: 2000,
// maxQueue: 200,
// gzip: true,
autoInstrumentFetch: true, // adds W3C traceparent when a span is active
});
// Trace with scoped spans
await depths.withTrace('checkout', async (trace) => {
await trace.withSpan('load-cart', async () => {/* ... */});
await trace.withSpan('payment', async (span) => {
span.addEvent('payment.intent.created', { provider: 'stripe' });
});
});
// Log (auto-linked to active span if any)
depths.log({ level: 'INFO', body: 'user signed up', attrs: { plan: 'pro' } });
// Metrics
const counter = depths.metrics.counter('orders.created');
counter.add(1, { region: 'eu' });
const latency = depths.metrics.histogram('http.server.latency_ms');
// custom bounds: depths.metrics.histogram('...', { bounds: [0,5,10,20,50,100] });
latency.record(37, { route: '/checkout', status: 200 });
// Flush on demand (also runs on interval and pagehide)
await depths.flush();
```

- `instrumentFetch()` (controlled by `autoInstrumentFetch`) appends `traceparent` to outgoing `fetch` when a span is active.
- On web, the client will try `navigator.sendBeacon()` during `pagehide`/`visibilitychange` for a best-effort final flush. `sendBeacon` is a standard async POST designed for unload-time telemetry. ([MDN Web Docs][3])
- If available and enabled, the browser path compresses large payloads with the Compression Streams API (gzip). ([MDN Web Docs][4])
Expo (Go) works like “browser”: use `DepthsClient`. You can also hook React Native `AppState`:
```ts
// React Native / Expo
import { AppState } from 'react-native';
depths.registerAppState(AppState); // flush when the app goes background/inactive
```

### Node
```ts
import { DepthsNode } from '@depths/client';
const depths = new DepthsNode({
baseUrl: 'http://127.0.0.1:4318',
projectId: 'proj_123',
serviceName: 'api',
serviceVersion: '1.2.3',
// resource, flushIntervalMs, maxQueue, gzip...
});
// Continue upstream traces (manual)
const handle = depths.adoptFromTraceparent(req.headers['traceparent']);
if (handle) {
await handle.withSpan('handler', async () => {
depths.log({ level: 'INFO', body: { path: req.url } });
});
handle.end();
} else {
await depths.withTrace('handler', async (t) => {
// ...
});
}
// or wrap a handler (Express/Hono style)
export const handler = depths.wrapHandler(async (req, res) => {
await depths.withSpan('db', async () => {/* ... */});
});
```

- Node uses `AsyncLocalStorage` under the hood to propagate the active span across async boundaries, so logs/metrics inside `with*` blocks automatically link to the span. ALS is the recommended API in Node for async context scoping. ([Node.js][5])
- Node transport uses `fetch`. In modern Node, `fetch` is built-in and powered by Undici. ([Node.js][6])
- Large payloads are gzipped with `zlib` when enabled.
## Tracing
API surface:
- `withTrace(name, fn, opts?)` → creates a root span and runs `fn(traceHandle)`
- `traceHandle.withSpan(name, fn, opts?)` → runs a child span under the current active span
- `withSpan(name, fn, opts?)` → runs a span under the current active span, or starts a fresh trace if none is active
- `createTrace(name, opts?)` → returns a `TraceHandle` (manual lifecycle)
- `span.addEvent(name, attrs?)`, `span.setAttributes(attrs)`, `span.recordException(err, attrs?)`, `span.end(status?)`
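A minimal sketch of the manual lifecycle using `createTrace` (the names and attribute values here are illustrative):

```ts
// Manual lifecycle: create the trace, run spans under it, end it yourself
const trace = depths.createTrace('import-job');

await trace.withSpan('fetch-rows', async (span) => {
  span.setAttributes({ source: 's3' });
  span.addEvent('rows.fetched', { count: 1200 });
});

await trace.withSpan('write-rows', async (span) => {
  try {
    // ... persist the rows ...
  } catch (err) {
    span.recordException(err as Error);
    throw err;
  }
});

trace.end(); // closes the root span
```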
## Context propagation
Outgoing (browser):

- `autoInstrumentFetch` adds a `traceparent` header to `fetch` when a span is active. `traceparent` is defined by the W3C Trace Context spec. ([W3C][2])

Incoming (Node):

- `adoptFromTraceparent(header)` returns a `TraceHandle` seeded with the upstream trace id. Use `handle.withSpan(...)` to run code under that trace, then `handle.end()`.
- `wrapHandler(fn)` parses the header, creates a request root span (continuing the upstream trace when present), activates the context while your handler runs, and ends the span.
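A sketch of wiring `wrapHandler` into an Express route (this assumes the wrapped function can be passed straight to Express, as in the Node quick start above):

```ts
import express from 'express';
import { DepthsNode } from '@depths/client';

const depths = new DepthsNode({
  baseUrl: 'http://127.0.0.1:4318',
  projectId: 'proj_123',
  serviceName: 'api',
});

const app = express();

// wrapHandler continues the upstream trace from `traceparent` when present,
// opens a request root span, and ends it when the handler settles.
app.get('/checkout', depths.wrapHandler(async (req, res) => {
  await depths.withSpan('load-cart', async () => {
    // ... load the cart ...
  });
  res.json({ ok: true });
}));

app.listen(3000);
```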
## Logs
```ts
depths.log({
level: 'INFO', // 'TRACE' | 'DEBUG' | 'INFO' | 'WARN' | 'ERROR' | 'FATAL'
body: { action: 'signup' }, // string | number | boolean | object
attrs: { plan: 'pro' }, // optional attributes
});
```

When a span is active, the log record automatically includes `traceId` and `spanId` so it correlates in the backend. This happens in both the browser and Node clients.
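For example, a log emitted inside a span is exported with that span's ids attached:

```ts
await depths.withTrace('signup', async (trace) => {
  await trace.withSpan('create-user', async () => {
    // Correlates with the `create-user` span in the backend
    depths.log({ level: 'INFO', body: 'user created', attrs: { plan: 'pro' } });
  });
});
```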
## Metrics
Counters and gauges attach `trace_id` and `span_id` attributes at record time if a span is active, so you can slice by trace. Histograms aggregate locally and add the link at flush time.
```ts
// Counter
const orders = depths.metrics.counter('orders.created');
orders.add(1, { region: 'us' });
// Gauge
const queueLen = depths.metrics.gauge('queue.length');
queueLen.record(42, { shard: 'a' });
// Histogram
const latency = depths.metrics.histogram('http.server.latency_ms', {
// optional:
// bounds: [0, 5, 10, 20, 50, 100, 250, 500, 1000],
// maxSeries: 200,
});
latency.record(123, { route: '/checkout', status: 200 });
```

Implementation details you can rely on:
- Histogram binning with explicit bounds (your choice) or a sensible 2-based default. Per series (distinct attribute set) we track `count`, `sum`, `min`, `max`, and `bucketCounts[]`.
- LRU capping of series to avoid unbounded memory growth.
- Delta export: on flush we snapshot and reset each histogram's accumulators, and we send `aggregationTemporality: 2` (DELTA).
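To make the binning concrete, here is a sketch of how a value maps to a bucket under explicit bounds (this illustrates the OTLP data model, not the SDK's exact code): bucket `i` counts values `v` with `bounds[i-1] < v <= bounds[i]`, and a final bucket catches everything above the last bound, so `bucketCounts` has `bounds.length + 1` entries.

```ts
// Explicit-bound binning sketch: find the first bound the value fits under,
// or fall through to the overflow bucket.
function bucketIndex(value: number, bounds: number[]): number {
  for (let i = 0; i < bounds.length; i++) {
    if (value <= bounds[i]) return i;
  }
  return bounds.length; // overflow bucket
}

const bounds = [0, 5, 10, 20, 50, 100];
const bucketCounts = new Array(bounds.length + 1).fill(0);
for (const v of [3, 7, 37, 250]) {
  bucketCounts[bucketIndex(v, bounds)]++;
}
// bucketCounts -> [0, 1, 1, 0, 1, 0, 1]
```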
## Flushing and lifecycle
The SDK keeps in-memory queues for spans, logs, and metrics and flushes every `flushIntervalMs` (default 2s). It also flushes when queues hit `maxQueue` and on app close where possible:

- Web: `pagehide` and `visibilitychange` try to use `sendBeacon` for a last-second flush. ([MDN Web Docs][3])
- Node: final flush on `beforeExit`, `SIGINT`, and `SIGTERM`.
You can always call `await depths.flush()` and `depths.close()` yourself.
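A small Node sketch of tuning the flush behaviour and shutting down cleanly (the option values are illustrative):

```ts
import { DepthsNode } from '@depths/client';

const depths = new DepthsNode({
  baseUrl: 'http://127.0.0.1:4318',
  projectId: 'proj_123',
  serviceName: 'worker',
  flushIntervalMs: 5000, // flush every 5s instead of the 2s default
  maxQueue: 500,         // also flush once a queue reaches 500 items
});

// ... record spans, logs, and metrics ...

// On shutdown, push anything still buffered and stop the timers.
await depths.flush();
depths.close();
```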
## What goes over the wire
We serialize OTLP JSON envelopes:
- `resource{Spans|Logs|Metrics}` wrapping a single resource and a single scope (instrumentation scope: `name = depths-analytics-ts`, `version = 0.0.1`).
- Spans include ids, start/end times, attributes, events, and status.
- Logs include time, severity, body, attributes, and optional `traceId`/`spanId`.
- Metrics include number data points for counters/gauges, and histogram data points with explicit bounds and bucket counts for histograms.
Endpoints:
- `POST {baseUrl}/v1/traces | /v1/logs | /v1/metrics`
- Headers: `x-depths-project-id: <projectId>`, `content-type: application/json`
- If compressed, `content-encoding: gzip`

(All of the above are implemented in `transport.ts`.)
Because the wire format is standard OTLP JSON, the Depths Python telemetry server ingests these requests directly and can forward or process them alongside other OTLP sources. See the OpenTelemetry OTLP spec for the overall data model and transport. ([OpenTelemetry][1])
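For reference, a single log record in this encoding looks roughly like the sketch below (a hand-written illustration of the OTLP/JSON field names; the ids and values are made up):

```ts
// Illustrative resourceLogs envelope, POSTed to {baseUrl}/v1/logs
const payload = {
  resourceLogs: [{
    resource: {
      attributes: [{ key: 'service.name', value: { stringValue: 'web' } }],
    },
    scopeLogs: [{
      scope: { name: 'depths-analytics-ts', version: '0.0.1' },
      logRecords: [{
        timeUnixNano: '1700000000000000000',
        severityNumber: 9, // INFO
        severityText: 'INFO',
        body: { stringValue: 'user signed up' },
        attributes: [{ key: 'plan', value: { stringValue: 'pro' } }],
        traceId: '4bf92f3577b34da6a3ce929d0e0e4736',
        spanId: '00f067aa0ba902b7',
      }],
    }],
  }],
};
```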
## Notes on environments
- Browsers: `sendBeacon` is used for unload-time sending when requested. ([MDN Web Docs][3])
- Compression: `CompressionStream('gzip')` is used when available. Support is good in modern Chromium-based browsers and improving elsewhere. ([MDN Web Docs][4])
- Node: async context is tracked with `AsyncLocalStorage`. ([Node.js][5]) `fetch` is available natively in modern Node and uses Undici under the hood. ([Node.js][6])
## FAQ
Do I need the full OpenTelemetry SDK?
No. This client is intentionally minimal and sends valid OTLP JSON over HTTP.
How are logs and metrics linked to traces?
If there is an active span when you call `log` or `metrics.*.record/add`, the SDK attaches the span or trace ids automatically so you can correlate in the Depths backend.
Can I continue upstream traces on my server?
Yes. Either call `adoptFromTraceparent(header)` and then run work under the returned handle, or wrap the handler with `wrapHandler(fn)` for a one-liner.
## License
Apache-2.0
