@depths/client (v0.1.3)

Minimal, opinionated telemetry client for TypeScript that emits OTLP-compatible JSON to the Depths Python telemetry server (FastAPI, v0.2.0). No OpenTelemetry SDK dependency; small API surface; works in browsers, Expo (Go-compatible), and Node.

  • Traces with spans (scoped helpers and handles)
  • Logs auto-linked to the active span
  • Metrics: counter, gauge, histogram (client-side binning with bounds + LRU)
  • Simple batching and periodic flush; best-effort flush on app close
  • OTLP JSON on the wire, so it plays nicely with other OTLP tooling (spec: traces, metrics, logs) ([OpenTelemetry][1])

We propagate context using the W3C Trace Context header traceparent. If you instrument your outgoing calls or continue incoming headers, traces remain stitched across services. ([W3C][2])
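
For reference, the traceparent value has four dash-separated fields; the example below uses the sample ids from the W3C spec:

// version - trace-id (32 hex chars) - parent span id (16 hex chars) - flags (01 = sampled)
const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01';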


Install

npm install @depths/client

Quick start

Browser / Expo

import { DepthsClient } from '@depths/client';

const depths = new DepthsClient({
  baseUrl: 'https://telemetry.myapp.com', // the endpoint where your Depths telemetry server is running
  projectId: 'proj_123',
  serviceName: 'web',
  // optional:
  // serviceVersion: '1.0.0',
  // resource: { env: 'prod' },
  // flushIntervalMs: 2000,
  // maxQueue: 200,
  // gzip: true,
  autoInstrumentFetch: true, // adds W3C traceparent when a span is active
});

// Trace with scoped spans
await depths.withTrace('checkout', async (trace) => {
  await trace.withSpan('load-cart', async () => {/* ... */});
  await trace.withSpan('payment', async (span) => {
    span.addEvent('payment.intent.created', { provider: 'stripe' });
  });
});

// Log (auto-linked to active span if any)
depths.log({ level: 'INFO', body: 'user signed up', attrs: { plan: 'pro' } });

// Metrics
const counter = depths.metrics.counter('orders.created');
counter.add(1, { region: 'eu' });

const latency = depths.metrics.histogram('http.server.latency_ms');
// custom bounds: depths.metrics.histogram('...', { bounds: [0,5,10,20,50,100] });
latency.record(37, { route: '/checkout', status: 200 });

// Flush on demand (also runs on interval and pagehide)
await depths.flush();
  • instrumentFetch() (controlled by autoInstrumentFetch) appends traceparent to outgoing fetch when a span is active.
  • On web, the client will try navigator.sendBeacon() during pagehide/visibilitychange for a best-effort final flush. sendBeacon is a standard async POST designed for unload-time telemetry. ([MDN Web Docs][3])
  • If available and enabled, the browser path compresses large payloads with the Compression Streams API (gzip); see the sketch after this list. ([MDN Web Docs][4])
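
A minimal sketch of that compression step, assuming a browser with the Compression Streams API (illustrative; the SDK's transport may differ in detail):

// Gzip a JSON payload with the Compression Streams API before POSTing it.
async function gzipJson(payload: unknown): Promise<ArrayBuffer> {
  const compressed = new Blob([JSON.stringify(payload)])
    .stream()
    .pipeThrough(new CompressionStream('gzip'));
  // Response is a convenient way to collect a ReadableStream into bytes.
  return await new Response(compressed).arrayBuffer();
}

// The request then carries content-type: application/json and content-encoding: gzip.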

Expo (Go) works like the browser path: use DepthsClient. You can also hook into React Native's AppState:

// React Native / Expo
import { AppState } from 'react-native';
depths.registerAppState(AppState); // flush when the app goes background/inactive

Node

import { DepthsNode } from '@depths/client';

const depths = new DepthsNode({
  baseUrl: 'http://127.0.0.1:4318',
  projectId: 'proj_123',
  serviceName: 'api',
  serviceVersion: '1.2.3',
  // resource, flushIntervalMs, maxQueue, gzip...
});

// Continue upstream traces (manual)
const handle = depths.adoptFromTraceparent(req.headers['traceparent']);
if (handle) {
  await handle.withSpan('handler', async () => {
    depths.log({ level: 'INFO', body: { path: req.url } });
  });
  handle.end();
} else {
  await depths.withTrace('handler', async (t) => {
    // ...
  });
}

// or wrap a handler (Express/Hono style)
export const handler = depths.wrapHandler(async (req, res) => {
  await depths.withSpan('db', async () => {/* ... */});
});
  • Node uses AsyncLocalStorage under the hood to propagate the active span across async boundaries, so logs/metrics inside with* blocks automatically link to the span. ALS is the recommended API in Node for async context scoping. ([Node.js][5])
  • Node transport uses fetch. In modern Node, fetch is built-in and powered by Undici. ([Node.js][6])
  • Large payloads are gzipped with zlib when enabled; see the sketch after this list.
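
A corresponding sketch for Node, assuming the built-in node:zlib module (illustrative; the actual logic lives in transport.ts):

// Gzip a JSON payload with Node's built-in zlib before POSTing it.
import { gzipSync } from 'node:zlib';

function gzipJson(payload: unknown): Buffer {
  return gzipSync(Buffer.from(JSON.stringify(payload)));
}
// Sent with content-type: application/json and content-encoding: gzip.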

Tracing

API surface:

  • withTrace(name, fn, opts?) → creates a root span and runs fn(traceHandle)
  • traceHandle.withSpan(name, fn, opts?) → runs a child span under the current active span
  • withSpan(name, fn, opts?) → runs a span under the current active span, or starts a fresh trace if none is active
  • createTrace(name, opts?) → returns a TraceHandle (manual lifecycle)
  • span.addEvent(name, attrs?), span.setAttributes(attrs), span.recordException(err, attrs?), span.end(status?)

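A sketch of the manual lifecycle using only the calls listed above (illustrative usage of the documented surface, not additional API):

// Manual lifecycle: create the trace yourself and end it explicitly.
const trace = depths.createTrace('import-job');

await trace.withSpan('parse-file', async (span) => {
  span.setAttributes({ 'file.rows': 1200 });
  span.addEvent('parse.finished');
});

await trace.withSpan('write-db', async (span) => {
  try {
    /* ... */
  } catch (err) {
    span.recordException(err as Error);
    throw err;
  }
});

trace.end(); // close the root span once the work is done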

Context propagation

  • Outgoing (browser): autoInstrumentFetch adds a traceparent header to fetch when a span is active. traceparent is defined by the W3C Trace Context spec. ([W3C][2])

  • Incoming (Node):

    • adoptFromTraceparent(header) returns a TraceHandle seeded with the upstream trace id. Use handle.withSpan(...) to run code under that trace, then handle.end().
    • wrapHandler(fn) parses the header, creates a request root span (continuing the upstream trace when present), activates the context while your handler runs, and ends the span.

Logs

depths.log({
  level: 'INFO',                // 'TRACE' | 'DEBUG' | 'INFO' | 'WARN' | 'ERROR' | 'FATAL'
  body: { action: 'signup' },   // string | number | boolean | object
  attrs: { plan: 'pro' },       // optional attributes
});

When a span is active, the log record automatically includes traceId and spanId so it correlates in the backend. This happens in both façades (DepthsClient and DepthsNode).
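
For example, reusing the client from the quick start:

await depths.withSpan('import-users', async () => {
  // Exported with the traceId/spanId of the 'import-users' span.
  depths.log({ level: 'WARN', body: 'row skipped', attrs: { row: 42 } });
});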


Metrics

Counters and gauges attach trace_id and span_id attributes at record time if a span is active, so you can slice by trace. Histograms aggregate locally and add the link at flush time.

// Counter
const orders = depths.metrics.counter('orders.created');
orders.add(1, { region: 'us' });

// Gauge
const queueLen = depths.metrics.gauge('queue.length');
queueLen.record(42, { shard: 'a' });

// Histogram
const latency = depths.metrics.histogram('http.server.latency_ms', {
  // optional:
  // bounds: [0, 5, 10, 20, 50, 100, 250, 500, 1000],
  // maxSeries: 200,
});
latency.record(123, { route: '/checkout', status: 200 });

Implementation details you can rely on:

  • Histogram binning with explicit bounds (your choice) or a sensible 2-based default; see the sketch after this list. Per series (distinct attribute set) we track count, sum, min, max, and bucketCounts[].
  • LRU capping of series to avoid unbounded memory growth.
  • Delta export: on flush we snapshot and reset each histogram’s accumulators and send the data points with aggregationTemporality set to DELTA.
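
The bucketing rule is the standard explicit-bounds one; a sketch (not the library's actual internals):

// With N bounds there are N + 1 buckets: a value lands in the first bucket
// whose upper bound is >= the value, or in the final overflow bucket.
function bucketIndex(value: number, bounds: number[]): number {
  let i = 0;
  while (i < bounds.length && value > bounds[i]) i++;
  return i; // 0 .. bounds.length
}

// e.g. with bounds [0, 5, 10, 20]: bucketIndex(7, ...) === 2, bucketIndex(500, ...) === 4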

Flushing and lifecycle

  • The SDK keeps in-memory queues for spans, logs, and metrics and flushes every flushIntervalMs (default 2s).

  • It also flushes when queues hit maxQueue and on app close where possible:

    • Web: pagehide and visibilitychange try to use sendBeacon for a last-second flush. ([MDN Web Docs][3])
    • Node: final flush on beforeExit, SIGINT, and SIGTERM.
  • You can always call await depths.flush() and depths.close() yourself; see the sketch after this list.
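
For a short-lived Node process (a one-off script or cron job), an explicit flush and close before exit is more deterministic than relying on exit hooks; a minimal sketch reusing the DepthsNode instance from the Node quick start:

async function runJob() {
  await depths.withTrace('nightly-sync', async (trace) => {
    await trace.withSpan('fetch', async () => {/* ... */});
    await trace.withSpan('store', async () => {/* ... */});
  });

  await depths.flush(); // send anything still queued
  depths.close();       // shut the client down explicitly
}

await runJob();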


What goes over the wire

  • We serialize OTLP JSON envelopes:

    • resource{Spans|Logs|Metrics} wrapping a single resource and a single scope (instrumentation scope: name = depths-analytics-ts, version = 0.0.1).
    • Spans include ids, start/end times, attributes, events, and status.
    • Logs include time, severity, body, attributes, and optional traceId/spanId.
    • Metrics include number data points for counters/gauges, and histogram data points (explicit bounds plus bucket counts) for histograms.
  • Endpoints:

    • POST {baseUrl}/v1/traces | /v1/logs | /v1/metrics
    • Headers: x-depths-project-id: <projectId>, content-type: application/json
    • If compressed, content-encoding: gzip. (All of the above are implemented in transport.ts.)

Because the wire format is standard OTLP JSON, the Depths Python telemetry server ingests these requests directly and can forward or process them alongside other OTLP sources. See the OpenTelemetry OTLP spec for the overall data model and transport. ([OpenTelemetry][1])
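
As an illustration (field names follow the OTLP JSON mapping; the exact envelope transport.ts builds may differ in detail), a minimal logs request could look like this:

const envelope = {
  resourceLogs: [{
    resource: {
      attributes: [{ key: 'service.name', value: { stringValue: 'web' } }],
    },
    scopeLogs: [{
      scope: { name: 'depths-analytics-ts', version: '0.0.1' },
      logRecords: [{
        timeUnixNano: `${Date.now()}000000`, // ms -> ns
        severityText: 'INFO',
        severityNumber: 9,
        body: { stringValue: 'user signed up' },
        attributes: [{ key: 'plan', value: { stringValue: 'pro' } }],
        // traceId / spanId appear here when a span was active
      }],
    }],
  }],
};

await fetch('https://telemetry.myapp.com/v1/logs', {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    'x-depths-project-id': 'proj_123',
  },
  body: JSON.stringify(envelope),
});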


Notes on environments

  • Browsers: sendBeacon is used for unload-time sending when requested. ([MDN Web Docs][3])
  • Compression: CompressionStream('gzip') is used when available. Support is good in modern Chromium-based browsers and improving elsewhere. ([MDN Web Docs][4])
  • Node: async context is tracked with AsyncLocalStorage. ([Node.js][5]) fetch is available natively in modern Node and uses Undici under the hood. ([Node.js][6])

FAQ

Do I need the full OpenTelemetry SDK? No. This client is intentionally minimal and sends valid OTLP JSON over HTTP.

How are logs and metrics linked to traces? If there is an active span when you call log or metrics.*.record/add, the SDK attaches the trace and span ids automatically so you can correlate them in the Depths backend.

Can I continue upstream traces on my server? Yes. Either call adoptFromTraceparent(header) and then run work under the returned handle, or wrap the handler with wrapHandler(fn) for a one-liner.


License

Apache-2.0