vmetrics-js

License: GPL-3.0

A high-performance TypeScript/JavaScript client for VictoriaMetrics, optimized for IoT and high-throughput time-series data ingestion.


⚠️ EARLY STAGE PROJECT - NOT PRODUCTION READY

This project is in very early development and should NOT be used in production environments yet. Currently, only a rudimentary push API is implemented. Many features are missing, and the API may change significantly. Use at your own risk!

Project Focus: We are concentrating on the push API (actively sending metrics to VictoriaMetrics). For exposing a /metrics endpoint that VictoriaMetrics can scrape (pull API), use prom-client instead.

Contributions and feedback are welcome to help make this library production-ready.


Features

  • Automatic Batching: Efficiently groups data points into batches to minimize network overhead
  • Smart Buffering: Configurable buffer with automatic flushing based on size or time intervals
  • Type-Safe: Full TypeScript support with comprehensive type definitions
  • InfluxDB Line Protocol: Uses the standard InfluxDB line protocol format
  • Thread-Safe: Built-in mutex protection for concurrent write operations
  • Minimal Dependencies: Only one lightweight runtime dependency (async-mutex)
  • Error Resilient: Automatic retry mechanism with buffer re-queueing on failures
  • Easy Integration: Simple, intuitive API for quick adoption

Installation

npm install vmetrics-js
yarn add vmetrics-js
pnpm add vmetrics-js

Quick Start

import { VictoriaMetricsClient } from 'vmetrics-js'

// Create a client instance
const client = new VictoriaMetricsClient({
    url: 'http://localhost:8428',
    batchSize: 1000, // Flush after 1000 points
    flushInterval: 5000, // Flush every 5 seconds
})

// Write data points
client.writePoint({
    measurement: 'temperature',
    tags: {
        sensor_id: 'sensor-001',
        location: 'warehouse-a',
    },
    fields: {
        value: 23.5,
        humidity: 65.2,
    },
    timestamp: new Date(), // Optional, uses server time if omitted
})

// Graceful shutdown (flushes remaining buffer)
await client.shutdown()
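
In a long-running service you will typically want to call shutdown() when the process exits so buffered points are not lost. A minimal sketch, assuming a standard Node.js runtime:

// Flush buffered points before the process exits (Node.js signal handler).
// Sketch only: adapt to your runtime's shutdown hooks.
process.on('SIGTERM', async () => {
    await client.shutdown()
    process.exit(0)
})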

Multiple Field Types

client.writePoint({
    measurement: 'system_stats',
    tags: {
        host: 'server-01',
        region: 'us-east-1',
    },
    fields: {
        cpu_usage: 45.2, // number
        memory_used: 8589934592, // number (bytes)
        disk_full: false, // boolean
        status: 'healthy', // string
    },
})

Configuration Guide

Choosing batchSize

  • High-frequency data (IoT, sensors): Use larger batches (1000-5000) to reduce network overhead
  • Low-frequency data (application metrics): Use smaller batches (100-500) for more frequent updates
  • Rule of thumb: Set to expected points per flushInterval

Choosing flushInterval

  • Real-time monitoring: 1000-5000ms (1-5 seconds)
  • Background analytics: 10000-60000ms (10-60 seconds)
  • Trade-off: Shorter intervals = fresher data but more network requests (see the combined sketch below)
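
Putting the two settings together, a configuration for a high-frequency sensor workload might look like the sketch below (values are illustrative, not benchmarked recommendations):

import { VictoriaMetricsClient } from 'vmetrics-js'

// Sketch: roughly 1000 points/second sensor stream, flushed every 2 seconds.
const iotClient = new VictoriaMetricsClient({
    url: 'http://localhost:8428',
    batchSize: 2000,     // expected points per flush window
    flushInterval: 2000, // flush at least every 2 seconds
})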

Architecture

How It Works

┌─────────────┐
│ Application │
└──────┬──────┘
       │ writePoint()
       ▼
┌─────────────────┐
│  Write Buffer   │◄─── Mutex Protected
└──────┬──────────┘
       │
       ├─► Batch Size Reached? ──► Flush ───┐
       └─► Time Interval? ───────► Flush ───┐
                                            │
                                            ▼
                                     ┌─────────────────┐
                                     │ VictoriaMetrics │
                                     └─────────────────┘
  1. Write: writePoint() adds the data point to an in-memory buffer and returns immediately
  2. Trigger: A flush happens when the buffer reaches batchSize OR flushInterval expires
  3. Send: All buffered points are sent in a single HTTP POST request
  4. Retry: On failure, the batch is re-queued into the buffer and retried on a later flush (see the sketch below)
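
The sketch below walks through this lifecycle with the API shown above (assuming a local VictoriaMetrics instance):

import { VictoriaMetricsClient } from 'vmetrics-js'

const client = new VictoriaMetricsClient({
    url: 'http://localhost:8428',
    batchSize: 500,
    flushInterval: 5000,
})

// 1. Writes return immediately; points accumulate in the in-memory buffer.
for (let i = 0; i < 1500; i++) {
    client.writePoint({
        measurement: 'demo_counter',
        tags: { source: 'lifecycle-example' },
        fields: { value: i },
    })
}
// 2./3. With batchSize = 500, three full batches are flushed automatically.

// Send anything still buffered now instead of waiting for the interval.
await client.flush()

// 4. If a send fails, the batch is re-queued and picked up by a later flush.

// Flush the remaining buffer and shut down cleanly.
await client.shutdown()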

VictoriaMetrics Setup

For local development, you can run VictoriaMetrics with Docker or Podman:

docker run -d --name victoriametrics \
  -p 8428:8428 \
  victoriametrics/victoria-metrics:latest

Access the UI at: http://localhost:8428

Troubleshooting

Data Not Appearing in VictoriaMetrics

  1. Verify VictoriaMetrics is running: curl http://localhost:8428/health
  2. Check for errors: Look for [VMClient] error logs in your console
  3. Force a flush: Call await client.flush() to send data immediately

Performance Issues

  1. Increase batch size: Larger batches = fewer HTTP requests
  2. Increase flush interval: Less frequent network I/O
  3. Use tags wisely: Too many unique tag combinations = high cardinality = slow queries (see the sketch below)
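
A sketch of what "use tags wisely" means in practice: every unique tag combination becomes its own time series, so unbounded values belong in fields rather than tags (metric and tag names below are illustrative):

// High cardinality (avoid): a unique request_id tag creates a new series per request.
client.writePoint({
    measurement: 'http_request',
    tags: { request_id: 'a1b2c3d4' },
    fields: { duration_ms: 12.4 },
})

// Bounded cardinality (prefer): tags take a small, fixed set of values.
client.writePoint({
    measurement: 'http_request',
    tags: { method: 'GET', status: '200' },
    fields: { duration_ms: 12.4, request_id: 'a1b2c3d4' },
})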

Authentication Errors

Ensure you're using the correct authentication method:

const client = new VictoriaMetricsClient({
    url: 'http://localhost:8428',
    auth: { bearer: 'your-token-here' },
})

Testing

Run the test suite:

npm test

Run the example:

npm run example

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Development Setup

# Clone the repository
git clone git@github.com:zimmerling/vmetrics-js.git

cd vmetrics-js

# Install dependencies
npm install

# Run tests
npm test

# Build
npm run build

# Lint
npm run lint

# Format code
npm run format

Current Limitations

This library is in early development and currently provides only basic functionality:

What's implemented:

  • ✅ Basic push API using InfluxDB Line Protocol (client pushes metrics to VictoriaMetrics)
  • ✅ Buffering and batching
  • ✅ Basic error handling and retry (re-queue on failure)
  • ✅ Bearer token authentication

What's missing:

  • ❌ Pull API (/metrics endpoint) - Use prom-client for this
  • ❌ Compression support
  • ❌ Circuit breaker / exponential backoff
  • ❌ Connection pooling
  • ❌ Prometheus remote write protocol
  • ❌ Comprehensive error handling
  • ❌ Browser support
  • ❌ Production-grade reliability features
  • ❌ Performance metrics and observability

Scope & Focus

This library focuses exclusively on the push API - actively sending metrics from your application to VictoriaMetrics.

For pull-based metrics (exposing a /metrics endpoint that VictoriaMetrics scrapes), use:

  • prom-client - The standard library for exposing Prometheus-compatible metrics endpoints (a minimal sketch follows)
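
For completeness, here is a minimal pull-side sketch using prom-client with Express; neither package is part of vmetrics-js, and both would need to be installed separately:

import express from 'express'
import client from 'prom-client'

// Collect default Node.js process metrics and expose them for scraping.
client.collectDefaultMetrics()

const app = express()
app.get('/metrics', async (_req, res) => {
    res.set('Content-Type', client.register.contentType)
    res.send(await client.register.metrics())
})

// VictoriaMetrics (or vmagent) would scrape http://<host>:3000/metrics.
app.listen(3000)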

Roadmap

Push API improvements:

  • [ ] Compression support (gzip)
  • [ ] Circuit breaker for failed endpoints
  • [ ] Retry with exponential backoff
  • [ ] Connection pooling and keep-alive
  • [ ] Prometheus remote write protocol support
  • [ ] Metrics about client performance (points buffered, flush duration, etc.)
  • [ ] Browser support (via fetch API)
  • [ ] Comprehensive test coverage
  • [ ] Production hardening

Not planned (use other tools):

  • Pull API / /metrics endpoint → Use prom-client
  • Query API for reading data → Use VictoriaMetrics HTTP API directly


Star this repo if you find it useful!