
@ruvector/gnn-wasm v0.1.0

WebAssembly bindings for ruvector-gnn - Graph Neural Network layers for browsers

Downloads: 1,134

RuVector GNN WASM

WebAssembly bindings for RuVector Graph Neural Network operations.

Features

  • GNN Layer Operations: Multi-head attention, GRU updates, layer normalization
  • Tensor Compression: Adaptive compression based on access frequency
  • Differentiable Search: Soft attention-based similarity search
  • Hierarchical Forward: Multi-layer GNN processing

Installation

npm install ruvector-gnn-wasm

Usage

Initialize

import init, {
  JsRuvectorLayer,
  JsTensorCompress,
  differentiableSearch,
  SearchConfig
} from 'ruvector-gnn-wasm';

await init();
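
If the package is built with wasm-bindgen's web target, init() can also be passed an explicit URL to the .wasm binary. This is a hedged sketch: the filename below follows wasm-pack's usual naming convention and is an assumption, not taken from this package.

// Assumption: web-target build; the .wasm filename is hypothetical and
// depends on how the package was actually built.
await init(new URL('./ruvector_gnn_wasm_bg.wasm', import.meta.url));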

GNN Layer

// Create a GNN layer
const layer = new JsRuvectorLayer(
  4,    // input dimension
  8,    // hidden dimension
  2,    // number of attention heads
  0.1   // dropout rate
);

// Forward pass
const nodeEmbedding = new Float32Array([1.0, 2.0, 3.0, 4.0]);
const neighbors = [
  new Float32Array([0.5, 1.0, 1.5, 2.0]),
  new Float32Array([2.0, 3.0, 4.0, 5.0])
];
const edgeWeights = new Float32Array([0.3, 0.7]);

const output = layer.forward(nodeEmbedding, neighbors, edgeWeights);
console.log('Output dimension:', layer.outputDim);
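
The Features list mentions hierarchical, multi-layer processing; here is a minimal sketch of stacking two layers, assuming the next layer's input dimension must match the previous layer's outputDim (re-using the variables defined above, with neighbor handling simplified for illustration):

// Hedged sketch: a second layer consuming the first layer's output.
// In a full graph pass, every node's neighbors would be re-embedded
// by layer 1 as well; here that step is collapsed for brevity.
const layer2 = new JsRuvectorLayer(layer.outputDim, 8, 2, 0.1);
const hiddenNeighbors = neighbors.map(
  n => layer.forward(n, [nodeEmbedding], new Float32Array([1.0]))
);
const output2 = layer2.forward(output, hiddenNeighbors, edgeWeights);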

Tensor Compression

const compressor = new JsTensorCompress();

// Compress based on access frequency
const embedding = new Float32Array(128).fill(0.5);
const compressed = compressor.compress(embedding, 0.5); // 50% access frequency

// Decompress
const decompressed = compressor.decompress(compressed);

// Or specify compression level explicitly
const compressedPQ8 = compressor.compressWithLevel(embedding, "pq8");

// Get compression ratio
const ratio = compressor.getCompressionRatio(0.5); // Returns ~2.0 for half precision
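
Every level below full precision is lossy, so it can be worth checking reconstruction error before settling on a level. A small sketch using only the calls shown above:

// Compare a lossy round trip against the original embedding.
const roundTrip = compressor.decompress(compressedPQ8);
let maxErr = 0;
for (let i = 0; i < embedding.length; i++) {
  maxErr = Math.max(maxErr, Math.abs(embedding[i] - roundTrip[i]));
}
console.log('Max reconstruction error at pq8:', maxErr);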

Compression Levels

The access frequency passed to compress() determines the compression level (a sketch of reproducing this mapping by hand follows the list):

  • f > 0.8: Full precision (no compression) - hot data
  • f > 0.4: Half precision (2x compression) - warm data
  • f > 0.1: 8-bit PQ (4x compression) - cool data
  • f > 0.01: 4-bit PQ (8x compression) - cold data
  • f <= 0.01: Binary (32x compression) - archive data
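
A hedged sketch of reproducing this mapping in application code with compressWithLevel; the thresholds mirror the list above, and the helper name is made up for illustration:

// Hypothetical helper: map an access frequency to one of the documented
// level strings, using the thresholds listed above.
function levelForFrequency(f) {
  if (f > 0.8) return 'none';
  if (f > 0.4) return 'half';
  if (f > 0.1) return 'pq8';
  if (f > 0.01) return 'pq4';
  return 'binary';
}

const coldCompressed = compressor.compressWithLevel(embedding, levelForFrequency(0.005)); // 'binary'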

Differentiable Search

const query = new Float32Array([1.0, 0.0, 0.0]);
const candidates = [
  new Float32Array([1.0, 0.0, 0.0]),  // Perfect match
  new Float32Array([0.9, 0.1, 0.0]),  // Close match
  new Float32Array([0.0, 1.0, 0.0])   // Orthogonal
];

const config = new SearchConfig(2, 1.0); // k=2, temperature=1.0
const result = differentiableSearch(query, candidates, config);

console.log('Top indices:', result.indices);
console.log('Weights:', result.weights);
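
Per the API reference below, a lower temperature sharpens the softmax over similarities. A quick sketch comparing two temperatures on the same query and candidates:

// Lower temperature concentrates the weights on the closest candidates;
// higher temperature spreads them out toward uniform.
const sharp = differentiableSearch(query, candidates, new SearchConfig(2, 0.1));
const smooth = differentiableSearch(query, candidates, new SearchConfig(2, 5.0));
console.log('Sharp weights:', sharp.weights);
console.log('Smooth weights:', smooth.weights);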

API Reference

JsRuvectorLayer

class JsRuvectorLayer {
  constructor(
    inputDim: number,
    hiddenDim: number,
    heads: number,
    dropout: number
  );

  forward(
    nodeEmbedding: Float32Array,
    neighborEmbeddings: Float32Array[],
    edgeWeights: Float32Array
  ): Float32Array;

  readonly outputDim: number;
}

JsTensorCompress

class JsTensorCompress {
  constructor();

  compress(embedding: Float32Array, accessFreq: number): object;
  compressWithLevel(embedding: Float32Array, level: string): object;
  decompress(compressed: object): Float32Array;
  getCompressionRatio(accessFreq: number): number;
}

Compression levels: "none", "half", "pq8", "pq4", "binary"

differentiableSearch

function differentiableSearch(
  query: Float32Array,
  candidateEmbeddings: Float32Array[],
  config: SearchConfig
): { indices: number[], weights: number[] };

SearchConfig

class SearchConfig {
  constructor(k: number, temperature: number);
  k: number;          // Number of results
  temperature: number; // Softmax temperature (lower = sharper)
}

cosineSimilarity

function cosineSimilarity(a: Float32Array, b: Float32Array): number;
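
A one-line usage example, assuming cosineSimilarity is imported alongside the other exports (it is not shown in the import list above):

const sim = cosineSimilarity(
  new Float32Array([1.0, 0.0, 0.0]),
  new Float32Array([0.9, 0.1, 0.0])
);
console.log(sim); // near 1 for nearly parallel vectors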

Building from Source

# Install wasm-pack
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh

# Build for Node.js
wasm-pack build --target nodejs

# Build for browser
wasm-pack build --target web

# Build for bundler (webpack, etc.)
wasm-pack build --target bundler
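
wasm-pack writes its output to a pkg/ directory by default. A hedged sketch of consuming a local Node.js build; the module filename is a guess derived from the usual crate-name convention, not taken from this repository:

// Hypothetical: load the freshly built Node.js package from wasm-pack's
// default pkg/ output directory (filename assumed from the crate name).
const { JsRuvectorLayer } = require('./pkg/ruvector_gnn_wasm');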

Performance

  • GNN layers use efficient attention mechanisms
  • Compression reduces memory usage by 2-32x
  • All operations are optimized for WASM
  • No garbage collection during forward passes

License

MIT