
@traceai/redis

v0.1.0

OpenTelemetry instrumentation for Redis Vector Search (RediSearch) in Node.js/TypeScript applications.

Installation

npm install @traceai/redis
# or
pnpm add @traceai/redis
# or
yarn add @traceai/redis

Prerequisites

  • Node.js >= 18
  • Redis client (redis >= 4.0.0)
  • Redis Stack or Redis with RediSearch module
  • OpenTelemetry SDK configured in your application

Quick Start

import * as redis from "redis";
import { createClient } from "redis";
import { RedisInstrumentation } from "@traceai/redis";

// Initialize instrumentation
const instrumentation = new RedisInstrumentation({
  traceConfig: {
    maskInputs: false,
    maskOutputs: false,
  },
});

// Enable instrumentation
instrumentation.enable();

// Manually instrument the redis module so its calls emit spans
instrumentation.manuallyInstrument(redis);

// All Redis operations issued through this client are now traced
const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

// Vector search with FT.SEARCH (traced)
const results = await client.ft.search("idx:products", "*=>[KNN 10 @embedding $vector AS score]", {
  PARAMS: {
    vector: Buffer.from(new Float32Array([0.1, 0.2, 0.3]).buffer),
  },
  RETURN: ["name", "price", "score"],
  SORTBY: { BY: "score" },
  DIALECT: 2,
});
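The `PARAMS.vector` value must be the raw FLOAT32 bytes of the embedding, matching the index's `TYPE: "FLOAT32"` and `DIM`. A pair of helpers (illustrative, not part of this package) makes the conversion explicit:

```typescript
// Encode a number[] as the FLOAT32 blob RediSearch expects for a
// TYPE: "FLOAT32" vector field. The byte length must be 4 * DIM.
function toVectorBlob(vec: number[]): Buffer {
  return Buffer.from(new Float32Array(vec).buffer);
}

// Decode a blob back into numbers, e.g. for debugging round-trips.
function fromVectorBlob(blob: Buffer): number[] {
  return Array.from(
    new Float32Array(blob.buffer, blob.byteOffset, blob.byteLength / 4)
  );
}
```

A blob whose length is not 4 × DIM will typically cause FT.SEARCH to reject the query, so checking `blob.length === 4 * dim` before sending is a cheap safeguard.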

Configuration Options

interface RedisInstrumentationConfig {
  // Enable/disable the instrumentation
  enabled?: boolean;

  // Capture query vectors in span attributes
  captureQueryVectors?: boolean;

  // Capture result documents
  captureDocuments?: boolean;
}

interface TraceConfigOptions {
  // Mask sensitive input data
  maskInputs?: boolean;

  // Mask sensitive output data
  maskOutputs?: boolean;
}

Traced Operations

The instrumentation automatically traces Redis operations with special handling for vector search:

Vector Search Operations

  • FT.SEARCH - Vector similarity search
  • FT.AGGREGATE - Aggregation with vector scoring
  • FT.CREATE - Create search index

Hash Operations (for vector storage)

  • HSET - Store hash with vector field
  • HGET / HMGET - Retrieve hash fields
  • HDEL - Delete hash

JSON Operations (for document storage)

  • JSON.SET - Store JSON document
  • JSON.GET - Retrieve JSON document
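For HSET-backed vector storage, the vector field holds the same FLOAT32 blob used in query PARAMS. A sketch of assembling the field map (the `ProductDoc` shape and helper name are illustrative, not part of this package):

```typescript
interface ProductDoc {
  name: string;
  price: number;
  embedding: number[];
}

// Build the field map for e.g. client.hSet("product:1", toHashFields(doc)).
// Scalars become strings; the vector is stored as raw FLOAT32 bytes.
function toHashFields(doc: ProductDoc): Record<string, string | Buffer> {
  return {
    name: doc.name,
    price: String(doc.price),
    embedding: Buffer.from(new Float32Array(doc.embedding).buffer),
  };
}
```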

Span Attributes

Each traced operation includes relevant attributes:

| Attribute             | Description                                |
| --------------------- | ------------------------------------------ |
| db.system             | Always "redis"                             |
| db.operation          | Operation name (e.g., "FT.SEARCH", "HSET") |
| db.redis.index_name   | Search index name                          |
| db.redis.query        | Search query                               |
| db.redis.vector_field | Vector field name                          |
| db.redis.k            | Number of neighbors (KNN)                  |
| db.redis.result_count | Number of results returned                 |

Real-World Use Cases

1. E-commerce Product Search

import { RedisInstrumentation } from "@traceai/redis";
import { createClient, SchemaFieldTypes, VectorAlgorithms } from "redis";

const instrumentation = new RedisInstrumentation();
instrumentation.enable();

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

// Create product search index
async function createProductIndex() {
  try {
    await client.ft.create(
      "idx:products",
      {
        "$.name": { type: SchemaFieldTypes.TEXT, AS: "name" },
        "$.description": { type: SchemaFieldTypes.TEXT, AS: "description" },
        "$.category": { type: SchemaFieldTypes.TAG, AS: "category" },
        "$.price": { type: SchemaFieldTypes.NUMERIC, AS: "price" },
        "$.embedding": {
          type: SchemaFieldTypes.VECTOR,
          AS: "embedding",
          ALGORITHM: VectorAlgorithms.HNSW,
          TYPE: "FLOAT32",
          DIM: 384,
          DISTANCE_METRIC: "COSINE",
        },
      },
      { ON: "JSON", PREFIX: "product:" }
    );
  } catch (e: any) {
    if (!e.message.includes("Index already exists")) throw e;
  }
}

interface ProductFilters {
  category?: string;
  minPrice?: number;
  maxPrice?: number;
}

// Semantic product search (traced)
async function searchProducts(queryVector: number[], filters?: ProductFilters) {
  const query = buildSearchQuery(filters);

  const results = await client.ft.search(
    "idx:products",
    `${query}=>[KNN 20 @embedding $vector AS score]`,
    {
      PARAMS: {
        vector: Buffer.from(new Float32Array(queryVector).buffer),
      },
      RETURN: ["name", "description", "price", "category", "score"],
      SORTBY: { BY: "score" },
      LIMIT: { from: 0, size: 20 },
      DIALECT: 2,
    }
  );

  return results.documents.map((doc) => ({
    id: doc.id.replace("product:", ""),
    ...doc.value,
  }));
}

function buildSearchQuery(filters?: ProductFilters): string {
  const parts: string[] = ["*"];

  if (filters?.category) {
    parts.push(`@category:{${filters.category}}`);
  }
  if (filters?.minPrice !== undefined) {
    parts.push(`@price:[${filters.minPrice} +inf]`);
  }
  if (filters?.maxPrice !== undefined) {
    parts.push(`@price:[-inf ${filters.maxPrice}]`);
  }

  return parts.length > 1 ? `(${parts.slice(1).join(" ")})` : parts[0];
}

2. Real-time Recommendation Cache

interface Recommendation {
  itemId: string;
  name: string;
  score: number;
  embedding: number[];
}

async function cacheUserRecommendations(
  userId: string,
  recommendations: Recommendation[]
) {
  const pipeline = client.multi();

  // Store each recommendation as JSON (embedding included) with a 1 hour TTL
  for (const rec of recommendations) {
    const key = `rec:${userId}:${rec.itemId}`;
    pipeline.json.set(key, "$", {
      itemId: rec.itemId,
      name: rec.name,
      score: rec.score,
      embedding: rec.embedding,
      cachedAt: Date.now(),
    });
    pipeline.expire(key, 3600);
  }

  await pipeline.exec();
}

// Find similar to what user is viewing (traced)
async function getSimilarFromCache(userId: string, itemEmbedding: number[]) {
  const results = await client.ft.search(
    "idx:recommendations",
    `@userId:{${userId}}=>[KNN 5 @embedding $vector AS similarity]`,
    {
      PARAMS: {
        vector: Buffer.from(new Float32Array(itemEmbedding).buffer),
      },
      RETURN: ["itemId", "name", "score", "similarity"],
      SORTBY: { BY: "similarity" },
      DIALECT: 2,
    }
  );

  return results.documents;
}

3. Session-based Semantic Search

interface SearchSession {
  sessionId: string;
  queries: { text: string; embedding: number[]; timestamp: number }[];
}

async function sessionAwareSearch(
  sessionId: string,
  queryEmbedding: number[],
  recentQueryWeight = 0.3
) {
  // Get session context (shape assumed to match SearchSession above)
  const sessionData = (await client.json.get(
    `session:${sessionId}`
  )) as SearchSession | null;

  let searchVector = queryEmbedding;

  if (sessionData && sessionData.queries.length > 0) {
    // Blend with recent query embeddings for context
    const recentEmbedding = sessionData.queries[sessionData.queries.length - 1].embedding;
    searchVector = queryEmbedding.map(
      (val, i) =>
        val * (1 - recentQueryWeight) + recentEmbedding[i] * recentQueryWeight
    );
  }

  // Search with blended vector (traced)
  const results = await client.ft.search(
    "idx:content",
    "*=>[KNN 10 @embedding $vector AS score]",
    {
      PARAMS: {
        vector: Buffer.from(new Float32Array(searchVector).buffer),
      },
      RETURN: ["title", "content", "score"],
      DIALECT: 2,
    }
  );

  // Update session
  await client.json.arrAppend(`session:${sessionId}`, "$.queries", {
    text: "",
    embedding: queryEmbedding,
    timestamp: Date.now(),
  });

  return results.documents;
}
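The blending step above is plain linear interpolation between the current and previous query embeddings; extracted as a pure helper (illustrative, not part of this package) it is easy to unit-test:

```typescript
// Linearly blend two equal-length embeddings:
// weight = 0 returns `current`, weight = 1 returns `previous`.
function blendEmbeddings(
  current: number[],
  previous: number[],
  weight: number
): number[] {
  if (current.length !== previous.length) {
    throw new Error("embedding dimensions must match");
  }
  return current.map((v, i) => v * (1 - weight) + previous[i] * weight);
}
```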

4. Real-time Anomaly Detection

async function detectAnomaly(metricVector: number[], threshold = 0.8) {
  // Search for similar historical patterns (traced)
  const similar = await client.ft.search(
    "idx:metrics",
    "*=>[KNN 10 @embedding $vector AS similarity]",
    {
      PARAMS: {
        vector: Buffer.from(new Float32Array(metricVector).buffer),
      },
      RETURN: ["timestamp", "label", "similarity"],
      DIALECT: 2,
    }
  );

  const normalPatterns = similar.documents.filter(
    (doc) => doc.value.label === "normal" && parseFloat(doc.value.similarity) > threshold
  );

  const isAnomaly = normalPatterns.length < 3;

  if (isAnomaly) {
    // Store anomaly for future reference
    await client.json.set(`anomaly:${Date.now()}`, "$", {
      embedding: metricVector,
      timestamp: Date.now(),
      similarPatterns: similar.documents.slice(0, 3),
    });
  }

  return {
    isAnomaly,
    confidence: 1 - (normalPatterns.length / 10),
    similarPatterns: similar.documents,
  };
}

5. Multi-index Federated Search

async function federatedSearch(queryVector: number[]) {
  // Search across multiple indices in parallel (all traced)
  const [products, articles, support] = await Promise.all([
    client.ft.search(
      "idx:products",
      "*=>[KNN 5 @embedding $vector AS score]",
      {
        PARAMS: { vector: Buffer.from(new Float32Array(queryVector).buffer) },
        RETURN: ["name", "price", "score"],
        DIALECT: 2,
      }
    ),
    client.ft.search(
      "idx:articles",
      "*=>[KNN 5 @embedding $vector AS score]",
      {
        PARAMS: { vector: Buffer.from(new Float32Array(queryVector).buffer) },
        RETURN: ["title", "excerpt", "score"],
        DIALECT: 2,
      }
    ),
    client.ft.search(
      "idx:support",
      "*=>[KNN 5 @embedding $vector AS score]",
      {
        PARAMS: { vector: Buffer.from(new Float32Array(queryVector).buffer) },
        RETURN: ["question", "answer", "score"],
        DIALECT: 2,
      }
    ),
  ]);

  return {
    products: products.documents,
    articles: articles.documents,
    supportFAQs: support.documents,
  };
}

6. Geo + Vector Hybrid Search

async function nearbySemanticSearch(
  queryVector: number[],
  location: { lat: number; lon: number },
  radiusKm: number
) {
  // Combine geo filter with vector search (traced)
  const results = await client.ft.search(
    "idx:places",
    `@location:[${location.lon} ${location.lat} ${radiusKm} km]=>[KNN 20 @embedding $vector AS relevance]`,
    {
      PARAMS: {
        vector: Buffer.from(new Float32Array(queryVector).buffer),
      },
      RETURN: ["name", "address", "location", "relevance"],
      SORTBY: { BY: "relevance" },
      DIALECT: 2,
    }
  );

  return results.documents.map((doc) => ({
    ...doc.value,
    id: doc.id,
  }));
}
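The hybrid query string interleaves a geo prefilter with the KNN clause; a tiny builder (hypothetical, mirroring the template above) keeps the syntax in one place. Note that RediSearch geo filters take `[lon lat radius unit]`, longitude first:

```typescript
// Build a RediSearch hybrid query: geo prefilter, then KNN over the matches.
function buildGeoKnnQuery(
  lon: number,
  lat: number,
  radiusKm: number,
  k: number
): string {
  return `@location:[${lon} ${lat} ${radiusKm} km]=>[KNN ${k} @embedding $vector AS relevance]`;
}
```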

Index Creation Examples

// HNSW index for high-recall scenarios
await client.ft.create(
  "idx:documents",
  {
    embedding: {
      type: SchemaFieldTypes.VECTOR,
      ALGORITHM: VectorAlgorithms.HNSW,
      TYPE: "FLOAT32",
      DIM: 1536,
      DISTANCE_METRIC: "COSINE",
      M: 40,
      EF_CONSTRUCTION: 200,
    },
  },
  { ON: "HASH", PREFIX: "doc:" }
);

// FLAT index for exact search on smaller datasets
await client.ft.create(
  "idx:cache",
  {
    embedding: {
      type: SchemaFieldTypes.VECTOR,
      ALGORITHM: VectorAlgorithms.FLAT,
      TYPE: "FLOAT32",
      DIM: 384,
      DISTANCE_METRIC: "L2",
    },
  },
  { ON: "JSON", PREFIX: "cache:" }
);
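The two `DISTANCE_METRIC` choices above score neighbors differently. Rough reference implementations for intuition only (not what the server runs; RediSearch conventionally reports cosine *distance*, i.e. 1 − similarity, and may report L2 as the squared distance):

```typescript
// Cosine distance: 1 - cos(a, b). 0 means identical direction, 2 opposite.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / Math.sqrt(normA * normB);
}

// Squared Euclidean (L2) distance between two vectors.
function l2Distance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return sum;
}
```

COSINE suits embeddings where only direction matters (most text-embedding models); L2 suits vectors where magnitude carries meaning.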

Integration with OpenTelemetry

import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { RedisInstrumentation } from "@traceai/redis";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces",
  }),
  instrumentations: [new RedisInstrumentation()],
});

sdk.start();

Performance Tips

  1. Use HNSW for large datasets (>100k vectors)
  2. Tune M and EF_CONSTRUCTION based on recall requirements
  3. Use connection pooling for high-throughput scenarios
  4. Leverage Redis pipelining for batch operations
  5. Set appropriate TTLs for cached vectors
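Tip 4 matters most for bulk vector upserts: batching writes into fixed-size pipelines bounds client memory while amortizing round trips. A sketch of the batching side (the commented usage assumes a connected node-redis v4 client named `client` and a hypothetical `docs` array):

```typescript
// Split items into fixed-size batches for pipelined writes.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage sketch (assumes `client` is a connected node-redis v4 client):
// for (const batch of chunk(docs, 500)) {
//   const pipeline = client.multi();
//   for (const doc of batch) {
//     pipeline.hSet(`doc:${doc.id}`, {
//       embedding: Buffer.from(new Float32Array(doc.embedding).buffer),
//     });
//   }
//   await pipeline.exec();
// }
```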

License

Apache-2.0