
@revenium/google-vertex

v0.2.2


Transparent TypeScript middleware for automatic Revenium usage tracking with Google Vertex AI


Revenium Middleware for Vertex AI (Node.js)


Automatically track and meter your Vertex AI (Enterprise) API usage with Revenium. This middleware provides seamless integration with Google Cloud's Vertex AI platform, requiring minimal code changes.

Features

  • Enterprise-Grade: Google Cloud Vertex AI integration
  • Complete Metering: Track tokens, costs, and performance metrics
  • Prompt Capture: Optional capture of prompts and responses with automatic credential sanitization
  • Custom Metadata: Add business context to your AI usage
  • Streaming Support: Real-time streaming with analytics
  • Vector Embeddings: Full embedding support with 768 dimensions
  • Secure Authentication: Google Cloud IAM integration
  • Type Safe: Full TypeScript support with comprehensive types
  • Analytics: Detailed usage analytics and reporting
  • Multi-Region: Deploy across Google Cloud regions

Why Choose Vertex AI?

| Feature        | Google AI           | Vertex AI                 |
| -------------- | ------------------- | ------------------------- |
| Authentication | API Key             | Google Cloud IAM          |
| Security       | Basic               | Enterprise-grade          |
| Compliance     | Limited             | SOC 2, HIPAA, etc.        |
| Monitoring     | Basic               | Advanced Cloud Monitoring |
| SLA            | None                | Enterprise SLA            |
| Multi-region   | No                  | Yes                       |
| Use Case       | Development/Testing | Production/Enterprise     |

Supported Models

Important: The model parameter is required when calling any controller method. You must specify the model explicitly in your code.

This middleware supports all models available in Vertex AI. The middleware does not maintain a hardcoded list of models, ensuring compatibility with new models as Google releases them.

For the latest available models, see:

Example usage:

// Without metadata (clean, simple)
const result = await controller.createChat(
  ["Your prompt here"],
  "gemini-2.0-flash-001", // required model parameter
);

// With metadata (optional)
const metadata = {
  subscriberId: "user-123",
  subscriberEmail: "[email protected]",
  organizationId: "org-456",
  productId: "product-789",
};

const resultWithMetadata = await controller.createChat(
  ["Your prompt here"],
  "gemini-2.0-flash-001",
  metadata, // optional metadata parameter
);

Getting Started

Quick Start

npm install @revenium/google-vertex

For complete setup instructions and usage examples, see the examples linked throughout this guide.

Step-by-Step Guide

The following guide walks you through setting up Revenium middleware in your project:

Step 1: Install the Package

npm install @revenium/google-vertex

Step 2: Get Your API Keys and Credentials

Revenium API Key:

  1. Go to Revenium Dashboard
  2. Sign up or log in
  3. Navigate to API Keys section
  4. Copy your API key (starts with hak_)

Vertex AI Credentials:

  1. Go to Google Cloud Console - Service Accounts
  2. Select your project or create a new one
  3. Create a service account with Vertex AI permissions
  4. Create and download a JSON key file
  5. Note your Project ID and preferred Location (e.g., us-central1)

Step 3: Setup Vertex AI Credentials

Create a keys directory:

# Create keys directory
mkdir keys

Add Your Service Account JSON

  1. Download your Google Cloud service account JSON file
  2. Save it as vertex.json in the keys directory
  3. Your project structure should look like:
    my-vertex-ai-project/
    ├── keys/
    │   └── vertex.json
    ├── .env
    └── package.json

Step 4: Setup Environment Variables

Create a .env file in your project root:

# Create .env file
touch .env  # Mac/Linux
echo. > .env  # Windows (CMD)
New-Item -Path .env -ItemType File  # Windows (PowerShell)

Note: A template .env.example file is included in this package at node_modules/@revenium/google-vertex/examples/.env.example with all required variables and helpful comments.

Copy and paste the following into .env:

# Vertex AI Configuration
GOOGLE_CLOUD_PROJECT=your_gcp_project_id
GOOGLE_CLOUD_LOCATION=us-central1
GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/vertex-service-account-key.json

# Revenium Configuration
REVENIUM_METERING_API_KEY=your_revenium_api_key_here

# Optional: For development/testing (defaults to https://api.revenium.ai)
# REVENIUM_METERING_BASE_URL=https://api.revenium.ai

# Optional: Enable debug logging
REVENIUM_LOG_LEVEL=INFO

Note: Replace the placeholder values with your actual credentials. GOOGLE_APPLICATION_CREDENTIALS must be set to the absolute path of your vertex.json file (run pwd in your terminal to find the current directory).

Step 5: Implement in Your Code

Use the examples as reference for implementing the middleware in your project:

import { VertexAIController } from "@revenium/google-vertex";

const controller = new VertexAIController();
const result = await controller.createChat(
  ["What is artificial intelligence?"],
  "gemini-2.0-flash-001",
);

For a complete working example, see: Vertex AI Basic Example

Use the examples as reference for complete implementation including:

  • How to initialize the controller
  • Making API calls with automatic metering
  • Handling streaming responses
  • Adding custom metadata to track business context

Step 6: Update package.json (Optional)

Add test scripts and module type to your package.json:

{
  "name": "my-vertex-ai-project",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "test-vertex": "node test-vertex.js"
  },
  "dependencies": {
    "@revenium/google-vertex": "^0.1.1"
  }
}

Important: If you get a "Cannot use import statement outside a module" error, make sure your package.json includes "type": "module" as shown above.

Next Steps

For more advanced usage including streaming and custom metadata, see the complete examples:


Running Examples from Cloned Repository

If you've cloned this repository from GitHub and want to run the included examples to see how the middleware works (without modifying the middleware source code):

Setup

# Clone the repository
git clone https://github.com/revenium/revenium-middleware-google-node.git
cd revenium-middleware-google-node

# Install dependencies
npm install

# Build the packages
npm run build

# Configure environment variables
cp .env.example .env
# Edit .env with your API keys

# Setup Vertex AI credentials
mkdir keys
# Add your vertex.json service account file to keys/
# Set absolute path in .env: GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/project/keys/vertex.json

Run Examples

Using npm scripts:

# Vertex AI examples
npm run example:vertex:basic      # Basic chat completion
npm run example:vertex:streaming  # Streaming response

Or use npx tsx directly:

npx tsx packages/google-vertex/examples/basic.ts
npx tsx packages/google-vertex/examples/streaming.ts

For detailed information about each example, see the examples directory.

Want to Modify the Middleware Code?

If you're planning to modify the examples or experiment with the code, the setup above is sufficient. However, if you want to modify the middleware source code itself (files in packages/google-vertex/src/), you'll need to understand the development workflow.

See Local Development and Contributing below for the complete development guide.


For complete working examples, see:

Advanced Usage

Streaming Responses

Basic streaming pattern:

const result = await controller.createStreaming(
  ["Your prompt here"],
  "gemini-2.0-flash-001",
);

for await (const chunk of result.stream) {
  // Process streaming chunks
}

For complete streaming examples with performance tracking and metadata, see:

Text Embeddings

Basic embedding pattern:

const result = await controller.createEmbedding(
  "Text to embed",
  "text-embedding-004",
);

For complete embedding examples, see:

Custom Metadata Tracking

Add business context to your AI usage. See the Revenium Metering API Reference for complete header options.

Metadata Fields

The usageMetadata parameter supports the following fields for detailed tracking:

| Field                    | Description                              | Use Case                       |
| ------------------------ | ---------------------------------------- | ------------------------------ |
| traceId                  | Session/conversation tracking identifier | Distributed tracing, debugging |
| taskType                 | AI task categorization                   | Cost analysis by workload type |
| subscriberId             | User identifier                          | Billing, rate limiting         |
| subscriberEmail          | User email address                       | Support, compliance            |
| subscriberCredentialName | Auth credential name                     | Track API keys                 |
| subscriberCredential     | Auth credential value                    | Security auditing              |
| organizationId           | Organization ID                          | Multi-tenant cost allocation   |
| subscriptionId           | Subscription plan ID                     | Plan limit tracking            |
| productId                | Product/feature ID                       | Feature cost attribution       |
| agent                    | AI agent identifier                      | Distinguish workflows          |
| responseQualityScore     | Quality rating 0.0-1.0                   | Performance analysis           |
| modelSource              | Routing layer (e.g., GOOGLE_VERTEX_AI)   | Integration analytics          |
| systemFingerprint        | Provider-issued model fingerprint        | Debugging and attribution      |
| temperature              | Sampling temperature applied             | Compare response creativity    |
| errorReason              | Upstream error message                   | Error monitoring               |
| mediationLatency         | Gateway latency in ms                    | Diagnose mediation overhead    |

Note: The Vertex middleware accepts these fields in a flat structure. Internally, subscriber fields are transformed to a nested structure (subscriber.id, subscriber.email, subscriber.credential.name, subscriber.credential.value) before being sent to the Revenium API.
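
The flat-to-nested transform described in the note can be sketched roughly as follows. Both `toNestedPayload` and the `FlatMetadata` type are hypothetical names for illustration, not the middleware's internal API:

```typescript
// Hypothetical sketch of the flat-to-nested transform described above.
// Field names come from the metadata table; the function itself is illustrative.
interface FlatMetadata {
  subscriberId?: string;
  subscriberEmail?: string;
  subscriberCredentialName?: string;
  subscriberCredential?: string;
  [key: string]: unknown; // other flat fields (organizationId, productId, ...)
}

function toNestedPayload(meta: FlatMetadata) {
  const {
    subscriberId,
    subscriberEmail,
    subscriberCredentialName,
    subscriberCredential,
    ...rest
  } = meta;
  return {
    ...rest, // non-subscriber fields pass through unchanged
    subscriber: {
      id: subscriberId,
      email: subscriberEmail,
      credential: {
        name: subscriberCredentialName,
        value: subscriberCredential,
      },
    },
  };
}

const nested = toNestedPayload({
  subscriberId: "user-123",
  subscriberEmail: "[email protected]",
  organizationId: "org-456",
});
console.log(nested.subscriber.id); // "user-123"
```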

Usage Example

Basic pattern with metadata:

const customMetadata = {
  subscriberId: "user-123",
  subscriberEmail: "[email protected]",
  organizationId: "org-456",
  productId: "chat-app",
};

const result = await controller.createChat(
  ["Your prompt here"],
  "gemini-2.0-flash-001", // required model parameter
  customMetadata, // optional metadata parameter
);

For complete metadata examples with all available fields, see:

Trace Visualization Fields

The middleware automatically captures trace visualization fields for distributed tracing and analytics:

| Field               | Type   | Description                                                                     | Environment Variable           |
| ------------------- | ------ | ------------------------------------------------------------------------------- | ------------------------------ |
| environment         | string | Deployment environment (production, staging, development)                       | REVENIUM_ENVIRONMENT, NODE_ENV |
| operationType       | string | Operation classification (CHAT, EMBED, etc.) - automatically detected           | N/A (auto-detected)            |
| operationSubtype    | string | Additional detail (function_call, etc.) - automatically detected                | N/A (auto-detected)            |
| retryNumber         | number | Retry attempt number (0 for first attempt, 1+ for retries)                      | REVENIUM_RETRY_NUMBER          |
| parentTransactionId | string | Parent transaction reference for distributed tracing                            | REVENIUM_PARENT_TRANSACTION_ID |
| transactionName     | string | Human-friendly operation label                                                  | REVENIUM_TRANSACTION_NAME      |
| region              | string | Cloud region (us-east-1, etc.) - auto-detected from AWS/Azure/GCP               | AWS_REGION, REVENIUM_REGION    |
| credentialAlias     | string | Human-readable credential name                                                  | REVENIUM_CREDENTIAL_ALIAS      |
| traceType           | string | Categorical identifier (alphanumeric, hyphens, underscores only, max 128 chars) | REVENIUM_TRACE_TYPE            |
| traceName           | string | Human-readable label for trace instances (max 256 chars)                        | REVENIUM_TRACE_NAME            |

All trace visualization fields are optional. The middleware will automatically detect and populate these fields when possible.

Example Configuration

REVENIUM_ENVIRONMENT=production
REVENIUM_REGION=us-central1
REVENIUM_CREDENTIAL_ALIAS=Vertex AI Production Key
REVENIUM_TRACE_TYPE=customer_support
REVENIUM_TRACE_NAME=Support Ticket #12345
REVENIUM_PARENT_TRANSACTION_ID=parent-txn-123
REVENIUM_TRANSACTION_NAME=Answer Customer Question
REVENIUM_RETRY_NUMBER=0

Multi-Region Deployment

// Deploy in different regions for better performance
const controllerUSCentral = new VertexAIController(
  "my-project-id",
  "us-central1",
);

const controllerEurope = new VertexAIController(
  "my-project-id",
  "europe-west1",
);

const controllerAsia = new VertexAIController(
  "my-project-id",
  "asia-southeast1",
);

What Gets Tracked

  • Token Usage: Input and output tokens for accurate billing
  • Request Duration: Total time for each API call
  • Model Information: Which model was used
  • Operation Type: Chat completion, embedding, streaming
  • Error Tracking: Failed requests and error details
  • Streaming Metrics: Time to first token for streaming responses
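
The bullet list above can be pictured as a per-request record. The `TrackedRequest` shape below is illustrative only, with assumed field names, and is not the actual wire format the middleware sends to Revenium:

```typescript
// Illustrative shape of one tracked request (assumed field names,
// not the middleware's real payload).
interface TrackedRequest {
  model: string;                              // Model Information
  operationType: "CHAT" | "EMBED" | "STREAM"; // Operation Type
  inputTokens: number;                        // Token Usage (input)
  outputTokens: number;                       // Token Usage (output)
  durationMs: number;                         // Request Duration
  timeToFirstTokenMs?: number;                // Streaming Metrics
  error?: string;                             // Error Tracking
}

const example: TrackedRequest = {
  model: "gemini-2.0-flash-001",
  operationType: "CHAT",
  inputTokens: 12,
  outputTokens: 87,
  durationMs: 1420,
};
console.log(example.model, example.inputTokens + example.outputTokens);
```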

Supported Models

As noted above, the model parameter is required when calling any controller method; you must specify the model explicitly in your code. The middleware supports all models available in Vertex AI and does not maintain a hardcoded list, so new models are compatible as soon as Google releases them.

For the latest available models, see:

Common models used in examples:

  • Chat/Streaming: gemini-2.0-flash-001, gemini-1.5-pro, gemini-1.5-flash
  • Embeddings: text-embedding-004

For complete examples with different models and use cases, see:

Note: Vertex AI does not return exact token counts for embeddings; the middleware records an estimated count based on the request size.
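
The estimator mentioned in the note is internal to the middleware. A common rough heuristic for English text is about four characters per token, which can be sketched as follows (this heuristic is an assumption for illustration, not the middleware's actual implementation):

```typescript
// Rough token estimate (~4 characters per token); an assumed heuristic
// for illustration, not the middleware's actual estimator.
function estimateTokens(text: string): number {
  return Math.max(1, Math.ceil(text.length / 4));
}

console.log(estimateTokens("Text to embed")); // 13 characters -> 4
```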

Configuration Options

Environment Variables

Required:

  • GOOGLE_CLOUD_PROJECT - Your GCP project ID
  • GOOGLE_APPLICATION_CREDENTIALS - Absolute path to your service account JSON file
  • REVENIUM_METERING_API_KEY - Your Revenium API key from Revenium Dashboard

Optional:

  • GOOGLE_CLOUD_LOCATION - GCP region (defaults to us-central1)
  • REVENIUM_METERING_BASE_URL - Revenium API base URL (defaults to https://api.revenium.ai, only needed for development/testing)
  • REVENIUM_LOG_LEVEL - Log level: DEBUG, INFO, WARN, ERROR (defaults to INFO)
  • REVENIUM_PRINT_SUMMARY - Print cost/metrics summary to console after each request: true, false, "human", or "json" (defaults to false)
  • REVENIUM_TEAM_ID - Your Revenium team ID (required for cost metrics in summary output)
  • REVENIUM_CAPTURE_PROMPTS - Capture prompts and responses for analysis: true or false (defaults to false)

Prompt Capture

The middleware can capture prompts and responses for analysis and debugging. This feature is disabled by default for privacy and performance.

Configuration

Enable via environment variable:

REVENIUM_CAPTURE_PROMPTS=true

Per-Request Control

Override the global setting for individual requests:

import { VertexAIController } from "@revenium/google-vertex";

const controller = new VertexAIController();
const result = await controller.createChat(
  ["What is artificial intelligence?"],
  "gemini-2.0-flash-001",
  {
    capturePrompts: true,
  },
);

Security

All captured prompts are automatically sanitized to remove sensitive credentials including:

  • API keys (sk-*, sk-proj-*, sk-ant-*, AIzaSy*)
  • Bearer tokens
  • Passwords
  • Generic tokens and api_key fields
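
Sanitization along these lines can be sketched with regular expressions. The patterns below are illustrative guesses covering the formats listed above, not the middleware's actual rules:

```typescript
// Hypothetical credential sanitizer; patterns approximate the key formats
// listed above and are not the middleware's internal implementation.
const CREDENTIAL_PATTERNS: RegExp[] = [
  /\bsk-(?:proj-|ant-)?[A-Za-z0-9_-]{10,}/g, // sk-*, sk-proj-*, sk-ant-* keys
  /\bAIzaSy[A-Za-z0-9_-]{10,}/g,             // Google API keys (AIzaSy*)
  /\bBearer\s+[A-Za-z0-9._-]+/g,             // Bearer tokens
];

function sanitize(text: string): string {
  return CREDENTIAL_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, "[REDACTED]"),
    text,
  );
}

console.log(sanitize("Use key AIzaSyAbCdEfGhIjKlMnOp please"));
// "Use key [REDACTED] please"
```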

Manual Configuration

Controllers read settings from the environment:

export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_LOCATION="us-central1"
export GOOGLE_APPLICATION_CREDENTIALS="/absolute/path/to/vertex.json"
export REVENIUM_METERING_API_KEY="your-revenium-key"

# Optional: Only for development/testing
# export REVENIUM_METERING_BASE_URL="https://api.revenium.ai"

Then instantiate the controller:

import { VertexAIController } from "@revenium/google-vertex";

const controller = new VertexAIController();
const response = await controller.createChat(
  ["Hello Vertex"],
  "gemini-2.0-flash-001",
);

For complete configuration examples, see:

Troubleshooting

Common Issues

"Invalid JWT Signature" Error

Your service account JSON file may be corrupted. Download a fresh copy from Google Cloud Console (don't copy/paste). Verify with: cat your-file.json | jq .private_key | head -2

"Authentication Error"

# Verify your service account file exists
ls -la keys/vertex.json

# Check environment variables
echo $GOOGLE_CLOUD_PROJECT
echo $GOOGLE_APPLICATION_CREDENTIALS

"Project ID not found"

export GOOGLE_CLOUD_PROJECT="your-actual-project-id"
export GOOGLE_APPLICATION_CREDENTIALS="$(pwd)/keys/vertex.json"

"Requests not being tracked"

export REVENIUM_METERING_API_KEY="your-actual-revenium-key"
export REVENIUM_LOG_LEVEL="DEBUG"  # Enable debug logging

Module Import Errors

{
  "type": "module"
}

Setting Environment Variables

Mac/Linux

export GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
export REVENIUM_METERING_API_KEY="your-revenium-api-key"

# Optional: Only for development/testing
# export REVENIUM_METERING_BASE_URL="https://api.revenium.ai"

Windows PowerShell

$env:GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
$env:GOOGLE_APPLICATION_CREDENTIALS="C:/path/to/service-account.json"
$env:REVENIUM_METERING_API_KEY="your-revenium-api-key"

Windows CMD

set GOOGLE_CLOUD_PROJECT=your-gcp-project-id
set GOOGLE_APPLICATION_CREDENTIALS=C:/path/to/service-account.json
set REVENIUM_METERING_API_KEY=your-revenium-api-key

Requirements

  • Node.js 18+
  • Google Cloud Project with Vertex AI enabled
  • Service Account JSON file
  • Revenium API key

Documentation

For detailed documentation, visit docs.revenium.io

Contributing

See CONTRIBUTING.md

Code of Conduct

See CODE_OF_CONDUCT.md

Security

See SECURITY.md

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For issues, feature requests, or contributions:

Local Development and Contributing

Are you planning to modify the middleware source code? (Not just run examples)

If you want to:

  • Fix bugs in the middleware
  • Add new features to @revenium/google-vertex
  • Contribute to the project
  • Test changes to the middleware before publishing

Then follow the complete development workflow in DEVELOPMENT.md.

What DEVELOPMENT.md Includes:

  • Development Workflow - Step-by-step process for making changes
  • Build System - Understanding the monorepo and TypeScript compilation
  • Testing Local Changes - How to test your modifications properly
  • When to Rebuild - Understanding when npm run build is needed
  • Publishing Checklist - Steps to publish new versions
  • Architecture Notes - Understanding the codebase structure
  • Contributing Guidelines - How to contribute to the project

Key Difference:

  • Running Examples (above): You can modify example files and run them directly with npx tsx - no rebuild needed
  • Modifying Middleware (DEVELOPMENT.md): If you modify source files in packages/google-vertex/src/, you must run npm run build before testing

Quick Start for Contributors:

# 1. Make changes to source code
vim packages/google-vertex/src/vertexAI.service.ts

# 2. Rebuild the package
npm run build

# 3. Test your changes
npm run example:vertex:basic

# 4. See DEVELOPMENT.md for complete workflow

Built by Revenium