@revstackhq/ai

v0.6.0


<div align="center"> <h1>@revstackhq/ai</h1> <p>A seamless metering wrapper for the Vercel AI SDK.</p> </div>


If you are building an AI-native application, this package is the fastest way to monetize it. It wraps the official @ai-sdk functions, intercepts token consumption, and reports exactly which model was used and how many tokens it consumed, without breaking your streams.

Key Features

  • Zero Friction: Drop-in replacements for streamText and generateText.
  • Smart Backend: We transmit the raw usage (promptTokens, completionTokens, modelId), and Revstack Cloud calculates the exact credit deduction based on your configured margins and model pricing.
  • Non-Blocking: Usage tracking occurs transparently and never delays the user's stream.
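To make the Smart Backend bullet concrete, the raw usage report might look like the sketch below. The field names (promptTokens, completionTokens, modelId) come from the feature list above; the interface name and example values are illustrative assumptions, not the package's actual types.

```typescript
// Illustrative shape of the raw usage report sent to the backend.
// Revstack Cloud, not the client, turns this into a credit deduction
// based on configured margins and per-model pricing.
interface UsageReport {
  modelId: string;          // e.g. "gpt-4o"
  promptTokens: number;     // input tokens consumed
  completionTokens: number; // output tokens generated
}

const example: UsageReport = {
  modelId: "gpt-4o",
  promptTokens: 10,
  completionTokens: 50,
};
```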

Installation

npm install @revstackhq/ai ai

Quick Start

1. Configure Revstack Once (IoC)

Create a pre-configured instance of the AI wrapper in your app. Pass a trackUsage callback so @revstackhq/ai can report usage without knowing about your framework.

// lib/revstack.ts
import { trackUsage } from "@revstackhq/next/server";
import { createRevstackAI } from "@revstackhq/ai";

export const revstack = createRevstackAI(
  { secretKey: process.env.REVSTACK_SECRET_KEY! },
  async (key, usage, config) => {
    // This fires every time a stream or generation completes
    await trackUsage(key, usage, config);
  }
);

2. Replace streamText and generateText

Import your pre-configured instance instead of the base Vercel functions. You now only need to provide an entitlementKey.

// app/api/chat/route.ts
import { revstack } from "@/lib/revstack";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await revstack.streamText({
    model: openai("gpt-4o"),
    messages,
    entitlementKey: "ai_tokens", // Triggers metering automatically
    // Your original onFinish still fires!
    async onFinish(event) {
      console.log("Stream completed locally, and usage was already tracked.");
    },
  });

  return result.toDataStreamResponse();
}

How It Works

When a user streams a generation:

  1. The stream begins sending chunks to the client instantly.
  2. Vercel's internal onFinish event fires when the stream ends.
  3. Your pre-configured revstack.streamText wrapper intercepts this event, extracting the model.id and exact token split.
  4. It calls your injected trackUsage callback, transmitting the raw payload (e.g. promptTokens: 10, completionTokens: 50) to your backend.
  5. If you provided an onFinish handler, it runs sequentially afterward.
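The interception in steps 3–5 can be sketched roughly as follows. This is a minimal illustration under stated assumptions: wrapStreamText, the event shape, and the exact field access are hypothetical, not the package's actual internals.

```typescript
// Hypothetical sketch of the onFinish interception (names assumed).
type Usage = { promptTokens: number; completionTokens: number; modelId: string };
type TrackFn = (key: string, usage: Usage) => Promise<void>;

function wrapStreamText(
  baseStreamText: (opts: any) => Promise<any>,
  track: TrackFn
) {
  return async function streamText(opts: {
    entitlementKey: string;
    onFinish?: (event: any) => void | Promise<void>;
    [k: string]: unknown;
  }) {
    const { entitlementKey, onFinish, ...rest } = opts;
    return baseStreamText({
      ...rest,
      async onFinish(event: any) {
        // The client stream has already finished by the time onFinish
        // fires, so reporting here never delays delivered chunks.
        await track(entitlementKey, {
          promptTokens: event.usage.promptTokens,
          completionTokens: event.usage.completionTokens,
          modelId: event.response?.modelId ?? "unknown",
        });
        // Then the caller's original onFinish runs, as in step 5.
        await onFinish?.(event);
      },
    });
  };
}
```

The key design point, per the steps above, is ordering: the raw usage is reported first, then the user-supplied onFinish runs, so local handlers can rely on tracking having already happened.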

This offloads the complex math of "how many credits should GPT-4o cost vs Claude 3.5 Sonnet" entirely to your Revstack product configuration.