@vhra/guard-rails

v0.1.0

SDK-agnostic guardrails for any LLM client. One line to add budget limits, PII redaction, and prompt injection protection.

import OpenAI from 'openai'
import { wrap } from '@vhra/guard-rails'

const client = wrap(new OpenAI(), {
  budget:    { maxTokens: 50_000, window: '1d' },
  pii:       { mode: 'redact' },
  injection: true,
  nonsense:  true,
  onViolation: (e) => console.warn('[guard]', e),
})

// Use exactly like the normal client — guards run transparently
const response = await client.chat.completions.create({ ... })

Works with OpenAI, Anthropic, Google Gemini, and any SDK whose request shape uses { messages }, { contents }, or { prompt }.
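To illustrate, prompt-text extraction across those three request shapes could be sketched like this (a hypothetical helper for illustration, not the library's actual code):

```typescript
// Hypothetical sketch of SDK-agnostic prompt extraction across the
// { messages } / { contents } / { prompt } request shapes.
type ChatMessage = { role: string; content: string };
type GeminiContent = { parts: { text: string }[] };

function extractPromptText(params: Record<string, unknown>): string {
  // OpenAI / Anthropic style: { messages: [{ role, content }] }
  if (Array.isArray(params.messages)) {
    return (params.messages as ChatMessage[]).map((m) => m.content).join('\n');
  }
  // Gemini style: { contents: [{ parts: [{ text }] }] }
  if (Array.isArray(params.contents)) {
    return (params.contents as GeminiContent[])
      .flatMap((c) => c.parts.map((p) => p.text))
      .join('\n');
  }
  // Completion style: { prompt: string }
  if (typeof params.prompt === 'string') return params.prompt;
  return '';
}
```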


Install

npm install @vhra/guard-rails

No runtime dependencies.


Guards

Budget — token spend limits

wrap(client, {
  budget: {
    maxTokens: 100_000,       // hard limit
    window: '1d',             // 'session' | '1h' | '6h' | '1d' | '7d'
    onWarning: (used, limit) => console.warn(`${used}/${limit} tokens used`),
  }
})
  • Estimates tokens before the call, blocks if the projected total would exceed the limit.
  • Reconciles with actual token usage reported by the SDK after the response arrives.
  • Throws BudgetExceededError when the limit is hit.
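The estimate/block/reconcile cycle described above can be sketched as follows (an illustrative model assuming a rough ~4-characters-per-token estimate; not the library's implementation):

```typescript
// Illustrative sketch of windowed token budgeting: estimate before the
// call, block if the projected total would exceed the limit, then
// reconcile with the SDK's reported usage afterwards.
class TokenBudget {
  private used = 0;
  private windowStart = Date.now();

  constructor(
    private maxTokens: number,
    private windowMs: number, // e.g. 24 * 60 * 60 * 1000 for '1d'
  ) {}

  private roll(): void {
    // Reset the counter when the window has elapsed
    if (Date.now() - this.windowStart >= this.windowMs) {
      this.used = 0;
      this.windowStart = Date.now();
    }
  }

  // Rough pre-call estimate: ~4 characters per token
  estimate(text: string): number {
    return Math.ceil(text.length / 4);
  }

  // Block the call if the projected total would exceed the limit
  check(estimated: number): void {
    this.roll();
    if (this.used + estimated > this.maxTokens) {
      throw new Error(`Budget exceeded: ${this.used}/${this.maxTokens}`);
    }
  }

  // Reconcile with the actual usage reported in the response
  record(actualTokens: number): void {
    this.roll();
    this.used += actualTokens;
  }
}
```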

PII — detect and redact personal information

wrap(client, {
  pii: {
    mode: 'redact',           // 'redact' (default) | 'block'
    types: ['email', 'ssn'],  // default: all types
    replacement: '[REDACTED]',
  }
})

Detects: email, phone, ssn, creditcard, ip.

In redact mode the PII is replaced in the prompt before it reaches the API. In block mode the whole call is rejected.
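Conceptually, redact mode works like this simplified regex-based sketch (the patterns here are illustrative; the library's actual detectors may be more sophisticated):

```typescript
// Simplified sketch of regex-based PII redaction (illustrative patterns)
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function redactPII(
  text: string,
  replacement = '[REDACTED]',
): { text: string; found: string[] } {
  const found: string[] = [];
  let out = text;
  for (const [type, pattern] of Object.entries(PII_PATTERNS)) {
    const next = out.replace(pattern, replacement);
    if (next !== out) {
      found.push(type); // record which PII type was detected
      out = next;
    }
  }
  return { text: out, found };
}
```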

Injection — prompt injection detection

wrap(client, {
  injection: true,             // enabled by default
})

// or pass extra patterns:
wrap(client, {
  injection: { patterns: [/my-custom-pattern/i] },
})

Detects: "ignore previous instructions", role-switching ("act as", "pretend to be"), system-prompt injection markers ([SYSTEM], <|im_start|>, ### instruction, etc.), jailbreak preambles (DAN, developer mode).

Text is normalised before matching — homoglyphs (Cyrillic/Greek lookalikes) and zero-width characters are stripped so evasion attempts don't bypass patterns.
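A minimal sketch of that normalisation step (with a tiny illustrative homoglyph table; real confusables mappings are much larger):

```typescript
// Illustrative subset of a homoglyph table: Cyrillic/Greek lookalikes
// mapped back to ASCII so evasion attempts don't bypass patterns
const HOMOGLYPHS: Record<string, string> = {
  'а': 'a', 'е': 'e', 'о': 'o', 'р': 'p', 'с': 'c', // Cyrillic
  'ο': 'o', 'ν': 'v', 'α': 'a',                     // Greek
};

function normalize(text: string): string {
  return text
    .replace(/[\u200B-\u200D\uFEFF]/g, '')        // strip zero-width chars
    .replace(/./gu, (ch) => HOMOGLYPHS[ch] ?? ch) // map lookalikes to ASCII
    .toLowerCase();
}
```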

Throws ViolationError when an injection attempt is detected.

Nonsense — gibberish and encoded-payload detection

wrap(client, {
  nonsense: true,              // enabled by default
})

// or tune thresholds:
wrap(client, {
  nonsense: {
    entropyThreshold: 0.85,    // 0–1, higher = more permissive
    minLength: 20,
  },
})

Catches: high-entropy random strings, excessive character repetition, repeating multi-character patterns (common in base64-encoded payloads), invisible Unicode characters.
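The high-entropy check can be sketched as normalised Shannon entropy over characters (the formula and its relation to entropyThreshold are assumptions for illustration, not the library's exact algorithm):

```typescript
// Shannon entropy over characters, normalised by the maximum possible
// entropy for the string's alphabet size — yields a 0–1 score where
// higher means "more random-looking".
function normalizedEntropy(text: string): number {
  if (text.length < 2) return 0;
  const counts = new Map<string, number>();
  for (const ch of text) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let entropy = 0;
  for (const n of counts.values()) {
    const p = n / text.length;
    entropy -= p * Math.log2(p);
  }
  const maxEntropy = Math.log2(counts.size);
  return maxEntropy === 0 ? 0 : entropy / maxEntropy;
}
```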

Throws ViolationError when nonsense is detected.


Custom guards

import Anthropic from '@anthropic-ai/sdk'
import { wrap, ViolationError } from '@vhra/guard-rails'
import type { Guard } from '@vhra/guard-rails'

const toxicityGuard: Guard = {
  name: 'toxicity',
  phase: 'input',
  run(ctx) {
    if (isToxic(ctx.text)) { // isToxic: your own classifier, not part of the library
      throw new ViolationError({
        guard: 'toxicity',
        phase: 'input',
        action: 'block',
        reason: 'Toxic content detected',
      })
    }
    return { action: 'pass' }
  },
}

const client = wrap(new Anthropic(), { guards: [toxicityGuard] })

When you supply guards directly, the built-in guards are not added automatically — you have full control over the pipeline.


Individual guard factories

import {
  createBudgetGuard,
  createPIIGuard,
  createInjectionGuard,
  createNonsenseGuard,
} from '@vhra/guard-rails'

const guards = [
  createNonsenseGuard(),
  createInjectionGuard(),
  createPIIGuard({ mode: 'redact' }),
  createBudgetGuard({ maxTokens: 10_000 }),
]

const client = wrap(new OpenAI(), { guards })

Violation events

wrap(client, {
  onViolation(event) {
    // event.guard   — 'budget' | 'pii' | 'injection' | 'nonsense' | ...
    // event.phase   — 'input' | 'output'
    // event.action  — 'block' | 'redact'
    // event.reason  — human-readable description
    logger.warn(event)
  },
})

onViolation fires for both blocked and redacted events. Blocked calls also throw an error after the hook returns.


Error types

import { GuardRailsError, BudgetExceededError, ViolationError } from '@vhra/guard-rails'

try {
  await client.chat.completions.create({ ... })
} catch (err) {
  if (err instanceof BudgetExceededError) {
    console.error(`Over budget: ${err.used}/${err.limit} in window "${err.window}"`)
  } else if (err instanceof ViolationError) {
    console.error(`Blocked by guard "${err.event.guard}": ${err.event.reason}`)
  }
}

Default guard pipeline

| Guard     | Default | Phase |
|-----------|---------|-------|
| nonsense  | ON      | input |
| injection | ON      | input |
| pii       | OFF     | both  |
| budget    | OFF     | both  |

PII and budget are opt-in because they either modify content (PII redaction) or require configuration (budget).


License

MIT