
@openai/guardrails v0.2.1

OpenAI Guardrails: A TypeScript framework for building safe and reliable AI systems

Downloads: 23,919

OpenAI Guardrails: TypeScript (Preview)

This is the TypeScript version of OpenAI Guardrails, a package for adding configurable safety and compliance guardrails to LLM applications. It provides a drop-in wrapper for OpenAI's TypeScript / JavaScript client, enabling automatic input/output validation and moderation using a wide range of guardrails.

Most users can simply follow the guided configuration and installation instructions at guardrails.openai.com.

[Screenshot: OpenAI Guardrails configuration]

Installation

Install the package from npm:

npm install @openai/guardrails

Usage

Follow the configuration and installation instructions at guardrails.openai.com.

Local Development

Clone the repository and install locally:

# Clone the repository
git clone https://github.com/openai/openai-guardrails-js.git
cd openai-guardrails-js

# Install dependencies
npm install

# Build the package
npm run build

Integration Details

Drop-in OpenAI Replacement

The easiest way to use Guardrails TypeScript is as a drop-in replacement for the OpenAI client:

import { GuardrailsOpenAI } from '@openai/guardrails';

async function main() {
  // Use GuardrailsOpenAI instead of OpenAI
  const client = await GuardrailsOpenAI.create({
    version: 1,
    output: {
      version: 1,
      guardrails: [{ name: 'Moderation', config: { categories: ['hate', 'violence'] } }],
    },
  });

  try {
    const response = await client.responses.create({
      model: 'gpt-5',
      input: 'Hello world',
    });

    // Access OpenAI response directly
    console.log(response.output_text);
  } catch (error: any) {
    if (error.constructor.name === 'GuardrailTripwireTriggered') {
      console.log(`Guardrail triggered: ${error.guardrailResult.info}`);
    }
  }
}

main();

Agents SDK Integration

import { GuardrailAgent } from '@openai/guardrails';
import { run } from '@openai/agents';

// Create agent with guardrails automatically configured
const agent = new GuardrailAgent({
  config: {
    version: 1,
    output: {
      version: 1,
      guardrails: [{ name: 'Moderation', config: { categories: ['hate', 'violence'] } }],
    },
  },
  name: 'Customer support agent',
  instructions: 'You are a helpful customer support agent.',
});

// Use exactly like a regular Agent
const result = await run(agent, 'Hello, can you help me?');

Evaluation Framework

The evaluation framework allows you to test guardrail performance on datasets and measure metrics like precision, recall, and F1 scores.
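As a rough sketch of what those metrics mean here, precision, recall, and F1 can be computed from expected vs. observed guardrail triggers as below. These are the standard definitions, not the package's internal implementation:

```typescript
// Compute precision/recall/F1 over per-sample guardrail trigger outcomes.
// expected[i] = whether the guardrail should have triggered on sample i;
// observed[i] = whether it actually did.
function f1Metrics(expected: boolean[], observed: boolean[]) {
  let tp = 0, fp = 0, fn = 0;
  for (let i = 0; i < expected.length; i++) {
    if (observed[i] && expected[i]) tp++;        // correctly triggered
    else if (observed[i] && !expected[i]) fp++;  // false alarm
    else if (!observed[i] && expected[i]) fn++;  // missed trigger
  }
  const precision = tp / (tp + fp);
  const recall = tp / (tp + fn);
  const f1 = (2 * precision * recall) / (precision + recall);
  return { precision, recall, f1 };
}

const m = f1Metrics([true, true, false, false], [true, false, true, false]);
console.log(m.precision, m.recall, m.f1); // 0.5 0.5 0.5
```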

Running Evaluations

Using the CLI:

npm run build
npm run eval -- --config-path src/evals/sample_eval_data/nsfw_config.json --dataset-path src/evals/sample_eval_data/nsfw_eval.jsonl

Dataset Format

Datasets must be in JSONL format, with each line containing a JSON object:

{
  "id": "sample_1",
  "data": "Text to evaluate",
  "expectedTriggers": {
    "guardrail_name_1": true,
    "guardrail_name_2": false
  }
}
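As an illustration, a dataset line in this format can be generated programmatically. The field names (id, data, expectedTriggers) come from the format above; the guardrail names used here are placeholders:

```typescript
// Sketch: serializing one evaluation case as a JSONL line.
interface EvalCase {
  id: string;
  data: string;
  expectedTriggers: Record<string, boolean>;
}

function toJsonlLine(c: EvalCase): string {
  // JSONL = one complete JSON object per line, no embedded newlines.
  return JSON.stringify(c);
}

const line = toJsonlLine({
  id: 'sample_1',
  data: 'Text to evaluate',
  expectedTriggers: { guardrail_name_1: true, guardrail_name_2: false },
});
console.log(line);
```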

Programmatic Usage

import { GuardrailEval } from '@openai/guardrails';

// Note: `eval` is a reserved word in strict-mode modules, so the
// instance needs a different name.
const evaluation = new GuardrailEval(
  'configs/my_guardrails.json',
  'data/demo_data.jsonl',
  32, // batch size
  'results', // output directory
  false // multi-turn mode (set to true to evaluate conversation-aware guardrails incrementally)
);

await evaluation.run('Evaluating my dataset');

Project Structure

  • src/ - TypeScript source code
  • dist/ - Compiled JavaScript output
  • src/checks/ - Built-in guardrail checks
  • src/evals/ - Evaluation framework
  • examples/ - Example usage and sample data

Examples

The package includes comprehensive examples in the examples/ directory:

  • agents_sdk.ts: Agents SDK integration with GuardrailAgent
  • hello_world.ts: Basic chatbot with guardrails using GuardrailsOpenAI
  • azure_example.ts: Azure OpenAI integration example
  • local_model.ts: Using local models with guardrails
  • streaming.ts: Streaming responses with guardrails
  • suppress_tripwire.ts: Handling guardrail violations gracefully

Running Examples

Prerequisites

Before running examples, you need to build the package:

# Install dependencies (if not already done)
npm install

# Build the TypeScript code
npm run build

Running Individual Examples

Using tsx (Recommended)

npx tsx examples/basic/hello_world.ts
npx tsx examples/basic/streaming.ts
npx tsx examples/basic/agents_sdk.ts

Available Guardrails

The TypeScript implementation includes the following built-in guardrails:

  • Moderation: Content moderation using OpenAI's moderation API
  • URL Filter: URL filtering and domain allowlist/blocklist
  • Contains PII: Personally Identifiable Information detection
  • Hallucination Detection: Detects hallucinated content using vector stores
  • Jailbreak: Detects jailbreak attempts
  • Off Topic Prompts: Ensures responses stay within business scope
  • Custom Prompt Check: Custom LLM-based guardrails
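A pipeline config can enable several of these guardrails at once. Only the Moderation config shape appears in this README; the `config` payloads for the other guardrails below are illustrative assumptions, so check guardrails.openai.com for the actual schema:

```typescript
// Hedged sketch of a multi-guardrail output-stage config.
const pipelineConfig = {
  version: 1,
  output: {
    version: 1,
    guardrails: [
      // Shape taken from the examples in this README:
      { name: 'Moderation', config: { categories: ['hate', 'violence'] } },
      // Assumed shapes -- verify the real keys in the configuration wizard:
      { name: 'URL Filter', config: { allowedDomains: ['example.com'] } },
      { name: 'Contains PII', config: {} },
    ],
  },
};

console.log(pipelineConfig.output.guardrails.map((g) => g.name).join(', '));
```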

License

MIT License - see LICENSE file for details.

Disclaimers

Please note that Guardrails may use Third-Party Services such as the Presidio open-source framework, which are subject to their own terms and conditions and are not developed or verified by OpenAI. For more information on configuring guardrails, please visit: guardrails.openai.com

Developers are responsible for implementing appropriate safeguards to prevent storage or misuse of sensitive or prohibited content (including but not limited to personal data, child sexual abuse material, or other illegal content). OpenAI disclaims liability for any logging or retention of such content by developers. Developers must ensure their systems comply with all applicable data protection and content safety laws, and should avoid persisting any blocked content generated or intercepted by Guardrails. Guardrails calls paid OpenAI APIs, and developers are responsible for associated charges.