npm package discovery and stats viewer.

Discover Tips

  • General search

    [free text search, go nuts!]

  • Package details

    pkg:[package-name]

  • User packages

    @[username]

Sponsor

Optimize Toolset

I’ve always been into building performant and accessible sites, but lately I’ve been taking it extremely seriously. So much so that I’ve been building a tool to help me optimize and monitor the sites that I build to make sure that I’m making an attempt to offer the best experience to those who visit them. If you’re into performant, accessible and SEO friendly sites, you might like it too! You can check it out at Optimize Toolset.

About

Hi, 👋, I’m Ryan Hefner  and I built this site for me, and you! The goal of this site was to provide an easy way for me to check the stats on my npm packages, both for prioritizing issues and updates, and to give me a little kick in the pants to keep up on stuff.

As I was building it, I realized that I was actually using the tool to build the tool, and figured I might as well put this out there and hopefully others will find it to be a fast and useful way to search and browse npm packages as I have.

If you’re interested in other things I’m working on, follow me on Twitter or check out the open source projects I’ve been publishing on GitHub.

I am also working on a Twitter bot for this site to tweet the most popular, newest, random packages from npm. Please follow that account now and it will start sending out packages soon–ish.

Open Software & Tools

This site wouldn’t be possible without the immense generosity and tireless efforts from the people who make contributions to the world and share their work via open source initiatives. Thank you 🙏

© 2025 – Pkg Stats / Ryan Hefner

llm-guard

v0.1.8

Published

A TypeScript library for validating and securing LLM prompts

Readme

LLM Guard

LLM Guard Logo

Secure your LLM prompts with confidence

A TypeScript library for validating and securing LLM prompts. This package provides various guards to protect against common LLM vulnerabilities and misuse.

npm version GitHub license GitHub stars GitHub issues GitHub pull requests

Features

  • Validate LLM prompts for various security concerns
  • Support for multiple validation rules:
    • PII detection
    • Jailbreak detection
    • Profanity filtering
    • Prompt injection detection
    • Relevance checking
    • Toxicity detection
  • Batch validation support
  • CLI interface
  • TypeScript support

Installation

npm install llm-guard

Usage

JavaScript/TypeScript

import { LLMGuard } from 'llm-guard';

const guard = new LLMGuard({
  pii: true,
  jailbreak: true,
  profanity: true,
  promptInjection: true,
  relevance: true,
  toxicity: true
});

// Single prompt validation
const result = await guard.validate('Your prompt here');
console.log(result);

// Batch validation
const batchResult = await guard.validateBatch([
  'First prompt',
  'Second prompt'
]);
console.log(batchResult);

CLI

# Basic usage
npx llm-guard "Your prompt here"

# With specific guards enabled
npx llm-guard --pii --jailbreak "Your prompt here"

# With a config file
npx llm-guard --config config.json "Your prompt here"

# Batch mode
npx llm-guard --batch '["First prompt", "Second prompt"]'

# Show help
npx llm-guard --help

Configuration

You can configure which validators to enable when creating the LLMGuard instance:

const guard = new LLMGuard({
  pii: true,              // Enable PII detection
  jailbreak: true,        // Enable jailbreak detection
  profanity: true,        // Enable profanity filtering
  promptInjection: true,  // Enable prompt injection detection
  relevance: true,        // Enable relevance checking
  toxicity: true,         // Enable toxicity detection
  customRules: {          // Add custom validation rules
    // Your custom rules here
  },
  relevanceOptions: {     // Configure relevance guard options
    minLength: 10,        // Minimum text length
    maxLength: 5000,      // Maximum text length
    minWords: 3,          // Minimum word count
    maxWords: 1000        // Maximum word count
  }
});

Available Guards

PII Guard

Detects personally identifiable information like emails, phone numbers, SSNs, credit card numbers, and IP addresses.

Profanity Guard

Filters profanity and offensive language, including common character substitutions (like using numbers for letters).

Jailbreak Guard

Detects attempts to bypass AI safety measures and ethical constraints, such as "ignore previous instructions" or "pretend you are".

Prompt Injection Guard

Identifies attempts to inject malicious instructions or override system prompts, including system prompt references and memory reset attempts.

Relevance Guard

Evaluates the relevance and quality of the prompt based on length, word count, filler words, and repetitive content.

Toxicity Guard

Detects toxic, harmful, or aggressive content, including hate speech, threats, and discriminatory language.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request on GitHub. We appreciate any help with:

  • Bug fixes
  • New features
  • Documentation improvements
  • Code quality enhancements
  • Test coverage
  • Performance optimizations

How to Contribute

  1. Fork the repository on GitHub
  2. Create a new branch for your feature or bugfix
  3. Make your changes
  4. Write or update tests as needed
  5. Ensure all tests pass
  6. Submit a Pull Request with a clear description of the changes

For more complex changes, please open an issue first to discuss the proposed changes.

Documentation

For more detailed documentation, visit our documentation site.