@llm-dev-ops/shield-sdk

Enterprise-grade SDK for securing Large Language Model applications.

Overview

Shield SDK provides comprehensive security scanning for LLM applications, protecting against:

  • Prompt Injection - Detects attempts to manipulate LLM behavior
  • Data Leakage - Prevents exposure of secrets, API keys, and credentials
  • PII Exposure - Identifies personally identifiable information
  • Toxic Content - Filters harmful, offensive, or inappropriate content

Installation

npm install @llm-dev-ops/shield-sdk

Quick Start

import { Shield } from '@llm-dev-ops/shield-sdk';

// Create a shield with standard security level
const shield = Shield.standard();

// Scan a prompt before sending to LLM
const result = await shield.scanPrompt("Hello, how are you?");

if (result.isValid) {
  console.log("Prompt is safe to send to LLM");
} else {
  console.log("Security risk detected:", result.riskFactors);
}

Security Presets

Strict

Maximum security for regulated industries (banking, healthcare):

const shield = Shield.strict();
  • All scanners enabled
  • Low risk tolerance (short-circuit at 0.7)
  • Sequential execution for deterministic results
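
For comparison, a hand-built configuration in the spirit of the strict preset might look like the sketch below, using the builder API described under Custom Configuration; the exact scanner set the preset enables is an assumption, not the documented internals.

import {
  Shield,
  PromptInjectionScanner,
  SecretsScanner,
  PIIScanner,
  ToxicityScanner
} from '@llm-dev-ops/shield-sdk';

// Approximation of a strict-style shield (assumed scanner set)
const strictLike = Shield.builder()
  .addInputScanner(new PromptInjectionScanner())
  .addInputScanner(new SecretsScanner())
  .addInputScanner(new PIIScanner())
  .addInputScanner(new ToxicityScanner())
  .addOutputScanner(new PIIScanner())
  .withShortCircuit(0.7)          // low risk tolerance, as noted above
  .withParallelExecution(false)   // sequential for deterministic results
  .build();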

Standard (Recommended)

Balanced security for general-purpose applications:

const shield = Shield.standard();
  • Core scanners (secrets, PII, prompt injection)
  • Moderate risk tolerance (short-circuit at 0.9)
  • Parallel execution enabled

Permissive

Minimal security for development/testing:

const shield = Shield.permissive();
  • Essential scanners only
  • High risk tolerance
  • Fast execution

Custom Configuration

Use the builder pattern for fine-grained control:

import {
  Shield,
  SecretsScanner,
  PIIScanner,
  PromptInjectionScanner
} from '@llm-dev-ops/shield-sdk';

const shield = Shield.builder()
  .addInputScanner(new PromptInjectionScanner())
  .addInputScanner(new SecretsScanner({ secretTypes: ['aws', 'github'] }))
  .addInputScanner(new PIIScanner({ piiTypes: ['email', 'ssn'] }))
  .addOutputScanner(new PIIScanner())
  .withShortCircuit(0.8)
  .withParallelExecution(true)
  .withMaxConcurrent(4)
  .build();

Scanning Methods

Scan Prompts (Before LLM)

const result = await shield.scanPrompt("User input here");

if (result.isValid) {
  // Safe to send to LLM
  const llmResponse = await callLLM(result.sanitizedText);
} else {
  // Handle security risk
  for (const factor of result.riskFactors) {
    console.log(`Risk: ${factor.description} (${factor.severity})`);
  }
}

Scan Outputs (After LLM)

const result = await shield.scanOutput(llmResponse);

if (result.isValid) {
  // Safe to show to user
  displayToUser(result.sanitizedText);
} else {
  displayError("Response filtered for security reasons");
}

Scan Both Prompt and Output

const { promptResult, outputResult } = await shield.scanPromptAndOutput(
  userInput,
  llmResponse
);
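
Both results share the ScanResult shape documented below, so they can be checked independently; a minimal sketch:

if (!promptResult.isValid) {
  console.log('Prompt blocked:', promptResult.riskFactors);
} else if (!outputResult.isValid) {
  console.log('Output filtered:', outputResult.riskFactors);
} else {
  displayToUser(outputResult.sanitizedText);
}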

Batch Scanning

const prompts = ["Hello", "How are you?", "What's the weather?"];
const results = await shield.scanBatch(prompts);

prompts.forEach((prompt, i) => {
  console.log(`${prompt}: ${results[i].isValid ? 'safe' : 'risky'}`);
});

Available Scanners

PromptInjectionScanner

Detects attempts to manipulate LLM behavior:

import { PromptInjectionScanner } from '@llm-dev-ops/shield-sdk';

const scanner = new PromptInjectionScanner({
  customPatterns: [/my-custom-pattern/i],
  detectJailbreaks: true,
  detectRolePlay: true,
});

Detects:

  • Instruction override attempts ("ignore previous instructions")
  • Role manipulation ("pretend to be", "act as")
  • System prompt attacks
  • Jailbreak patterns (DAN mode, etc.)
  • Delimiter injection

SecretsScanner

Detects 40+ types of credentials:

import { SecretsScanner } from '@llm-dev-ops/shield-sdk';

const scanner = new SecretsScanner({
  secretTypes: ['aws', 'github', 'stripe', 'openai'],
  redact: true,
});

Detects:

  • AWS Access Keys and Secrets
  • GitHub Tokens (PAT, OAuth, App)
  • Stripe API Keys
  • OpenAI / Anthropic API Keys
  • Slack Tokens and Webhooks
  • Google API Keys
  • Private Keys (RSA, EC, PGP)
  • JWT Tokens
  • Generic API keys and passwords
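
With redact enabled, matches are removed from sanitizedText. A minimal sketch of that flow, built only from the APIs shown elsewhere in this README (the exact redaction placeholder is an assumption):

import { Shield, SecretsScanner } from '@llm-dev-ops/shield-sdk';

// Shield that only checks for leaked credentials and redacts them
const secretsShield = Shield.builder()
  .addInputScanner(new SecretsScanner({ redact: true }))
  .build();

const result = await secretsShield.scanPrompt(
  'Deploy using key AKIAIOSFODNN7EXAMPLE'
);

console.log(result.sanitizedText); // secret replaced, e.g. "Deploy using key [REDACTED]"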

PIIScanner

Detects personally identifiable information:

import { PIIScanner } from '@llm-dev-ops/shield-sdk';

const scanner = new PIIScanner({
  piiTypes: ['email', 'phone', 'ssn', 'credit-card'],
  redact: true,
});

Detects:

  • Email addresses
  • Phone numbers (US, UK, international)
  • Social Security Numbers
  • Credit Card numbers (with Luhn validation)
  • IP addresses
  • Passport numbers
  • Driver's license numbers

ToxicityScanner

Detects harmful content:

import { ToxicityScanner } from '@llm-dev-ops/shield-sdk';

const scanner = new ToxicityScanner({
  categories: ['violence', 'hate', 'harassment', 'self-harm'],
  sensitivity: 0.5,
  customKeywords: ['banned-word'],
  allowedKeywords: ['exception-word'],
});

Detects:

  • Violence-related content
  • Hate speech
  • Harassment
  • Self-harm references
  • Sexual content
  • Profanity
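
Toxicity checks are often most useful on the model's output; a sketch of wiring the scanner as an output scanner via the builder:

import { Shield, ToxicityScanner } from '@llm-dev-ops/shield-sdk';

const toxicityShield = Shield.builder()
  .addOutputScanner(new ToxicityScanner({ sensitivity: 0.5 }))
  .build();

const outputResult = await toxicityShield.scanOutput(llmResponse);

if (!outputResult.isValid) {
  displayError('Response filtered for security reasons');
}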

Result Structure

interface ScanResult {
  isValid: boolean;        // Whether scan passed
  riskScore: number;       // 0.0-1.0
  sanitizedText: string;   // Sanitized input
  entities: Entity[];      // Detected entities
  riskFactors: RiskFactor[];
  severity: Severity;      // 'none' | 'low' | 'medium' | 'high' | 'critical'
  metadata: Record<string, string>;
  durationMs: number;      // Scan duration
}

interface Entity {
  entityType: string;      // 'email', 'ssn', 'api_key', etc.
  text: string;
  start: number;
  end: number;
  confidence: number;      // 0.0-1.0
}
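
A short sketch of consuming these fields after a scan (scanner choice and input are arbitrary):

const result = await shield.scanPrompt(userInput);

console.log(`Risk score: ${result.riskScore}, severity: ${result.severity}`);
console.log(`Scanned in ${result.durationMs}ms`);

// Inspect what was detected and where it occurs in the text
for (const entity of result.entities) {
  console.log(
    `${entity.entityType} at [${entity.start}, ${entity.end}) ` +
    `confidence ${entity.confidence}: ${entity.text}`
  );
}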

LangChain Integration

import { Shield } from '@llm-dev-ops/shield-sdk';
import { ChatOpenAI } from '@langchain/openai';

const shield = Shield.standard();
const llm = new ChatOpenAI();

async function safeChat(userInput: string) {
  // Scan input
  const inputResult = await shield.scanPrompt(userInput);
  if (!inputResult.isValid) {
    throw new Error('Unsafe input detected');
  }

  // Call LLM
  const response = await llm.invoke(inputResult.sanitizedText);

  // Scan output
  const outputResult = await shield.scanOutput(response.content);
  if (!outputResult.isValid) {
    throw new Error('Unsafe output detected');
  }

  return outputResult.sanitizedText;
}

OpenAI Integration

import { Shield } from '@llm-dev-ops/shield-sdk';
import OpenAI from 'openai';

const shield = Shield.standard();
const openai = new OpenAI();

async function safeChatCompletion(userMessage: string) {
  const inputResult = await shield.scanPrompt(userMessage);
  if (!inputResult.isValid) {
    return { error: 'Input blocked', riskFactors: inputResult.riskFactors };
  }

  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: inputResult.sanitizedText }],
  });

  const response = completion.choices[0].message.content ?? '';
  const outputResult = await shield.scanOutput(response);

  return {
    response: outputResult.sanitizedText,
    inputScan: inputResult,
    outputScan: outputResult,
  };
}

Performance

  • Sub-millisecond scanning for most inputs
  • Parallel execution support
  • Short-circuit evaluation for early termination
  • Batch processing for high-throughput scenarios
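
As a rough sketch, those options can be combined for high-throughput scanning; incomingPrompts is a placeholder array of strings and the concurrency value is arbitrary:

import { Shield, SecretsScanner, PIIScanner } from '@llm-dev-ops/shield-sdk';

// Parallel scanners with short-circuiting, applied to a batch of prompts
const fastShield = Shield.builder()
  .addInputScanner(new SecretsScanner())
  .addInputScanner(new PIIScanner())
  .withShortCircuit(0.9)        // stop early once the risk score is high enough
  .withParallelExecution(true)
  .withMaxConcurrent(8)
  .build();

const results = await fastShield.scanBatch(incomingPrompts);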

License

Apache-2.0