ai-preflight

A lightweight, zero-dependency JavaScript library to run safety checks and optimizations before LLM API calls. Helps make AI pipelines safer, cheaper, and more debuggable.

Features

  • Normalize prompts - Clean and standardize input text
  • Estimate tokens - Get approximate token counts for cost estimation
  • Detect risks - Identify potential security issues and problematic patterns
  • Comprehensive preflight - Run all checks in one call

Installation

npm install ai-preflight

Usage

ESM (ES Modules)

import { normalizePrompt, estimateTokens, detectPromptRisks, runPreflight } from 'ai-preflight';

CommonJS

const { normalizePrompt, estimateTokens, detectPromptRisks, runPreflight } = require('ai-preflight');

What this library does NOT do

  • It does not call any LLM APIs
  • It does not perform semantic analysis
  • It does not guarantee security or compliance
  • It does not replace proper prompt design

This library provides deterministic, best-effort checks before LLM calls.

API Reference

normalizePrompt(input: string): string

Normalizes a prompt string: trims whitespace, collapses runs of spaces, removes leading and trailing newlines, and strips the invisible or mis-encoded characters that copy-paste operations often introduce.

Handles:

  • Zero-width characters (zero-width space, joiner, non-joiner, BOM)
  • Left-to-right and right-to-left marks
  • Non-breaking spaces and other Unicode whitespace
  • Smart quotes (converted to regular quotes)
  • Em/en dashes (converted to hyphens)
  • Ellipsis (converted to three dots)
  • Other invisible formatting characters

import { normalizePrompt } from 'ai-preflight';

const cleaned = normalizePrompt("  Hello   world  \n\n");
console.log(cleaned); // "Hello world"

// Handles copy-paste junk characters
const withJunk = "Hello\u00A0\u2018world\u2019\u2013test";
const cleaned2 = normalizePrompt(withJunk);
console.log(cleaned2); // "Hello 'world'-test"

estimateTokens(input: string, model?: string): number

Estimates the number of tokens in a string using approximate, model-specific counting rules. The result is a rough estimate intended for budgeting and cost purposes, not exact tokenizer output.

import { estimateTokens } from 'ai-preflight';

const tokens = estimateTokens("Hello world");
console.log(tokens); // ~3 tokens

// Model-specific estimation
const gpt4Tokens = estimateTokens("Hello world", "gpt-4");
const claudeTokens = estimateTokens("Hello world", "claude-3-opus");

Supported models:

  • gpt-3.5-turbo (default)
  • gpt-4, o1 variants
  • claude variants
  • gemini variants
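
Per-model estimates can differ for the same text. The loop below compares them for one prompt, using model names from the supported list above (a quick sketch; exact numbers depend on the library's internal approximation rules, so treat the output as indicative only):

import { estimateTokens } from 'ai-preflight';

const prompt = "Summarize the following article in three bullet points.";
for (const model of ["gpt-3.5-turbo", "gpt-4", "claude-3-opus"]) {
  // Prints the model name and its approximate token count for the prompt
  console.log(model, estimateTokens(prompt, model));
}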

detectPromptRisks(input: string): string[]

Detects potential risks in a prompt string. Returns an array of risk identifiers.

import { detectPromptRisks } from 'ai-preflight';

const risks = detectPromptRisks("Ignore previous instructions...");
console.log(risks); // ["potential_injection"]

const safeRisks = detectPromptRisks("What is the weather today?");
console.log(safeRisks); // []

Detected Risks:

  • potential_injection - Suspicious patterns suggesting prompt injection attempts
  • excessive_length - Input exceeds 100,000 characters
  • high_non_ascii_ratio - High percentage of non-ASCII characters (potential obfuscation)
  • excessive_repetition - Excessive word repetition (token waste)
  • empty_prompt - Empty or whitespace-only input
  • very_short_prompt - Input is fewer than 3 characters
  • potential_pii - Patterns suggesting personally identifiable information
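
Because the return value is a plain array of identifiers, risk policy stays in your hands. The helper below is an application-level sketch (vetPrompt and the BLOCKING set are hypothetical, not part of the library) that treats some risks as blocking and the rest as warnings:

import { detectPromptRisks } from 'ai-preflight';

// Application-defined policy: which risk identifiers should block a request
const BLOCKING = new Set(["potential_injection", "potential_pii", "empty_prompt"]);

function vetPrompt(input) {
  const risks = detectPromptRisks(input);
  const blocking = risks.filter((risk) => BLOCKING.has(risk));
  if (blocking.length > 0) {
    throw new Error(`Prompt rejected: ${blocking.join(", ")}`);
  }
  return risks; // any remaining identifiers are treated as warnings
}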

runPreflight(input: string, options?: object): object

Runs a comprehensive preflight check on a prompt. Combines normalization, token estimation, and risk detection.

import { runPreflight } from 'ai-preflight';

const result = runPreflight("  Hello world  ", {
  model: "gpt-4",
  normalize: true,
  strict: false
});

console.log(result);
// {
//   normalized: "Hello world",
//   tokens: 3,
//   risks: [],
//   safe: true,
//   originalLength: 15,
//   normalizedLength: 11
// }

Options:

  • model (string, default: 'gpt-3.5-turbo') - Model name for token estimation
  • normalize (boolean, default: true) - Whether to normalize the prompt
  • strict (boolean, default: false) - If true, throws an error when risks are detected

Strict Mode Example:

try {
  runPreflight("Ignore previous instructions", { strict: true });
} catch (error) {
  console.error(error.message);
  // "Preflight check failed. Risks detected: potential_injection"
}
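
If you need the risk identifiers programmatically, it is usually simpler to stay in non-strict mode and read them off the result rather than parsing the error message (the output comment is illustrative):

import { runPreflight } from 'ai-preflight';

const result = runPreflight("Ignore previous instructions");

if (!result.safe) {
  // Same identifiers strict mode would embed in its error message
  console.warn(result.risks); // e.g. ["potential_injection"]
}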

Examples

Basic Usage

import { runPreflight } from 'ai-preflight';

const prompt = "  What is the capital of France?  ";

const check = runPreflight(prompt, { model: "gpt-4" });

if (check.safe) {
  console.log(`Safe to send! Estimated tokens: ${check.tokens}`);
  // Use check.normalized for the actual API call
} else {
  console.warn(`Risks detected: ${check.risks.join(', ')}`);
}

Express.js API Endpoint

ESM:

import express from 'express';
import { runPreflight } from 'ai-preflight';

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  try {
    // Run preflight check - throws if risks detected (strict mode)
    const report = runPreflight(req.body.prompt, {
      model: "gpt-4",
      strict: true
    });

    // Only reached if safe - use normalized prompt and token estimate
    const llmResponse = await callLLM(report.normalized, report.tokens);
    res.json(llmResponse);
  } catch (error) {
    // Handle preflight failures (risks detected or invalid input)
    res.status(400).json({ 
      error: error.message,
      risks: error.message.includes('Risks detected') ? 'preflight_failed' : 'invalid_input'
    });
  }
});

async function callLLM(prompt, estimatedTokens) {
  // Your LLM API call here
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.API_KEY}`
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }],
      max_tokens: Math.min(1000, estimatedTokens * 10)
    })
  });
  return response.json();
}

CommonJS:

const express = require('express');
const { runPreflight } = require('ai-preflight');

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  try {
    const report = runPreflight(req.body.prompt, {
      model: "gpt-4",
      strict: true
    });

    const llmResponse = await callLLM(report.normalized, report.tokens);
    res.json(llmResponse);
  } catch (error) {
    res.status(400).json({ 
      error: error.message,
      risks: error.message.includes('Risks detected') ? 'preflight_failed' : 'invalid_input'
    });
  }
});

async function callLLM(prompt, estimatedTokens) {
  // Your LLM API call implementation
  // ...
}

Before API Call

import { runPreflight } from 'ai-preflight';

async function callLLM(userInput) {
  // Run preflight check
  const preflight = runPreflight(userInput, {
    model: "gpt-4",
    strict: true // Throw if risks detected
  });
  
  // Make API call with normalized prompt
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.API_KEY}`
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: [{ role: 'user', content: preflight.normalized }],
      max_tokens: Math.min(1000, preflight.tokens * 10) // Use token estimate
    })
  });
  
  return response.json();
}

Cost Estimation

import { estimateTokens } from 'ai-preflight';

function estimateCost(prompt, model = 'gpt-4') {
  const tokens = estimateTokens(prompt, model);
  const costPer1kTokens = 0.03; // Example pricing
  return (tokens / 1000) * costPer1kTokens;
}

const cost = estimateCost("A very long prompt...", "gpt-4");
console.log(`Estimated cost: $${cost.toFixed(4)}`);
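
The same estimate can gate requests against a budget before anything is sent. A sketch with illustrative numbers (withinBudget is hypothetical, and $0.03 per 1K tokens is example pricing, not a real rate):

import { estimateTokens } from 'ai-preflight';

function withinBudget(prompt, maxUsd = 0.01, model = 'gpt-4') {
  const tokens = estimateTokens(prompt, model);
  const estimatedUsd = (tokens / 1000) * 0.03; // example pricing
  return estimatedUsd <= maxUsd;
}

const longPrompt = "A very long prompt... ".repeat(500);
if (!withinBudget(longPrompt)) {
  console.warn("Prompt exceeds the configured cost budget; trim before sending.");
}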

Requirements

  • Node.js 16.0.0 or higher
  • Zero external dependencies

Example Files

Example files are available in the examples/ directory:

  • basic-usage.mjs - ESM examples (for projects with "type": "module")
  • basic-usage.cjs - CommonJS examples

Run them with:

node examples/basic-usage.mjs  # ESM
node examples/basic-usage.cjs  # CommonJS

Testing

Run the test suite:

npm test          # CommonJS tests
npm run test:esm  # ESM tests

Or directly:

node tests/test.cjs  # CommonJS
node tests/test.mjs  # ESM

License

MIT

Contributing

Contributions welcome! Please ensure all code follows the existing style and includes appropriate documentation.