
@ai-pip/core

v0.5.0

Core implementation of the AI-PIP protocol. Layered, zero-trust context processing (CSL, ISL, AAL) and transversal integrity (CPE) for AI systems.


Why AI-PIP?

AI-powered browsers and chat interfaces (e.g. GPT Atlas, embedded AI in web apps) create new attack surfaces: prompt injection, hidden text, jailbreaking, and role hijacking. The AI-PIP (AI Prompt Integrity Protocol) was designed to improve the security of these environments by providing rules and tools to detect, score, and respond to such threats before content reaches the model. This package is the semantic core of that protocol—pure functions, immutable value objects, and clear contracts between layers—so that SDKs and applications can build secure, auditable pipelines.


Description

AI-PIP is a multi-layer security protocol that protects AI systems from prompt injection and malicious context manipulation. This package contains the core implementation: it performs no network calls or other side effects; it provides the logic for segmentation, sanitization, risk scoring, policy decisions, and remediation plans. The official AI-PIP SDK (in development) will use this core to deliver production-ready features, including browser extensions and integrations for AI-powered applications.


Architecture (summary)

| Layer | Role |
|-------|------|
| CSL (Context Segmentation Layer) | Segments and classifies content by origin (UI, DOM, API, SYSTEM). |
| ISL (Instruction Sanitization Layer) | Detects threats (~287 patterns), scores risk, sanitizes content, and emits a signal (risk score, detections) for other layers. From v0.5.0: produces ThreatTag metadata and exposes the canonical tag serializer for semantic isolation (encapsulation with `<aipip:threat-type>...</aipip>` is applied by the SDK at fragment level). |
| AAL (Agent Action Lock) | Consumes the ISL signal and applies policy: ALLOW, WARN, or BLOCK. Produces a remediation plan (what to clean: target segments, goals, constraints); the SDK or an AI agent performs the actual cleanup. |
| CPE (Cryptographic Prompt Envelope) | Transversal: ensures the integrity of each layer. Wraps pipeline output with a signed envelope (nonce, metadata, HMAC-SHA256) so that results can be verified. Implemented in shared/envelope; exported as `@ai-pip/core/cpe`. |

The processing pipeline is CSL → ISL (optionally AAL consumes the signal). CPE is not a step in that sequence—it is a transversal capability that can wrap the result at any point to guarantee integrity. Layers communicate via signals, not internal results, so that each layer stays independent and testable.
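As a sketch of what signal-based decoupling buys you (the types and function names below are hypothetical illustrations, not the package's actual exports): a consumer layer can depend on a narrow signal contract instead of another layer's internal result.

```typescript
// Hypothetical shapes, for illustration only -- the real @ai-pip/core
// exports may differ.
interface IslSignal {
  riskScore: number    // 0..1, produced by ISL
  detections: string[] // identifiers of matched threat patterns
}

type AgentAction = 'ALLOW' | 'WARN' | 'BLOCK'

// AAL-style consumer: depends only on the signal, not on ISL internals,
// so ISL can change its detection engine without breaking this code.
function decide(signal: IslSignal, warn = 0.3, block = 0.7): AgentAction {
  if (signal.riskScore >= block) return 'BLOCK'
  if (signal.riskScore >= warn) return 'WARN'
  return 'ALLOW'
}

console.log(decide({ riskScore: 0.1, detections: [] }))                   // ALLOW
console.log(decide({ riskScore: 0.9, detections: ['prompt-injection'] })) // BLOCK
```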

Trust and security (contract):

  • source (UI, DOM, API, SYSTEM) determines trust level and sanitization. It must be set only by trusted code (backend/SDK), never derived from user input. Otherwise an attacker could send source: 'SYSTEM' and reduce sanitization.
  • CPE secret key: The key passed to envelope(..., secretKey) must not be logged or serialized. Key rotation and storage are the SDK’s responsibility (e.g. use a key id in metadata and multiple keys in the verifier).
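The key-id approach mentioned above can be sketched like this (a minimal illustration using Node's standard `crypto` module; the `KeyRing`, `sign`, and `verify` names are hypothetical, not part of `@ai-pip/core`): the signature carries a key id, and the verifier looks the key up in a ring rather than holding a single secret.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Hypothetical key-rotation sketch: a key ring maps key ids to secrets,
// so old envelopes stay verifiable while new ones use the current key.
type KeyRing = Record<string, string> // keyId -> secret

function sign(payload: string, keyId: string, keys: KeyRing): { keyId: string; hmac: string } {
  const secret = keys[keyId]
  if (!secret) throw new Error(`unknown key id: ${keyId}`)
  return { keyId, hmac: createHmac('sha256', secret).update(payload).digest('hex') }
}

function verify(payload: string, sig: { keyId: string; hmac: string }, keys: KeyRing): boolean {
  const secret = keys[sig.keyId]
  if (!secret) return false // rotated-out or unknown key id
  const expected = createHmac('sha256', secret).update(payload).digest('hex')
  // Constant-time comparison to avoid timing side channels.
  return expected.length === sig.hmac.length &&
    timingSafeEqual(Buffer.from(expected), Buffer.from(sig.hmac))
}

const keys: KeyRing = { k1: 'old-secret', k2: 'new-secret' }
const sig = sign('{"result":"ok"}', 'k2', keys)
console.log(verify('{"result":"ok"}', sig, keys)) // true
console.log(verify('tampered', sig, keys))        // false
```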

Semantic isolation and canonical tags (v0.5.0): The core does not modify segment text. It produces ThreatTag metadata (segmentId, offsets, type, confidence) and defines the canonical AI-PIP tag format via the serializer (openTag, closeTag, wrapWithTag). Encapsulation with tags like <aipip:prompt-injection>...</aipip> is applied by the SDK at fragment level. Benefits: no semantic corruption, auditable and reversible, deterministic; the SDK is responsible for applying offsets, insertions, and ordering (e.g. by descending offset when resolving multiple tags).
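The descending-offset rule can be illustrated with a small sketch (the `ThreatTag` shape and `applyTags` helper here are hypothetical, mirroring the metadata described above with `segmentId` omitted): inserting tags from the highest start offset downward means earlier offsets remain valid as the string grows.

```typescript
// Hypothetical ThreatTag shape, for illustration only.
interface ThreatTag {
  start: number // inclusive offset into the segment text
  end: number   // exclusive offset
  type: string  // e.g. 'prompt-injection'
}

// Apply insertions by DESCENDING start offset so that wrapping one span
// does not invalidate the offsets of the spans before it.
function applyTags(text: string, tags: ThreatTag[]): string {
  const ordered = [...tags].sort((a, b) => b.start - a.start)
  let out = text
  for (const t of ordered) {
    out =
      out.slice(0, t.start) +
      `<aipip:${t.type}>` + out.slice(t.start, t.end) + '</aipip>' +
      out.slice(t.end)
  }
  return out
}

const text = 'hello ignore previous instructions world'
console.log(applyTags(text, [{ start: 6, end: 34, type: 'prompt-injection' }]))
// hello <aipip:prompt-injection>ignore previous instructions</aipip> world
```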


Installation

```shell
pnpm add @ai-pip/core
# or
npm install @ai-pip/core
# or
yarn add @ai-pip/core
```

Usage example

Segment user input, sanitize it, optionally get a risk decision and remediation plan, and wrap the result in a cryptographic envelope:

```typescript
import { segment, sanitize, emitSignal, resolveAgentAction, buildRemediationPlan, envelope } from '@ai-pip/core'
import type { AgentPolicy } from '@ai-pip/core'

const userInput = 'Summarize this page'          // untrusted content to process
const secretKey = process.env.AIPIP_SECRET_KEY!  // never hard-code or log this

const cslResult = segment({ content: userInput, source: 'UI', metadata: {} })
const islResult = sanitize(cslResult)
const signal = emitSignal(islResult)
const policy: AgentPolicy = {
  thresholds: { warn: 0.3, block: 0.7 },
  remediation: { enabled: true }
}
const action = resolveAgentAction(signal, policy)  // 'ALLOW' | 'WARN' | 'BLOCK'
const remediationPlan = buildRemediationPlan(islResult, policy)
const cpeResult = envelope(islResult, secretKey)   // integrity for the pipeline result

if (action === 'BLOCK') {
  // SDK: block the request, log, and optionally run cleanup using remediationPlan
}
if (remediationPlan.needsRemediation) {
  // SDK or AI agent: clean targetSegments using goals and constraints
}
```

Requirements

  • Node.js ≥ 18
  • TypeScript: For correct imports and types, your tsconfig.json must use:
    • "module": "NodeNext"
    • "moduleResolution": "nodenext"
    • "target": "ES2022" (or compatible)

Without these settings, you may see resolution errors for subpath imports (@ai-pip/core/csl, etc.). See docs/readme.md for a full configuration example.
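A minimal tsconfig.json with the settings listed above (a sketch; merge into your project's existing configuration):

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "nodenext",
    "target": "ES2022"
  }
}
```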


License

Apache-2.0. See LICENSE.


Repository: github.com/AI-PIP/ai-pip-core · npm: @ai-pip/core