
@airtasystems/ailp

v3.0.10 · Published

TypeScript client for the AILP (AI Log Protocol) LLM compliance risk assessment API.

Downloads: 2,369

Readme

@airtasystems/ailp

LLM log visibility and risk assessment — sends LLM inputs and outputs to the ailp.airtasystems.com dashboard for log and compliance visibility. Manage and view your AI logs in one easy-to-use dashboard.

This package is a thin fetch client for the AILP (AI Log Protocol) HTTP API. It runs in Node 18+, browsers, workers, and edge runtimes that provide native fetch. No extra HTTP dependencies.
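Because the client relies on native fetch, a quick pre-flight check can give a clearer failure message on runtimes older than Node 18. This is a hedged sketch; assertNativeFetch is an illustrative helper, not a package export:

```typescript
// Hedged sketch: fail with a clear message when the runtime lacks native
// fetch (Node < 18 without a polyfill). Not part of @airtasystems/ailp.
function assertNativeFetch(): void {
  const f = (globalThis as { fetch?: unknown }).fetch;
  if (typeof f !== "function") {
    throw new Error(
      "@airtasystems/ailp requires native fetch (Node 18+, browser, worker, or edge runtime)",
    );
  }
}
```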

Default server: The hosted AILP API is at https://ailp.airtasystems.com/ailp (no trailing slash). The client appends /health, /assess, and /assess/stream to that base. Override with baseUrl if you use another deployment (for example http://127.0.0.1:8000/ailp when your server is mounted there).

For normal product traffic, call AILP as fire-and-forget telemetry after your product LLM returns. Do not await AILP before sending the assistant response to the user unless your application intentionally gates on the risk verdict.

For integration patterns, env loading in Node, and production security, see Integrating the AIRTA AILP TypeScript client in the repository. The HTTP contract (headers, log entry shape, streaming events) is described in the AILP server README.

Stable 3.0.10 release. Follows semver — breaking changes will bump the major version.


Requirements

To call AILP you need two credentials, both issued at ailp.airtasystems.com:

| Credential | Header sent | Option / env |
|------------|-------------|--------------|
| AILP API key | Airta-Api-Key | apiKey (Node) · NEXT_PUBLIC_AILP_API_KEY / VITE_AILP_API_KEY (React) |
| Program ID | Airta-Program-Id | programId (Node) · NEXT_PUBLIC_AIRTASYSTEMS_PROGRAM_ID / VITE_AIRTASYSTEMS_PROGRAM_ID (React) |

Sign in, create a program, copy the API key and program ID, and keep both in server-side env vars. The server rejects requests that are missing either header with HTTP 400.

Provider keys (geminiApiKey / openaiApiKey) are needed when you set provider / expertProvider / judgeProvider explicitly or when your hosted API program requires client-supplied pipeline keys. The client sends both hosted headers (Gemini-Api-Key / OpenAI-Api-Key) and compatibility headers (X-Gemini-Api-Key / X-OpenAI-Api-Key) when those keys are provided.


Install

npm install @airtasystems/ailp

The package is ESM ("type": "module"). Import the core client from @airtasystems/ailp and the React helpers from @airtasystems/ailp/react (react is an optional peer dependency).


What you send and what you get

Send: With createAilp(), pass an array of { role, content } messages (the conversation you sent to your LLM) and the final assistant text (output) from that model. The client builds a flat AILP log entry and posts it directly to the API. The hosted server validates airta_import: 1 as a numeric import flag alongside the normal top-level log fields.

Receive: An AilpAssessResponse including:

| Field | Meaning |
|-------|---------|
| risk_level | Overall verdict (most severe finding across frameworks). |
| judge_reasoning | Judge synthesis string. |
| experts | One result per framework expert (framework, risk_level, reasoning). |
| frameworks | Resolved display names for the rubrics that ran. |
| assessment | Which vendor/models AILP used internally (expertProvider, judgeProvider, expertModel, judgeModel; legacy provider is "mixed" when sides differ). |
| log | Echo of the submitted entry. input.messages[*].content and output come back with PII/PHI placeholders substituted (see "Server-side redaction" below); all other fields echo verbatim. |
| assessmentMode | Server-normalized mode. Defaults to "response_safety". |
| requestRiskLevel | Optional request-security risk level. Present only when request security is enabled; it does not affect risk_level. |
| requestRiskReasoning | Optional explanation for the request-security side assessment. |
| requestSecurityExperts | Optional OWASP LLM, OWASP Agentic, and MITRE ATT&CK expert results for the request-security side assessment. |

The model you attach (via createAilp’s third argument or a full AilpLogEntry) describes the audited model. The models used inside AILP for experts and judge come from provider / expertProvider / judgeProvider and server configuration — not from your product model name.
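The flat log entry the client builds can be sketched as a plain object. This is illustrative only: buildEntry is a hypothetical helper (createAilp assembles the entry internally), and the field names are taken from the raw-body example later in this README:

```typescript
// Illustrative sketch of the flat AILP log entry; you normally never build
// this by hand, since createAilp does it for you.
type ChatMessage = { role: string; content: string };

function buildEntry(messages: ChatMessage[], output: string, model: string) {
  return {
    airta_import: 1, // numeric import flag the hosted server validates
    timestamp: new Date().toISOString(),
    input: { messages, endpoint: "chat-completion" },
    output, // final assistant text from the audited model
    modelTested: model,
    framework: ["eu-ai-act", "owasp-llm"],
  };
}
```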


Server-side redaction

AILP redacts PII/PHI in input.messages[*].content and output at ingress, before any LLM sees the data. Detected entities are replaced with numbered placeholders like <PERSON_1>, <EMAIL_ADDRESS_1>, <DATE_TIME_1>, <CUSTOMER_ID_1>, <MRN_1>. This means:

  • The log field on the response carries the redacted text, not what you sent. If you need the raw content for your own correlation or storage, keep your original strings — don't round-trip them through AILP.
  • The experts[*].reasoning and judge_reasoning strings only ever reference placeholders, so downstream UIs that render them are safe to display without further scrubbing.
  • Redaction is enabled by default on the hosted deployment and on any self-hosted AILP built from the current Dockerfile. Self-hosters can tune the entity set or swap the NER model via AILP_REDACT_* env vars on the server — see the AILP server README.

Nothing in the client needs to change for redaction to work — it happens server-side before the message hits any rubric.
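If you want to confirm that an echoed log was redacted, for example before deciding whether to keep your own raw copy, a placeholder check is enough. The pattern below is inferred from the examples above (<PERSON_1>, <EMAIL_ADDRESS_1>), not a documented contract:

```typescript
// Hedged sketch: detect numbered redaction placeholders such as <PERSON_1>
// in text echoed back by AILP. Pattern inferred from README examples.
const PLACEHOLDER = /<[A-Z][A-Z_]*_\d+>/;

function wasRedacted(text: string): boolean {
  return PLACEHOLDER.test(text);
}
```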


Quick start (createAilp)

Configure once, then call the returned function after each LLM response without blocking the user-facing path:

import { createAilp } from "@airtasystems/ailp";

const ailp = createAilp({
  apiKey: process.env.AILP_API_KEY!,
  programId: process.env.AIRTASYSTEMS_PROGRAM_ID!,
  frameworks: ["eu-ai-act", "owasp-llm"],
  // Some hosted programs require a client-supplied pipeline key even when
  // `provider` is omitted. Pass whichever keys are available.
  openaiApiKey: process.env.OPENAI_API_KEY,
  geminiApiKey: process.env.GEMINI_API_KEY,
});

void ailp(messages, assistantText, { model: "gpt-4o-mini" })
  .then((result) => {
    console.log("AILP risk:", result.risk_level);
  })
  .catch((err) => {
    console.warn("AILP assessment failed:", err);
  });

Optional third argument per call: { model?, endpoint? } to record which model produced the output and an optional endpoint label.

Fire-and-forget should still be observable: attach a .catch(...), use a short timeoutMs, and send failures to your logs or telemetry. If assessment completeness is a compliance requirement, enqueue the assessment into a durable background worker rather than relying on an in-process promise.
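One way to keep the fire-and-forget pattern tidy is a small wrapper that routes failures to your telemetry sink. fireAndForget is an illustrative helper under these assumptions, not a package export:

```typescript
// Hedged sketch: keep fire-and-forget AILP calls observable; failures go to
// your telemetry sink instead of becoming unhandled rejections.
type Telemetry = (event: string, detail: unknown) => void;

function fireAndForget<T>(promise: Promise<T>, telemetry: Telemetry): void {
  promise.catch((err) => telemetry("ailp.assess.failed", err));
}
```

Usage: `fireAndForget(ailp(messages, assistantText), sendToLogs);`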

To also classify the incoming request against security frameworks (OWASP LLM, OWASP Agentic, and MITRE ATT&CK) without changing the response-based risk_level, enable request security:

const ailp = createAilp({
  apiKey: process.env.AILP_API_KEY!,
  programId: process.env.AIRTASYSTEMS_PROGRAM_ID!,
  frameworks: ["owasp-llm", "owasp-agent", "mitre-attack"],
  assessmentMode: "response_safety_with_request_security",
});

void ailp(messages, assistantText)
  .then((result) => {
    console.log(result.risk_level); // response safety verdict
    console.log(result.requestRiskLevel); // independent request security verdict
  })
  .catch((err) => {
    console.warn("AILP assessment failed:", err);
  });

You can also use the compact alias security: true globally or per call:

void ailp(messages, assistantText, { security: true }).catch((err) => {
  console.warn("AILP assessment failed:", err);
});

Await the result only when the application needs to make a synchronous decision from the verdict, such as moderation gates, admin review tools, eval/CI runs, or compliance workflows:

const result = await ailp(messages, assistantText);

if (result.risk_level === "critical" || result.risk_level === "high") {
  await notifyComplianceTeam(result);
}

Omit provider if your AILP server is configured to choose the expert/judge pipeline itself. Still pass openaiApiKey and/or geminiApiKey when your hosted program requires client-supplied pipeline keys; the client will forward any non-empty keys even when provider is omitted. If you do set provider (or expertProvider / judgeProvider), supply the matching key so the client can send Gemini-Api-Key / OpenAI-Api-Key plus the X-*-Api-Key compatibility variants.

createAilp options

| Option | Purpose |
|--------|---------|
| apiKey | Required. AILP API key from ailp.airtasystems.com. Sent as Airta-Api-Key. |
| programId | Required. Program ID from ailp.airtasystems.com. Sent as Airta-Program-Id and echoed under airtasystems.programId. |
| frameworks | Required. One slug or an array (see table below). |
| baseUrl | API base URL, no trailing slash. Omit for AILP_DEFAULT_BASE_URL (https://ailp.airtasystems.com/ailp). |
| provider | "gemini" \| "openai" — same vendor for experts and judge when the split fields are omitted. Omit to let the server default. |
| expertProvider / judgeProvider | Split vendors; send both API keys when both sides need them. |
| geminiApiKey / openaiApiKey | Mapped to Gemini-Api-Key / OpenAI-Api-Key and X-Gemini-Api-Key / X-OpenAI-Api-Key. When all provider fields are omitted, non-empty keys may both be sent so mixed server configs still authenticate. |
| assessmentMode | Optional. Use "response_safety_with_request_security" to include OWASP request-risk fields while keeping risk_level response-based. |
| security | Optional boolean alias for request-security mode. |
| timeoutMs | Optional fetch timeout for assess calls. |

createAilp() throws synchronously if apiKey or programId is missing or empty — fail fast at boot rather than per request.

Export AILP_DEFAULT_BASE_URL when you need the constant in app code.
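To fail fast with a readable message before createAilp() throws, you can validate the required env vars at boot. requireEnv is an illustrative helper; its trimming behavior mirrors the server's handling of whitespace-only values described in Troubleshooting:

```typescript
// Hedged sketch: boot-time guard (not part of the package). Whitespace-only
// values count as missing, matching the client's trimming behavior.
function requireEnv(
  name: string,
  env: Record<string, string | undefined>,
): string {
  const value = env[name]?.trim();
  if (!value) {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}
```

Usage: `const apiKey = requireEnv("AILP_API_KEY", process.env);`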


Setup Guide for LLMs

When an LLM or coding agent is asked to integrate AILP into a JavaScript/TypeScript app, follow this checklist:

  1. Install the package:
npm install @airtasystems/ailp
  2. Use the hosted API default unless the user gives a different deployment:
https://ailp.airtasystems.com/ailp

Do not append /assess yourself when using createAilp() or AilpClient; the client appends /health, /assess, or /assess/stream.

  3. Require these app secrets:
AILP_API_KEY=...
AIRTASYSTEMS_PROGRAM_ID=...
OPENAI_API_KEY=...   # often required by hosted programs using OpenAI
GEMINI_API_KEY=...   # required when the pipeline uses Gemini

Do not assume provider keys are unnecessary just because provider is omitted. Some hosted AILP programs choose the provider server-side but still require the matching client-supplied key. If the server returns Missing OpenAI API key. Send it in the OpenAI-Api-Key request header., pass openaiApiKey: process.env.OPENAI_API_KEY. If it asks for Gemini, pass geminiApiKey: process.env.GEMINI_API_KEY.

  4. Prefer createAilp() for normal integrations:
import { createAilp } from "@airtasystems/ailp";

const ailp = createAilp({
  apiKey: process.env.AILP_API_KEY!,
  programId: process.env.AIRTASYSTEMS_PROGRAM_ID!,
  frameworks: ["eu-ai-act", "owasp-llm"],
  openaiApiKey: process.env.OPENAI_API_KEY,
  geminiApiKey: process.env.GEMINI_API_KEY,
});

void ailp(messages, assistantText, {
  model: "gpt-4o-mini",
  endpoint: "chat-completion",
}).catch((err) => {
  console.warn("AILP assessment failed:", err);
});
  5. Send the original LLM conversation as messages and the final assistant text as assistantText. Do not send the AILP assessment prompt, hidden system policy text, or provider SDK response object unless the application intentionally wants that audited.

  6. For raw fetch integrations, post a flat JSON body to POST /assess. Do not wrap it as { airta_import: entry }. The hosted API expects:

{
  airta_import: 1,
  timestamp: new Date().toISOString(),
  input: { messages, endpoint: "chat-completion" },
  output: assistantText,
  modelTested: "gpt-4o-mini",
  framework: ["eu-ai-act", "owasp-llm"],
  assessmentMode: "response_safety_with_request_security",
  airtasystems: {
    programId: process.env.AIRTASYSTEMS_PROGRAM_ID,
    frameworks: ["eu-ai-act", "owasp-llm"],
  },
}
  7. For raw fetch headers, include:

| Header | Value |
|--------|-------|
| Content-Type | application/json |
| Airta-Api-Key | AILP API key |
| Airta-Program-Id | AIRTA Systems program ID |
| OpenAI-Api-Key / Gemini-Api-Key | Provider key requested by the hosted AILP pipeline; include the available key even if provider selection is server-side |
| X-OpenAI-Api-Key / X-Gemini-Api-Key | Compatibility variant; safe to send with the non-X header |

  8. In browser apps, prefer a server route or proxy for production. NEXT_PUBLIC_* and VITE_* values are visible to users, so never expose production LLM provider keys in a public bundle.

  9. If the API returns HTTP 400, print or log the JSON response body. Validation errors usually name the missing header, missing field, or bad request shape.
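The raw-fetch body and headers above can be combined into one minimal sketch. buildAilpHeaders and rawAssess are illustrative names, not package exports; the header names and flat-body rule come from this README, and the fetch function is injected so you can wire in your own transport and error handling:

```typescript
// Hedged sketch of a raw-fetch integration against the hosted AILP API.
function buildAilpHeaders(
  env: Record<string, string | undefined>,
): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    "Airta-Api-Key": env.AILP_API_KEY ?? "",
    "Airta-Program-Id": env.AIRTASYSTEMS_PROGRAM_ID ?? "",
  };
  // Include whichever provider keys are available; the X-* variants are the
  // compatibility headers described above.
  if (env.OPENAI_API_KEY) {
    headers["OpenAI-Api-Key"] = env.OPENAI_API_KEY;
    headers["X-OpenAI-Api-Key"] = env.OPENAI_API_KEY;
  }
  if (env.GEMINI_API_KEY) {
    headers["Gemini-Api-Key"] = env.GEMINI_API_KEY;
    headers["X-Gemini-Api-Key"] = env.GEMINI_API_KEY;
  }
  return headers;
}

type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{
  ok: boolean;
  status: number;
  text(): Promise<string>;
  json(): Promise<unknown>;
}>;

async function rawAssess(
  entry: object,
  env: Record<string, string | undefined>,
  fetchFn: FetchLike,
): Promise<unknown> {
  const res = await fetchFn("https://ailp.airtasystems.com/ailp/assess", {
    method: "POST",
    headers: buildAilpHeaders(env),
    body: JSON.stringify(entry), // flat body, NOT { airta_import: entry }
  });
  if (!res.ok) throw new Error(`AILP ${res.status}: ${await res.text()}`);
  return res.json();
}
```

Usage on Node 18+: `await rawAssess(entry, process.env, fetch);`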

AilpClient (full control)

import { AilpClient, AILP_DEFAULT_BASE_URL } from "@airtasystems/ailp";

const client = new AilpClient({
  baseUrl: AILP_DEFAULT_BASE_URL,
  timeoutMs: 120_000,
  headers: { /* optional extra headers on every request */ },
});

const auth = {
  apiKey: process.env.AILP_API_KEY!,
  programId: process.env.AIRTASYSTEMS_PROGRAM_ID!,
  geminiApiKey,
  openaiApiKey,
};

await client.health(); // GET /health -> boolean
void client.assess(entry, auth).catch((err) => {
  console.warn("AILP assessment failed:", err);
});
await client.assessStream(entry, auth, { onEvent });
  • assess — sends the AilpLogEntry as a flat POST /assess body and returns the full AilpAssessResponse. Use it as non-blocking telemetry by default; await only for explicit gating or back-office workflows. Use airta_import: 1 for hosted import-mode requests.
  • assessStream — sends the AilpLogEntry as a flat POST /assess/stream body and reads NDJSON until done. Same final shape as assess. Use streaming for operator/admin progress UIs, not default production chat paths.
  • Non-2xx responses throw AilpError with status and body. A 400 mentioning Airta-Api-Key or Airta-Program-Id means the server rejected the request for missing auth.

The AilpAssessHeaders passed to assess / assessStream accepts apiKey, programId, geminiApiKey, and openaiApiKey. Use buildProviderAuthHeaders(entry, auth) if you build fetch yourself — it produces the correct Airta-*, provider-key, and X-*-Api-Key compatibility header set.

Proxied streams: readAilpAssessNdjsonStream(response.body, onEvent) parses POST /assess/stream from any fetch (for example your own Next.js route).

Streaming events (event field)

Same contract as the server:

| event | Purpose |
|-------|---------|
| meta | Framework list + assessment metadata. |
| cached | Result served from the server disk cache. |
| phase | experts or judge — UI hints during long LLM gaps. |
| expert | One expert payload (may include expert_id). |
| judge | Judge progress (risk_level, reasoning_preview). |
| request_security | Optional request-security side assessment progress (risk_level, reasoning_preview). |
| done | Final payload — same keys as assess. |
| error | Terminal failure (detail). |

Types: AilpAssessStreamEvent, AilpAssessStreamExpertPayload, AilpAssessStreamOptions.
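An onEvent handler for these events might collect expert results and hold the final payload. This is a hedged sketch: the event names follow the table above, but payload fields beyond event are treated loosely here rather than typed against the package's exported types:

```typescript
// Hedged sketch: collect expert events and the final done payload from an
// assessStream run. Only the `event` discriminator is assumed.
type StreamEvent = { event: string; [key: string]: unknown };

function makeCollector() {
  const experts: StreamEvent[] = [];
  let finalPayload: StreamEvent | undefined;

  function onEvent(ev: StreamEvent): void {
    switch (ev.event) {
      case "expert":
        experts.push(ev);
        break;
      case "done":
        finalPayload = ev;
        break;
      case "error":
        throw new Error(String(ev.detail ?? "AILP stream failed"));
      default:
        // meta / cached / phase / judge / request_security: progress only
        break;
    }
  }

  return { onEvent, experts, result: () => finalPayload };
}
```

Pass `collector.onEvent` as the onEvent option, then read `collector.result()` after the stream ends.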


Works with any LLM provider

Pass through messages and the string output from OpenAI, Anthropic, Gemini, or a custom stack:

const response = await openai.chat.completions.create({ model, messages });
const text = response.choices[0]?.message?.content ?? "";
void ailp(messages, text).catch((err) => {
  console.warn("AILP assessment failed:", err);
});

Fire-and-forget wrappers

Assessment runs after your LLM returns; failures are logged, not thrown (unless your LLM call fails).

OpenAI-shaped chat API:

import { wrapOpenAI, AilpClient, AILP_DEFAULT_BASE_URL } from "@airtasystems/ailp";

const client = new AilpClient({ baseUrl: AILP_DEFAULT_BASE_URL });

const response = await wrapOpenAI(
  (p) => openai.chat.completions.create(p),
  { model: "gpt-4o-mini", messages },
  {
    client,
    apiKey: process.env.AILP_API_KEY!,
    programId: process.env.AIRTASYSTEMS_PROGRAM_ID!,
    frameworks: ["eu-ai-act"],
    provider: "gemini",
    geminiApiKey: process.env.GEMINI_API_KEY,
    onAssess: (result) => console.log("Risk:", result.risk_level),
  },
);

Any async LLM function: wrapLlmCall(fn, params, { client, apiKey, programId, frameworks, extractOutput, messages, ... }). Both apiKey and programId are required.
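For wrapLlmCall, the extractOutput option pulls the assistant text out of whatever your provider returns. The sketch below assumes an Anthropic-shaped response (a content array of typed blocks); that shape is an assumption about your provider SDK, not something this package defines:

```typescript
// Hedged sketch: an extractOutput candidate for an Anthropic-shaped response.
type AnthropicishResponse = {
  content: Array<{ type: string; text?: string }>;
};

function extractOutput(response: AnthropicishResponse): string {
  return response.content
    .filter((block) => block.type === "text" && typeof block.text === "string")
    .map((block) => block.text)
    .join("");
}
```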


React (@airtasystems/ailp/react)

Keeps react out of the main bundle.

useAilp() — recommended

Memoized createAilp + assess / result / loading / error / reset. Reads Next.js NEXT_PUBLIC_* or Vite VITE_* when options are omitted. Throws synchronously on the first render if apiKey or programId is missing — wrap in an error boundary if you want a graceful fallback.

Use the React hook for panels or tools that intentionally display AILP progress/results. For production chat paths, prefer a server route or background worker that calls AILP as fire-and-forget telemetry.

import { useAilp } from "@airtasystems/ailp/react";

function Panel() {
  const { assess, result, loading, error } = useAilp();

  async function run(messages: { role: string; content: string }[], output: string) {
    await assess(messages, output);
  }

  return (
    <>
      {loading && <p>Assessing…</p>}
      {error && <p>{error.message}</p>}
      {result && <p>Risk: {result.risk_level}</p>}
    </>
  );
}

Each assess clears the previous result and error before the new request. reset() clears UI state without assessing.

Environment variables (browser)

Override any field by passing useAilp({ ... }) instead of relying on env.

| Variable | Role |
|----------|------|
| NEXT_PUBLIC_AILP_API_KEY / VITE_AILP_API_KEY | Required. AILP API key from ailp.airtasystems.com. |
| NEXT_PUBLIC_AIRTASYSTEMS_PROGRAM_ID / VITE_AIRTASYSTEMS_PROGRAM_ID | Required. Program ID from ailp.airtasystems.com. |
| NEXT_PUBLIC_AILP_BASE_URL / VITE_AILP_BASE_URL | API base (including any path prefix, e.g. /ailp). Omit for AILP_DEFAULT_BASE_URL (https://ailp.airtasystems.com/ailp). |
| NEXT_PUBLIC_AILP_PROVIDER / VITE_AILP_PROVIDER | Omit so the server picks the pipeline and keys. Set gemini or openai only when the browser must send provider API key headers. |
| NEXT_PUBLIC_GEMINI_API_KEY / VITE_GEMINI_API_KEY | Required when the provider (or split experts/judge) uses Gemini. |
| NEXT_PUBLIC_OPENAI_API_KEY / VITE_OPENAI_API_KEY | Required when the provider (or split experts/judge) uses OpenAI. |
| NEXT_PUBLIC_AILP_FRAMEWORKS / VITE_AILP_FRAMEWORKS | Comma-separated or JSON array; default eu-ai-act. |

Security: NEXT_PUBLIC_* / VITE_* values ship to the browser. Treat NEXT_PUBLIC_AILP_API_KEY as a scoped credential — use a program ID dedicated to browser traffic, and never put production LLM provider keys in public env vars. For sensitive deployments, call AILP from a server route or proxy and keep both the AILP key and LLM keys in private env vars.

useAssess(ailp)

If you already have an AilpFn from createAilp():

import { createAilp } from "@airtasystems/ailp";
import { useAssess } from "@airtasystems/ailp/react";

const ailp = createAilp({
  apiKey: process.env.AILP_API_KEY!,
  programId: process.env.AIRTASYSTEMS_PROGRAM_ID!,
  frameworks: ["eu-ai-act"],
  geminiApiKey: process.env.GEMINI_API_KEY,
});
const { assess, result, loading, error } = useAssess(ailp);

Framework slugs

Hyphen and underscore variants are accepted where listed.

| Slug(s) | Framework |
|---------|-----------|
| eu_ai_act / eu-ai-act | EU AI Act |
| oecd | OECD AI Principles (server default if none selected) |
| owasp_llm / owasp-llm | OWASP Top 10 for LLMs |
| owasp_agent / owasp-agent | OWASP Top 10 for Agentic Applications |
| nist_ai_rmf / nist-ai-rmf | NIST AI RMF |
| mitre_attack / mitre-attack | MITRE ATT&CK |
| pld | EU PLD (AI) |
| fria_core / fria-core | FRIA Core |
| fria_extended / fria-extended | FRIA Extended |

OWASP: LLM and agentic experts are separate; include both slugs in frameworks if you want both lenses in one request.


Risk levels (severity)

critical → high → medium → low → informational → compliant → indeterminate
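The severity ordering can be expressed as a comparator, for example to pick the most severe verdict across several assessments. mostSevere is an illustrative helper, not a package export:

```typescript
// Hedged sketch: risk levels in severity order (most severe first), plus a
// helper that returns the most severe level from a list.
const SEVERITY = [
  "critical", "high", "medium", "low",
  "informational", "compliant", "indeterminate",
] as const;

type RiskLevel = (typeof SEVERITY)[number];

function mostSevere(levels: RiskLevel[]): RiskLevel | undefined {
  return [...levels].sort(
    (a, b) => SEVERITY.indexOf(a) - SEVERITY.indexOf(b),
  )[0];
}
```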


Node: load .env before reading process.env

Node does not load .env automatically. Use dotenv (or your host’s secrets) before createAilp so AILP_API_KEY, AIRTASYSTEMS_PROGRAM_ID, and any provider keys are defined — otherwise createAilp() throws on start-up, or you may see 400 responses mentioning a missing Airta-Api-Key, Airta-Program-Id, OpenAI-Api-Key, or Gemini-Api-Key header.
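After loading .env (for example via `import "dotenv/config"`), it can help to report every missing variable at once rather than failing on the first. listMissingEnv is an illustrative helper under that assumption:

```typescript
// Hedged sketch: list all missing/blank env vars in one pass so start-up
// errors name everything that needs fixing.
function listMissingEnv(
  names: string[],
  env: Record<string, string | undefined>,
): string[] {
  return names.filter((name) => !env[name]?.trim());
}
```

Usage: `const missing = listMissingEnv(["AILP_API_KEY", "AIRTASYSTEMS_PROGRAM_ID"], process.env);`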


Troubleshooting

| Symptom | What to check |
|---------|---------------|
| createAilp throws at start-up | apiKey and programId are both required. Load env (e.g. dotenv) before createAilp(). |
| 400: Missing required header(s): Airta-Api-Key, Airta-Program-Id | Pass apiKey / programId (or the corresponding env vars); values are trimmed, so whitespace-only strings are treated as missing. |
| 400: Missing OpenAI API key / Missing Gemini API key | Pass openaiApiKey: process.env.OPENAI_API_KEY and/or geminiApiKey: process.env.GEMINI_API_KEY. Hosted programs may require these even when provider is omitted and provider selection is server-side. |
| 400: bad body shape or missing import flag | Send a flat body with airta_import: 1, not { airta_import: entry }. Include top-level timestamp, input, output, modelTested, framework, and airtasystems. |
| Wrong server | Set baseUrl (no trailing slash). |
| Timeouts | Do not await AILP on the product response path. Use fire-and-forget telemetry, increase timeoutMs only for awaited workflows, or use assessStream for operator/admin progressive UI. |
| AilpError | Inspect status and body; while testing raw scripts, print the response as JSON.stringify(result) so validation details are visible. |


License

MIT