
governed

v0.1.1

AI Governance SDK for Self-Hosted LLMs. Every output audited. Every tool call policy-gated. Every claim tracked.

npm install governed

Quick Start

Prerequisites: Node.js >= 22, and Ollama running with a Gemma 4 model pulled:

ollama pull gemma4:e4b
ollama serve  # if not already running

import { Governed } from 'governed';

const gg = await Governed.create({
  provider: { type: 'ollama', endpoint: 'http://localhost:11434', model: 'gemma4:e4b' },
  governance: { storagePath: './governed-data' },
});

const response = await gg.ask('What is the capital of France?');

console.log(response.text);        // "Paris"
console.log(response.confidence);  // 0.85
console.log(response.audit);       // Full governance trail

Why Governance?

  • Every output is audited. Chat requests, tool calls, policy evaluations, confidence scores -- all recorded in an immutable audit trail.
  • Every tool call is policy-gated. Define rules that block or constrain tool execution. A blocked tool never runs.
  • Every claim is tracked. Tool results and model outputs become knowledge claims -- factual assertions extracted from model responses, structured as subject-predicate-object triples with confidence scores and provenance tracking.
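To illustrate the triple structure, here is a sketch of what an extracted claim might look like (hypothetical field names for illustration; the SDK's actual ClaimRecord type is documented in the Types section):

```typescript
// Sketch of a subject-predicate-object knowledge claim with
// confidence and provenance (illustrative shape, not the SDK's exact type).
interface Claim {
  subject: string;
  predicate: string;
  object: string;
  confidence: number;  // [0, 1]
  source: 'tool_result' | 'model_output';
}

// "Paris is the capital of France", extracted from a model response:
const claim: Claim = {
  subject: 'France',
  predicate: 'has_capital',
  object: 'Paris',
  confidence: 0.85,
  source: 'model_output',
};
```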

Features

| Feature | Description |
|---------|-------------|
| Policy Gates | Block tools, constrain parameters, filter output text |
| Audit Trails | Immutable, append-only record of every governance decision |
| Claim Extraction | Automatic knowledge claims from tool results and responses |
| Confidence Scoring | Heuristic confidence on every response |
| Escalation | Route low-confidence responses to Claude for verification |
| Soft Failure | SDK errors never crash your app -- errors land in the response |

Provider Support

| Provider | Type | Status |
|----------|------|--------|
| Ollama | ollama | Supported |
| llama.cpp | llama-cpp | Supported |
| OpenAI-compatible (vLLM, TGI, etc.) | openai-compat | Supported |

API Reference

Governed.create(config)

Create a governed instance. Validates config, checks provider health, initializes governance.

const gg = await Governed.create({
  provider: {
    type: 'ollama',
    endpoint: 'http://localhost:11434',
    model: 'gemma4:e4b',
    timeoutMs: 30_000,
  },
  governance: {
    storagePath: './governed-data',
    autoExtractClaims: true,
    maxToolCallRounds: 5,
    defaultPolicies: [],
  },
  // Optional: escalation to a stronger model for low-confidence responses
  escalation: {
    provider: { type: 'ollama', endpoint: 'http://localhost:11434', model: 'gemma4:31b' },
    threshold: 0.3,
  },
});

gg.chat(options)

The full governance pipeline. Never throws (except after close(), which rejects with INSTANCE_CLOSED) -- errors land in response.error.

const response = await gg.chat({
  messages: [{ role: 'user', content: 'Process refund for order #123' }],
  tools: [refundTool],
  toolHandlers: { process_refund: handleRefund },
  policies: [refundLimitPolicy],
  temperature: 0.7,
});

Returns GovernedResponse:

{
  text: string;              // Model output (or blocked message)
  toolCalls: ToolCallResult[];  // Every tool call with policy verdict
  claims: ClaimRecord[];     // Extracted knowledge claims
  audit: AuditEntry[];       // Full audit trail for this request
  confidence: number;        // [0, 1] confidence score
  escalated: boolean;        // Whether escalation model was consulted
  usage: UsageStats;         // Tokens, latency, cost
  requestId: string;         // Correlation ID
  error?: GovernedErrorInfo;  // Present if the pipeline hit an error
  warnings: string[];        // Degradation warnings
}
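The soft-failure contract means error handling is a field check rather than a try/catch. A minimal handling pattern (a sketch; Result below is a trimmed stand-in for GovernedResponse):

```typescript
// Errors arrive as a field on the response rather than as thrown
// exceptions, so the happy path and the degraded path share one shape.
interface Result {
  text: string;
  warnings: string[];
  error?: { code: string; message: string };
}

function handle(response: Result): string {
  if (response.error) {
    // Degraded path: surface the error instead of crashing.
    return `governance error: ${response.error.code}`;
  }
  for (const w of response.warnings) console.warn(w);
  return response.text;
}
```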

gg.ask(prompt)

Convenience shorthand for single-message chat.

const response = await gg.ask('What is 2+2?');

gg.audit(filter?)

Query the audit trail.

const trail = await gg.audit({ requestId: response.requestId });
const violations = await gg.audit({ policyViolationsOnly: true, limit: 10 });

gg.addPolicy(policy) / gg.removePolicy(name) / gg.listPolicies()

Manage policies at runtime. All changes are audited.

await gg.addPolicy({
  name: 'block-delete',
  description: 'Prevent account deletion',
  rule: { type: 'tool_block', toolName: 'delete_account' },
  mode: 'enforce',
  locked: true,  // Cannot be overridden by per-request policies
});

const policies = await gg.listPolicies();
await gg.removePolicy('block-delete');

gg.flush() / gg.close()

Lifecycle management. close() is idempotent.

await gg.flush();  // Persist pending data
await gg.close();  // Flush + mark disposed

Policy System

Three policy types, three enforcement modes.

Tool Block

Block a tool entirely:

{
  name: 'block-dangerous-tool',
  description: 'Block all calls to dangerous_action',
  rule: { type: 'tool_block', toolName: 'dangerous_action' },
  mode: 'enforce',
}

Use toolName: '*' to block all tools.
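The matching rule can be sketched in a few lines (illustrative only; not the SDK's internal code):

```typescript
// A tool_block rule matches either every tool ('*') or one exact name.
function isBlocked(ruleToolName: string, calledTool: string): boolean {
  return ruleToolName === '*' || ruleToolName === calledTool;
}

isBlocked('*', 'anything');                      // true
isBlocked('dangerous_action', 'dangerous_action'); // true
isBlocked('dangerous_action', 'process_refund');   // false
```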

Tool Constraint

Constrain tool parameters:

{
  name: 'refund-limit',
  description: 'Refund amount must be under $100',
  rule: {
    type: 'tool_constraint',
    toolName: 'process_refund',
    field: 'amount',
    operator: 'lt',
    value: 100,
  },
  mode: 'enforce',
}

Operators: eq, neq, lt, lte, gt, gte, contains, not_contains, in_set, matches.
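The operator semantics can be sketched as follows (an illustrative evaluator; the SDK's internal implementation may differ):

```typescript
type Op = 'eq' | 'neq' | 'lt' | 'lte' | 'gt' | 'gte'
        | 'contains' | 'not_contains' | 'in_set' | 'matches';

// Returns true when the actual parameter value satisfies the constraint.
function evalOp(op: Op, actual: unknown, expected: unknown): boolean {
  switch (op) {
    case 'eq':  return actual === expected;
    case 'neq': return actual !== expected;
    case 'lt':  return (actual as number) <  (expected as number);
    case 'lte': return (actual as number) <= (expected as number);
    case 'gt':  return (actual as number) >  (expected as number);
    case 'gte': return (actual as number) >= (expected as number);
    case 'contains':     return String(actual).includes(String(expected));
    case 'not_contains': return !String(actual).includes(String(expected));
    case 'in_set':       return (expected as unknown[]).includes(actual);
    case 'matches':      return new RegExp(String(expected)).test(String(actual));
  }
}

// The refund-limit rule above passes for amounts under 100:
evalOp('lt', 42, 100);   // true  -- call allowed
evalOp('lt', 250, 100);  // false -- policy violated
```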

Output Constraint

Filter model output text:

{
  name: 'no-profanity',
  description: 'Block responses containing profanity',
  rule: {
    type: 'output_constraint',
    check: 'contains_none',
    values: ['badword1', 'badword2'],
    caseSensitive: false,
  },
  mode: 'enforce',
}

Checks: contains_none, contains_any, contains_all, matches_regex, max_length.
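The contains_none semantics from the example above can be sketched like this (illustrative only; the SDK's internal implementation may differ):

```typescript
// Passes only when the output text contains none of the listed values,
// optionally ignoring case (matching the caseSensitive flag above).
function containsNone(text: string, values: string[], caseSensitive = false): boolean {
  const t = caseSensitive ? text : text.toLowerCase();
  return values.every(v => !t.includes(caseSensitive ? v : v.toLowerCase()));
}
```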

Enforcement Modes

| Mode | Behavior |
|------|----------|
| enforce | Block the action, record violation |
| warn | Allow but record violation |
| log | Allow, record for audit only |
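The three modes differ only in whether a violated rule blocks the action; every violation is recorded either way. A sketch of that decision (illustrative, not the SDK's internal code):

```typescript
type Mode = 'enforce' | 'warn' | 'log';

// Translate a rule violation plus enforcement mode into an outcome:
// only 'enforce' blocks; all modes record the violation for audit.
function decide(violated: boolean, mode: Mode): { allow: boolean; record: boolean } {
  if (!violated) return { allow: true, record: false };
  return { allow: mode !== 'enforce', record: true };
}
```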

Locked Policies

Set locked: true to prevent per-request overrides:

await gg.addPolicy({
  name: 'mandatory-audit',
  description: 'Cannot be overridden',
  rule: { type: 'tool_block', toolName: 'bypass_audit' },
  locked: true,
});

Escalation

When confidence drops below the threshold, the SDK routes to a stronger model for verification. The escalation target is just another provider -- same interface as the primary model:

// Escalate to a bigger local model
const gg = await Governed.create({
  provider: { type: 'ollama', endpoint: 'http://localhost:11434', model: 'gemma4:e4b' },
  governance: { storagePath: './data' },
  escalation: {
    provider: { type: 'ollama', endpoint: 'http://localhost:11434', model: 'gemma4:31b' },
    threshold: 0.3,
  },
});

const response = await gg.ask('Complex legal question...');
if (response.escalated) {
  console.log('Response verified by escalation model');
}

Escalate to any OpenAI-compatible API (Claude, GPT, any hosted endpoint):

escalation: {
  provider: { type: 'openai-compat', endpoint: 'https://api.anthropic.com', model: 'claude-haiku-4-5', apiKey: 'sk-...' },
  threshold: 0.5,
}

Self-hosted escalation with Ollama:

escalation: {
  provider: { type: 'ollama', endpoint: 'http://localhost:11434', model: 'gemma4:31b' },
  threshold: 0.5,
}

The escalated response re-enters the governance pipeline -- it is policy-checked and audited like any other output.
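The trigger itself is a simple threshold comparison (a sketch of the decision described above):

```typescript
// A response scoring below the configured threshold is re-asked of the
// stronger escalation model; otherwise the primary answer stands.
function shouldEscalate(confidence: number, threshold: number): boolean {
  return confidence < threshold;
}

shouldEscalate(0.25, 0.3); // true  -- consult escalation model
shouldEscalate(0.85, 0.3); // false -- primary answer stands
```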

Production Deployment

For production with vLLM or any OpenAI-compatible inference server:

// Production with vLLM
provider: { type: 'openai-compat', endpoint: 'http://your-gpu-server:8000', model: 'gemma4-e4b' }

Providers

All providers use the same config shape. Switching is a one-line change:

// Ollama
{ type: 'ollama', endpoint: 'http://localhost:11434', model: 'gemma4:e4b' }

// llama.cpp server
{ type: 'llama-cpp', endpoint: 'http://localhost:8080', model: 'gemma-4-e4b' }

// vLLM / TGI / any OpenAI-compatible
{ type: 'openai-compat', endpoint: 'http://localhost:8000', model: 'gemma-4-e4b', apiKey: '...' }

Types

All public types are exported:

import type {
  GovernedConfig,
  ChatOptions,
  GovernedResponse,
  Policy,
  PolicyVerdict,
  AuditEntry,
  ClaimRecord,
  ToolCallResult,
  UsageStats,
  GovernedErrorInfo,
} from 'governed';

Troubleshooting

PROVIDER_UNREACHABLE — Model server not responding

The SDK could not reach your model server during the health check.

  1. Verify Ollama is running: ollama list should show your model.
  2. If Ollama is not running: ollama serve
  3. If the model is not pulled: ollama pull gemma4:e4b
  4. Check that endpoint in your config matches the server address (default: http://localhost:11434).

GOVERNANCE_INIT_FAILED — Storage initialization error

The SDK could not initialize governance storage at the configured path.

  1. Verify the directory in governance.storagePath exists: mkdir -p ./governed-data
  2. Check that the process has write permissions to that directory.
  3. If using a relative path, ensure it resolves correctly from your working directory.

No tool calls returned

If the model responds with text but does not invoke tools:

  1. Verify your model supports function calling. Gemma 4 (e4b and above) supports it natively.
  2. Check that tools are passed in your chat() options with correct JSON Schema parameters.
  3. Some smaller models may not reliably produce structured tool calls. Try a larger variant.

INSTANCE_CLOSED — Using a disposed instance

You called chat(), ask(), audit(), or a policy method after calling close().

Create a new instance with Governed.create() -- closed instances cannot be reused.

INVALID_CONFIG — Configuration validation failed

The error message includes which field failed and what was expected. Read the remediation property on the error for specific guidance:

import { Governed, GovernedError } from 'governed';

try {
  const gg = await Governed.create(config);
} catch (err) {
  if (err instanceof GovernedError) {
    console.error(err.message);       // What went wrong
    console.error(err.remediation);   // How to fix it
  }
}

License

Apache 2.0

Built by SolisHQ

Governed is built on Limen AI, the governance substrate for autonomous AI systems.