# @sidclaw/langchain-governance
SidClaw governance integration for LangChain.js — policy evaluation, human approval, and audit trails for AI agent tools.
## What it does
Wraps your LangChain.js tools with governance. Before any tool executes:
- Allowed actions run immediately
- High-risk actions require human approval
- Prohibited actions are blocked before execution
- Every decision is logged with a tamper-proof audit trail
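The routing described above can be sketched as a small decision function. This is an illustrative sketch only: the real policy evaluation happens through the SidClaw API, and the names here (`PolicyDecision`, `routeToolCall`) are hypothetical, not part of the SDK.

```typescript
// Hypothetical sketch of the three-way routing described above.
// Real policy decisions come from the SidClaw service, not local code.
type PolicyDecision = 'allow' | 'require_approval' | 'block';

interface ToolCall {
  tool: string;
  decision: PolicyDecision;
}

// Route a tool call according to its policy decision.
function routeToolCall(call: ToolCall): string {
  switch (call.decision) {
    case 'allow':
      return `executing ${call.tool}`; // allowed actions run immediately
    case 'require_approval':
      return `awaiting approval for ${call.tool}`; // paused for a human
    case 'block':
      return `blocked ${call.tool}`; // prohibited actions never execute
  }
}

console.log(routeToolCall({ tool: 'send_email', decision: 'require_approval' }));
// → "awaiting approval for send_email"
```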
## Installation

```shell
npm install @sidclaw/langchain-governance @langchain/core
```

## Quick Start
### Option 1: Enforce policies (recommended)
```typescript
import { governTools } from '@sidclaw/langchain-governance';
import { AgentIdentityClient } from '@sidclaw/sdk';

const client = new AgentIdentityClient({
  apiKey: 'ai_...',
  apiUrl: 'https://api.sidclaw.com',
  agentId: 'your-agent-id',
});

// Wrap your existing tools — no changes to tool code
const governed = governTools(myTools, { client, data_classification: 'confidential' });
```

### Option 2: Monitor only (audit without blocking)
```typescript
import { GovernanceCallbackHandler } from '@sidclaw/langchain-governance';

const handler = new GovernanceCallbackHandler(client);
// Pass as callback — every tool call is logged, nothing is blocked
```
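For readers curious how an audit trail can be "tamper-proof", the standard technique is hash chaining: each log entry's hash covers the previous entry's hash, so editing any past record invalidates every hash after it. The sketch below demonstrates the general idea with Node's built-in `crypto` module; it is not SidClaw's actual log format, and the names (`AuditEntry`, `appendEntry`, `verify`) are illustrative.

```typescript
import { createHash } from 'node:crypto';

// One tamper-evident log entry: its hash covers the previous entry's hash.
interface AuditEntry {
  tool: string;
  decision: string;
  prevHash: string;
  hash: string;
}

// Append an entry, chaining it to the hash of the last entry.
function appendEntry(log: AuditEntry[], tool: string, decision: string): AuditEntry[] {
  const prevHash = log.length ? log[log.length - 1].hash : 'genesis';
  const hash = createHash('sha256')
    .update(prevHash + tool + decision)
    .digest('hex');
  return [...log, { tool, decision, prevHash, hash }];
}

// Verify the whole chain by recomputing every hash from scratch.
function verify(log: AuditEntry[]): boolean {
  return log.every((entry, i) => {
    const prevHash = i === 0 ? 'genesis' : log[i - 1].hash;
    const expected = createHash('sha256')
      .update(prevHash + entry.tool + entry.decision)
      .digest('hex');
    return entry.prevHash === prevHash && entry.hash === expected;
  });
}
```

Changing any field of any past entry makes `verify` return `false`, since the recomputed hashes no longer match the recorded chain.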