
@pot-sdk2/langchain

v0.1.0



LangChain integration for the ThoughtProof Protocol. Verify AI outputs — in any agent, chain, or retriever — using multi-model consensus verification.

Overview

Three drop-in components:

| Component | What it does |
|---|---|
| ThoughtProofVerifyTool | A StructuredTool agents can call to fact-check any claim |
| ThoughtProofCallbackHandler | Automatically verifies every LLM output in a chain |
| ThoughtProofRetrieverWrapper | Filters retrieved documents by verification confidence |

Install

npm install @pot-sdk2/langchain

Peer dependencies (install separately):

npm install pot-sdk @langchain/core

Quick Start

1. ThoughtProofVerifyTool — Agent fact-checker

Give any LangChain agent the ability to verify claims on demand.

import { ThoughtProofVerifyTool } from '@pot-sdk2/langchain';
import { createToolCallingAgent, AgentExecutor } from 'langchain/agents';
import { ChatAnthropic } from '@langchain/anthropic';
import { pull } from 'langchain/hub';

// Set API keys in env:
//   ANTHROPIC_API_KEY, XAI_API_KEY, DEEPSEEK_API_KEY

const verifyTool = new ThoughtProofVerifyTool({
  models: [
    'anthropic/claude-sonnet-4-5',
    'xai/grok-4-1-fast',
    'deepseek/deepseek-chat',
  ],
  mode: 'adversarial',   // adversarial | resistant | balanced | calibrative
  threshold: 0.7,        // minimum confidence to pass
  domain: 'general',     // optional: medical | legal | financial | code | creative
});

const llm = new ChatAnthropic({ model: 'claude-sonnet-4-5' });
const prompt = await pull('hwchase17/openai-tools-agent');

const agent = createToolCallingAgent({ llm, tools: [verifyTool], prompt });
const executor = new AgentExecutor({ agent, tools: [verifyTool] });

const result = await executor.invoke({
  input: 'Verify this claim: The Great Wall of China is visible from space.',
});

console.log(result.output);
// Agent calls verify_claim → UNCERTAIN (60.3%) — not supported by evidence

Tool schema — the agent calls it with:

{ "claim": "The claim text to verify" }

Tool output — a human-readable string with:

Verdict: VERIFIED
Confidence: 87.4%
Threshold: 70.0% — PASSED
Synthesis: Multiple sources confirm that water boils at 100°C at sea level...

2. ThoughtProofCallbackHandler — Automatic chain verification

Attach to any chain to verify every LLM output automatically. Verification happens in the background — the chain output is not blocked or modified.

import { ThoughtProofCallbackHandler } from '@pot-sdk2/langchain';
import { ChatAnthropic } from '@langchain/anthropic';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

const handler = new ThoughtProofCallbackHandler({
  models: ['anthropic/claude-sonnet-4-5', 'xai/grok-4-1-fast'],
  mode: 'calibrative',
  threshold: 0.6,
  onVerified: (result, text) => {
    console.log(`✓ Output verified (${(result.confidence * 100).toFixed(0)}%):`, text.slice(0, 60));
  },
  onFailed: (result, text) => {
    console.warn(`✗ Verification failed (${(result.confidence * 100).toFixed(0)}%, ${result.verdict}):`, text.slice(0, 60));
  },
});

const llm = new ChatAnthropic({ model: 'claude-sonnet-4-5' });
const prompt = ChatPromptTemplate.fromMessages([
  ['human', '{question}'],
]);
const chain = prompt.pipe(llm).pipe(new StringOutputParser());

const answer = await chain.invoke(
  { question: 'What is the boiling point of water?' },
  { callbacks: [handler] },
);

console.log('Answer:', answer);

// Access all accumulated results:
console.log('All results:', handler.results);

How it works:

  1. handleLLMStart captures the prompt as claim context
  2. handleLLMEnd verifies each generated text against the captured prompt
  3. onVerified / onFailed fire asynchronously with the result
  4. Verification errors are logged and do not break the chain
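The accumulated handler.results array can be post-processed after a run. A sketch, assuming each entry carries the verdict and confidence fields used by the callbacks above (the summarize helper itself is hypothetical, not part of the package):

```typescript
// Assumed shape of one accumulated result entry.
type ResultEntry = { verdict: string; confidence: number };

// Hypothetical helper: report how many outputs cleared verification.
function summarize(results: ResultEntry[]): string {
  const passed = results.filter((r) => r.verdict === 'VERIFIED').length;
  const rate = results.length ? (passed / results.length) * 100 : 0;
  return `${passed}/${results.length} outputs verified (${rate.toFixed(0)}%)`;
}

// e.g. console.log(summarize(handler.results)) after chain.invoke(...)
console.log(summarize([
  { verdict: 'VERIFIED', confidence: 0.87 },
  { verdict: 'UNCERTAIN', confidence: 0.6 },
]));
// → "1/2 outputs verified (50%)"
```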

3. ThoughtProofRetrieverWrapper — Verified RAG

Wrap any retriever to filter out low-confidence documents before they reach the LLM context. Passing documents are annotated with tp_verdict, tp_confidence, and tp_flags in their metadata.

import { ThoughtProofRetrieverWrapper } from '@pot-sdk2/langchain';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { OpenAIEmbeddings } from '@langchain/openai';

const vectorstore = await MemoryVectorStore.fromTexts(
  ['Paris is the capital of France.', 'The Moon is made of cheese.'],
  [{}, {}],
  new OpenAIEmbeddings(),
);

const baseRetriever = vectorstore.asRetriever({ k: 5 });

const verifiedRetriever = new ThoughtProofRetrieverWrapper({
  retriever: baseRetriever,
  models: ['anthropic/claude-sonnet-4-5', 'deepseek/deepseek-chat'],
  minConfidence: 0.6,    // documents below this are filtered out
  mode: 'resistant',
});

// Only verified documents reach the LLM
const docs = await verifiedRetriever.getRelevantDocuments('What is the capital of France?');

console.log(docs.length); // 1 — the Moon/cheese document was filtered
console.log(docs[0].metadata);
// {
//   tp_verdict: 'VERIFIED',
//   tp_confidence: 0.91,
//   tp_flags: [],
// }

Configuration Reference

Model Specs

Models can be specified as "provider/model" strings or full ProviderConfig objects:

// String format — API key from environment variable
models: ['anthropic/claude-sonnet-4-5']
// → reads process.env.ANTHROPIC_API_KEY

// Full config — explicit API key
models: [{ name: 'anthropic', model: 'claude-opus-4-6', apiKey: 'sk-...' }]

Supported providers and their env vars:

| Provider string | Environment variable |
|---|---|
| anthropic | ANTHROPIC_API_KEY |
| openai | OPENAI_API_KEY |
| xai | XAI_API_KEY |
| deepseek | DEEPSEEK_API_KEY |
| google | GOOGLE_API_KEY |
| moonshot | MOONSHOT_API_KEY |
| mistral | MISTRAL_API_KEY |
| cohere | COHERE_API_KEY |
| custom | CUSTOM_API_KEY (auto-derived) |
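Since a missing environment variable only surfaces once a model is actually called, a pre-flight check can fail fast. A hypothetical helper (not part of the package) that maps each "provider/model" string to its env var from the table above and reports any that are unset:

```typescript
// Provider → env var mapping, mirroring the table above.
const PROVIDER_ENV_VARS: Record<string, string> = {
  anthropic: 'ANTHROPIC_API_KEY',
  openai: 'OPENAI_API_KEY',
  xai: 'XAI_API_KEY',
  deepseek: 'DEEPSEEK_API_KEY',
  google: 'GOOGLE_API_KEY',
  moonshot: 'MOONSHOT_API_KEY',
  mistral: 'MISTRAL_API_KEY',
  cohere: 'COHERE_API_KEY',
};

// Hypothetical helper: return the env vars that are required by the
// given "provider/model" strings but not set in the environment.
function missingKeys(models: string[]): string[] {
  return models
    .map((spec) => PROVIDER_ENV_VARS[spec.split('/')[0]])
    .filter((envVar): envVar is string => !!envVar && !process.env[envVar]);
}

// Call before constructing any component:
const missing = missingKeys(['anthropic/claude-sonnet-4-5', 'xai/grok-4-1-fast']);
if (missing.length) {
  console.warn(`Missing API keys: ${missing.join(', ')}`);
}
```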

Critic Modes (mode)

| Mode | Behavior |
|---|---|
| adversarial | Red-team mode — find every flaw. Highest dissent. |
| resistant | Require evidence for each objection. Fewer false positives. |
| balanced | Adversarial on facts, resistant on logic. Default in pot-sdk. |
| calibrative | Re-score confidence without generating new objections. |

ThoughtProofVerifyToolOptions

{
  models: ModelSpec[];          // required
  mode?: CriticMode;            // default: pot-sdk default
  threshold?: number;           // default: 0.7
  domain?: DomainProfile;       // 'medical' | 'legal' | 'financial' | 'code' | 'creative' | 'general'
  verificationMode?: 'basic' | 'standard'; // default: 'basic'
}

ThoughtProofCallbackHandlerOptions

{
  models: ModelSpec[];          // required
  mode?: CriticMode;
  threshold?: number;           // default: 0.6
  domain?: DomainProfile;
  onVerified?: (result: VerificationResult, text: string) => void;
  onFailed?: (result: VerificationResult, text: string) => void;
}

ThoughtProofRetrieverWrapperOptions

{
  retriever: BaseRetriever;     // required — the underlying retriever
  models: ModelSpec[];          // required
  mode?: CriticMode;
  minConfidence?: number;       // default: 0.6
  domain?: DomainProfile;
}

TypeScript

All types are exported:

import type {
  ModelSpec,
  ThoughtProofVerifyToolOptions,
  ThoughtProofCallbackHandlerOptions,
  ThoughtProofRetrieverWrapperOptions,
  VerificationResult,
  CriticMode,
  DomainProfile,
  ProviderConfig,
} from '@pot-sdk2/langchain';

ESM + CJS

The package ships both ESM and CJS builds:

// ESM
import { ThoughtProofVerifyTool } from '@pot-sdk2/langchain';

// CJS
const { ThoughtProofVerifyTool } = require('@pot-sdk2/langchain');

License

MIT