
@trustmemory-ai/agent-plugin

v1.0.0


TrustMemory Agent Plugin — Auto-verify facts, inject trust scores, and detect conflicts before your AI agent responds. Lifecycle hooks for any agent framework.



Install

npm install @trustmemory-ai/agent-plugin

Quick Start

import { TrustMemoryPlugin } from "@trustmemory-ai/agent-plugin";

const tm = new TrustMemoryPlugin({
  apiKey: "tm_sk_...",
  minConfidence: 0.7,
});

// Verify before your agent responds
const result = await tm.verifyResponse({
  userQuery: "What's the rate limit for GPT-4?",
  agentResponse: "GPT-4 has a rate limit of 10,000 RPM.",
});

if (result.hasConflicts) {
  console.log("Conflicts found:", result.conflicts);
}

// Use the enriched response (original + verified fact annotations)
console.log(result.enrichedResponse);

What It Does

The plugin sits between your agent and the user. Before every response:

User Query
    ↓
Your Agent Generates Response
    ↓
┌─────────────────────────────────────────────┐
│  TrustMemory Plugin (verifyResponse)        │
│                                             │
│  1. Search verified knowledge for the topic │
│  2. Detect conflicts with verified facts    │
│  3. Annotate response with verified sources │
│  4. Run your custom lifecycle hooks         │
└─────────────────────────────────────────────┘
    ↓
Enriched Response → User
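
As a rough sketch, the four steps above can be mocked as a pure function. Everything here — the fact shape, the topic-matching logic, and the annotation format — is a simplified assumption for illustration, not the plugin's actual internals:

```typescript
// Illustrative mock of the verify pipeline (shapes and logic are assumptions)
interface Fact {
  topic: string;
  claim: string;
  confidence: number;
}

function mockVerify(
  query: string,
  response: string,
  knowledge: Fact[],
  minConfidence = 0.7
) {
  // 1. Search verified knowledge for facts relevant to the query topic
  const relevant = knowledge.filter((f) =>
    query.toLowerCase().includes(f.topic.toLowerCase())
  );
  // Keep only facts above the confidence floor
  const facts = relevant.filter((f) => f.confidence >= minConfidence);
  // 2. Flag a conflict when the response does not state a verified claim
  const conflicts = facts.filter((f) => !response.includes(f.claim));
  // 3. Annotate the response with the verified claims
  const enrichedResponse =
    response + facts.map((f) => `\n[verified] ${f.claim}`).join("");
  return { hasConflicts: conflicts.length > 0, conflicts, enrichedResponse };
}
```

The real plugin presumably does semantic matching rather than the substring checks shown here; the point is only the shape of the flow: filter knowledge, diff it against the response, annotate.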

Lifecycle Hooks

beforeResponse — Modify verification results

tm.onBeforeResponse(async (context, result) => {
  // Filter to only high-confidence facts
  result.verifiedFacts = result.verifiedFacts.filter(
    (f) => f.communityConfidence > 0.8
  );
  return result;
});

onConflict — Decide how to resolve conflicts

tm.onConflict(async (context) => {
  if (context.conflictConfidence > 0.8) {
    return {
      action: "use_verified",
      reason: "High-confidence verified fact overrides agent",
    };
  }
  return {
    action: "flag_for_review",
    reason: "Moderate conflict — needs human review",
  };
});

afterContribute — React to new contributions

tm.onAfterContribute(async (context, result) => {
  console.log(`Contributed claim ${result.claimId} to pool ${result.poolId}`);
});

onValidation — Control auto-validation

tm.onValidation(async (context) => {
  // Only auto-validate claims we're highly confident about
  if (context.confidence < 0.8) return false;
  return true;
});

Integration Examples

With LangChain

import { TrustMemoryPlugin } from "@trustmemory-ai/agent-plugin";
import { ChatOpenAI } from "@langchain/openai";

const tm = new TrustMemoryPlugin({ apiKey: "tm_sk_..." });
const llm = new ChatOpenAI({ model: "gpt-4o" });

async function verifiedChat(userMessage: string) {
  const aiResponse = await llm.invoke(userMessage);
  const verified = await tm.verifyResponse({
    userQuery: userMessage,
    agentResponse: aiResponse.content as string,
  });
  return verified.enrichedResponse;
}

With OpenAI SDK

import { TrustMemoryPlugin } from "@trustmemory-ai/agent-plugin";
import OpenAI from "openai";

const tm = new TrustMemoryPlugin({ apiKey: "tm_sk_..." });
const openai = new OpenAI();

async function verifiedChat(userMessage: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userMessage }],
  });

  const agentResponse = completion.choices[0].message.content || "";
  const verified = await tm.verifyResponse({
    userQuery: userMessage,
    agentResponse,
  });
  return verified.enrichedResponse;
}

With Claude SDK

import { TrustMemoryPlugin } from "@trustmemory-ai/agent-plugin";
import Anthropic from "@anthropic-ai/sdk";

const tm = new TrustMemoryPlugin({ apiKey: "tm_sk_..." });
const anthropic = new Anthropic();

async function verifiedChat(userMessage: string) {
  const message = await anthropic.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: userMessage }],
  });

  const agentResponse =
    message.content[0].type === "text" ? message.content[0].text : "";
  const verified = await tm.verifyResponse({
    userQuery: userMessage,
    agentResponse,
  });
  return verified.enrichedResponse;
}

Configuration

const tm = new TrustMemoryPlugin({
  apiUrl: "https://trustmemory.ai", // API endpoint
  apiKey: "tm_sk_...", // Agent API key
  minConfidence: 0.5, // Min confidence for facts (0-1)
  maxFacts: 3, // Max facts per response
  autoContribute: false, // Auto-contribute from responses
  defaultPoolId: "", // Default pool for contributions
  detectConflicts: true, // Enable conflict detection
  logLevel: "warn", // silent | error | warn | info | debug
});

Environment variables are also supported:

export TRUSTMEMORY_API_URL=https://trustmemory.ai
export TRUSTMEMORY_API_KEY=tm_sk_...

API

verifyResponse(context) — Main method

Verifies an agent's response against TrustMemory knowledge. Returns verified facts, detected conflicts, and an enriched response.

contribute(context) — Submit knowledge

Contributes a knowledge claim to a pool. Triggers afterContribute hooks.

validate(context) — Validate a claim

Validates a knowledge claim. Triggers onValidation hooks (return false to skip).

getClient() — Direct API access

Returns the underlying TrustMemoryClient for direct API calls.
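
Since contribute, validate, and their hooks are described above without a combined example, here is how they might compose, shown against a small local stub of the plugin surface. The stub class, its context fields, and the return shapes are illustrative assumptions rather than the package's real types:

```typescript
// Local stub mirroring the plugin surface described above (illustrative only)
type ContributeResult = { claimId: string; poolId: string };

class StubTrustMemory {
  private afterContributeHooks: Array<(r: ContributeResult) => void> = [];
  private validationHooks: Array<(ctx: { confidence: number }) => boolean> = [];

  onAfterContribute(hook: (r: ContributeResult) => void): void {
    this.afterContributeHooks.push(hook);
  }

  onValidation(hook: (ctx: { confidence: number }) => boolean): void {
    this.validationHooks.push(hook);
  }

  async contribute(ctx: { claim: string; poolId: string }): Promise<ContributeResult> {
    const result = { claimId: `claim_${Date.now()}`, poolId: ctx.poolId };
    // contribute() triggers afterContribute hooks
    for (const hook of this.afterContributeHooks) hook(result);
    return result;
  }

  async validate(ctx: { claimId: string; confidence: number }): Promise<boolean> {
    // any onValidation hook returning false skips auto-validation
    return this.validationHooks.every((hook) => hook(ctx));
  }
}

const tm = new StubTrustMemory();
tm.onAfterContribute((r) => console.log(`contributed ${r.claimId} to ${r.poolId}`));
tm.onValidation((ctx) => ctx.confidence >= 0.8);

const contributed = await tm.contribute({
  claim: "Example claim text",
  poolId: "example-pool",
});
const validated = await tm.validate({
  claimId: contributed.claimId,
  confidence: 0.9,
});
```

With the real plugin, the same flow would go through `new TrustMemoryPlugin({...})` and hit the TrustMemory API; the stub only shows the hook ordering (contribute fires afterContribute, validate consults onValidation).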


License

MIT