
@reconai/sdk

v0.1.2

Trust layer for agent systems: Reflex scoring, drift signals, GhostLog-oriented contracts, and recovery routing.

ReconAI SDK

A trust layer for agent systems.

ReconAI helps detect silent drift before outputs visibly fail: trust scoring (Reflex), structured failure memory (GhostLog contracts in the full platform), classification, and recovery routing. This package is the TypeScript SDK used by the Recon product; the public developer experience is stabilizing around install, examples, and predictable entrypoints.

Status

The SDK already powers internal flows for Reflex scoring, guard middleware, swarm/mission trust, and related APIs. We are tightening package naming, install flow, examples, and stable public API boundaries. Advanced subpaths such as @reconai/sdk/migration ship as TypeScript sources for operator tooling; treat the root export as the primary integration surface.

Publishing note (maintainers)

This package depends on @reconai/reflex-core via workspace:* in the monorepo. Publish with pnpm (after @reconai/reflex-core is on npm), e.g. pnpm publish --filter @reconai/sdk --access public from the repo root, so pnpm rewrites that dependency to a real semver range in the tarball. A plain npm publish from the package folder can leave workspace:* in the published package.json, which makes npm install fail for users with EUNSUPPORTEDPROTOCOL.
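For illustration, this is the rewrite pnpm performs in the published package.json (the ^0.1.0 range here is a placeholder; pnpm substitutes whatever version of @reconai/reflex-core is current at publish time):

```diff
 "dependencies": {
-  "@reconai/reflex-core": "workspace:*"
+  "@reconai/reflex-core": "^0.1.0"
 }
```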

Install

npm install @reconai/sdk

Dependencies: @reconai/reflex-core ships as a regular (non-peer) dependency of this package, so you typically only need to install @reconai/sdk.

Try it in 30 seconds

The CLI binary is reconai, but the npm package is @reconai/sdk. There is no unscoped reconai package—install the scoped package first (or use -p below).

npm install @reconai/sdk
npx reconai demo
npx reconai explain

One-shot without adding a dependency to package.json (downloads the package into npx’s cache):

npx --yes -p @reconai/sdk reconai demo
npx --yes -p @reconai/sdk reconai explain

This shows a terminal Reflex Score dashboard for an output that looks plausible but should not be trusted—before you wire anything into your own agent.

Optional: scaffold a local file and run the demo once (after npm install @reconai/sdk, or prefix with npx --yes -p @reconai/sdk):

npx reconai init

Render a real guard() run in the terminal

Synthetic onboarding (npx reconai demo) and live runs share the same renderer. After guard() returns, pass the same request you used for evaluation plus the result:

import {
  configure,
  guard,
  reflexDashboardFromGuardResult,
  renderReflexDashboard,
} from "@reconai/sdk";

configure({ baseUrl: "" }); // empty baseUrl: local-only evaluation, no dashboard posting

const request = {
  agentId: "support_agent",
  actionType: "summarize_thread",
  toolName: "ticket_api",
  signals: {
    contextIntegrity: 52,
    behavioralConsistency: 58,
    toolRisk: 72,
    outcomeConfidence: 92,
    policyAlignment: 55,
  },
  requestId: `run-${Date.now()}`,
};

const result = await guard(
  request,
  async () => ({
    summary: "Probably a billing issue, but I'm not fully sure.",
  })
);

console.log(renderReflexDashboard(reflexDashboardFromGuardResult(request, result)));

For advanced use (custom trends, notes, or when you already have a full ReflexSignals object from guardEvaluate), use buildReflexDashboardState directly.

60-second example

guard evaluates Reflex signals locally, applies policy, optionally runs your tool/lambda, and (when a dashboard is configured) posts trust state for ingestion.

import { configure, guard } from "@reconai/sdk";

configure({ baseUrl: "http://localhost:3000" }); // your Recon dashboard origin, or omit for local-only eval

const result = await guard(
  {
    agentId: "support_agent",
    actionType: "summarize_thread",
    toolName: "ticket_api",
    signals: {
      contextIntegrity: 88,
      behavioralConsistency: 84,
      toolRisk: 55,
      outcomeConfidence: 82,
      policyAlignment: 78,
    },
    requestId: `demo-${Date.now()}`,
  },
  async () => {
    return { summary: "…" };
  }
);

if (result.blocked) {
  console.error(result.policyError ?? result.decision);
} else {
  console.log(result.score, result.decision, result.result);
}

What you get back

GuardResult includes decision (e.g. EXECUTE, RETRY, BLOCK), score (Reflex score 0–100), autonomyConfidence, predictionGap, optional result from your callback, and blocked / policyError when execution is not allowed.
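The shape described above can be sketched as a TypeScript type. Field names come from this section; the exact types and optionality are assumptions, so check the declarations exported by @reconai/sdk rather than copying this verbatim:

```typescript
// Illustrative sketch of GuardResult based on the fields described above.
// The real exported type in @reconai/sdk may differ in detail.
type GuardDecision = "EXECUTE" | "RETRY" | "BLOCK";

interface GuardResultSketch<T> {
  decision: GuardDecision;
  score: number;              // Reflex score, 0-100
  autonomyConfidence: number;
  predictionGap: number;
  result?: T;                 // value returned by your callback, when executed
  blocked?: boolean;          // set when execution was not allowed
  policyError?: string;       // reason execution was blocked, when available
}

// Branching on the result, mirroring the 60-second example above:
function summarize(r: GuardResultSketch<{ summary: string }>): string {
  return r.blocked ? (r.policyError ?? r.decision) : `${r.score} ${r.decision}`;
}
```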

Core concepts

| Concept | Meaning |
|---------|---------|
| Reflex Score | Trust signal for a step or action, derived from signal dimensions (context integrity, tool risk, etc.). |
| GhostLog | In the full platform, structured memory of drift/failure patterns (types live in this SDK for contracts and UI). |
| Recovery | Treated as a state transition in product flows, not only "retry again." |

More detail: Reflex Score · GhostLog · Recovery.
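As an illustration of the signal shape only (the real Reflex weighting lives inside the SDK's computeReflexScore and is not a plain mean), a toy stand-in score could average the five 0-100 dimensions used throughout the examples. The inversion of toolRisk is an assumption for the sketch, since higher risk should presumably pull trust down:

```typescript
// Toy stand-in for a Reflex-style score over the five signal dimensions
// used in the examples. Not the SDK's actual calibration.
interface ReflexSignalsSketch {
  contextIntegrity: number;
  behavioralConsistency: number;
  toolRisk: number;
  outcomeConfidence: number;
  policyAlignment: number;
}

function toyReflexScore(s: ReflexSignalsSketch): number {
  const values = [
    s.contextIntegrity,
    s.behavioralConsistency,
    100 - s.toolRisk, // assumption: higher toolRisk lowers trust
    s.outcomeConfidence,
    s.policyAlignment,
  ];
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return Math.round(mean); // clamp to an integer 0-100 trust signal
}
```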

Flow (high level)

flowchart TD
  A[Agent / tool step] --> G[guard / guardEvaluate]
  G --> R[Reflex score + decision]
  R --> I[Trust state ingestion optional]
  R --> P[Policy + recovery routing in product]

Runnable examples (this repo)

From packages/sdk:

pnpm install
pnpm run build
pnpm run example:quickstart
pnpm run example:drift
pnpm run example:langchain

LangChain-style wrapper

import { configure, withRecon } from "@reconai/sdk";

configure({ baseUrl: "http://localhost:3000" });

const out = await withRecon(
  {
    agentId: "lc_agent",
    actionType: "runnable_invoke",
    toolName: "chain",
    signals: {
      contextIntegrity: 86,
      behavioralConsistency: 85,
      toolRisk: 48,
      outcomeConfidence: 83,
      policyAlignment: 80,
    },
  },
  async (input: string) => `Echo: ${input}`,
  "hello",
);

console.log(out.score, out.decision, out.result);

Or import the adapter only: import { withRecon } from "@reconai/sdk/adapters/langchain".

What is stable today vs tightening

Stable to build on: root exports — guard, guardEvaluate, configure, Reflex re-exports (computeReflexScore, …), and shared trust event types.

Still tightening: migration/deploy subpaths, full middleware surface for every framework, and semver guarantees as we gather external feedback.

Early builders

If you are stress-testing long chains, tool-heavy agents, or nested tool misuse, open an issue or reach out — that feedback shapes calibration and recovery policy.

Docs

License

MIT — see LICENSE.