@orthanc-protocol/client

v0.2.0

Universal SDK for querying agent memory. The Orthanc Protocol standard.


Orthanc Client SDK

The open-source SDK for querying agent memory. Build on a standard interface that works everywhere.

What is this

Orthanc Client SDK is a universal way to query long-term memory for AI agents. Whether you're building a chatbot, a support agent, or an autonomous system, this SDK gives you one interface that works locally or in the cloud.

Think of it like this: instead of every AI project building memory differently, they all speak the same language through the Orthanc Protocol.

Installation

npm install @orthanc-protocol/client

Quick start

import { OrthancsClient } from "@orthanc-protocol/client";

const client = new OrthancsClient({
  endpoint: "https://api.orthanc.ai",
  apiKey: "your-api-key",
});

const result = await client.query("user-123", "What do I like?");
console.log(result.memories);

That's it. Your agent now has access to long-term memory.

Try it now (Public Demo Key)

Want to test without signing up? Use our public demo key:

import { OrthancsClient } from "@orthanc-protocol/client";

const client = new OrthancsClient({
  endpoint: "https://api.orthanc.ai",
  apiKey: "orth_demo_public_2026",  // Public demo key
});

// Store a memory
await client.syncMessages("demo-user", [
  { role: "user", content: "I love hiking in the mountains" },
  { role: "assistant", content: "That sounds wonderful!" }
]);

// Query memories (~150ms from production servers)
const result = await client.query("demo-user", "What are my hobbies?");
console.log(result.memories);     // ["User loves hiking in the mountains"]
console.log(result.latency_ms);   // ~150ms
console.log(result.queryType);    // "question_match"

Note: The demo key is rate-limited and shared. Get your own key at orthanc.ai for production use.

Features

Query memory

Retrieve relevant memories for any user query. The SDK handles query type detection, scoring, and result formatting.

const result = await client.query("user-123", "What are my hobbies?", {
  matchThreshold: 0.5,
  matchCount: 5,
  timeFilter: "month",
});

console.log(result.memories);     // Array of memory strings
console.log(result.scores);       // Relevance scores (0-1)
console.log(result.queryType);    // "question_match" | "vector_search" | etc.
console.log(result.latency_ms);   // Processing time in ms
console.log(result.requestId);    // Request ID for debugging

Sync memories

Ingest new memories from chat messages or raw text.

await client.syncMessages("user-123", [
  { role: "user", content: "I just moved to San Francisco" },
  { role: "assistant", content: "How do you like it?" },
  { role: "user", content: "I love the weather here" },
]);

await client.syncText("user-123", "User works at Google as a software engineer");

Batch operations

Create, update, or delete multiple memories in a single request.

const result = await client.batch({
  userId: "user-123",
  operations: [
    { action: "create", text: "User likes coffee" },
    { action: "update", id: "mem-1", updates: { category: "food" } },
    { action: "delete", id: "mem-2" },
  ],
});

console.log(result.results.created);
console.log(result.results.updated);
console.log(result.results.deleted);

Export and import

Export all memories for data portability or migration.

const memories = await client.exportAll("user-123");

for (const memory of memories) {
  console.log(memory.content, memory.category, memory.createdAt);
}
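
For the import side, one option is to replay exported memories into another client through syncText. This is a minimal sketch under the assumption that each exported memory's content field holds plain text suitable for re-ingestion; it is not a dedicated import API.

import { OrthancsClient, LocalClient } from "@orthanc-protocol/client";

// Export from the hosted service and re-ingest into a local store.
const source = new OrthancsClient({
  endpoint: "https://api.orthanc.ai",
  apiKey: "your-api-key",
});
const target = new LocalClient();

const exported = await source.exportAll("user-123");
for (const memory of exported) {
  await target.syncText("user-123", memory.content);
}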

Webhooks

Subscribe to real-time notifications when memories change.

const webhook = await client.createWebhook({
  url: "https://your-server.com/webhook",
  events: ["memory.created", "memory.updated", "memory.deleted"],
  secret: "your-hmac-secret",
});
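
On the receiving server you would verify each delivery against the shared secret. Here is a minimal HMAC verification sketch using Node's crypto module; the signature header name ("x-orthanc-signature") and hex encoding are assumptions for illustration, not documented behavior.

import { createHmac, timingSafeEqual } from "node:crypto";

// Compare the delivery's signature header against an HMAC of the raw body.
// Uses a constant-time comparison to avoid timing leaks.
function verifyWebhook(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}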

Caching

Built-in query caching reduces API calls and improves response times.

const client = new OrthancsClient({
  endpoint: "https://api.orthanc.ai",
  apiKey: "your-api-key",
  cache: {
    enabled: true,
    ttl: 60000,
    maxSize: 1000,
  },
});

Error handling

Typed errors for all failure cases with automatic retries for transient failures.

import {
  OrthancsClient,
  AuthenticationError,
  RateLimitError,
  ValidationError,
} from "@orthanc-protocol/client";

try {
  const result = await client.query("user-123", "What do I like?");
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error("Invalid API key");
  } else if (error instanceof RateLimitError) {
    console.error("Rate limit exceeded, retry after:", error.retryAfter);
  } else if (error instanceof ValidationError) {
    console.error("Invalid request:", error.message);
  }
}

Local development

Test your agent against the in-memory store without hitting an external API.

import { LocalClient } from "@orthanc-protocol/client";

const client = new LocalClient();

await client.syncText("user-123", "User likes hiking");
const result = await client.query("user-123", "What does the user like?");

console.log(result.memories);

Configuration

The client accepts these options:

const client = new OrthancsClient({
  endpoint: "https://api.orthanc.ai",
  apiKey: "your-api-key",
  timeout: 30000,
  retries: 3,
  retryDelay: 1000,
  cache: {
    enabled: true,
    ttl: 60000,
    maxSize: 1000,
  },
});

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| endpoint | string | required | API endpoint URL |
| apiKey | string | required | Your API key |
| timeout | number | 30000 | Request timeout in milliseconds |
| retries | number | 3 | Number of retry attempts |
| retryDelay | number | 1000 | Base delay between retries |
| cache.enabled | boolean | false | Enable query caching |
| cache.ttl | number | 60000 | Cache TTL in milliseconds |
| cache.maxSize | number | 1000 | Maximum cached entries |

The Protocol

The Orthanc Protocol is the standard interface for agent memory. Any backend that implements the protocol works with this SDK.

See the full protocol documentation in docs/protocol.md.

Request format

{
  "userId": "user-123",
  "messages": [{ "role": "user", "content": "What do I like?" }],
  "options": {
    "matchThreshold": 0.5,
    "matchCount": 5
  }
}

Response format

{
  "memories": ["User likes spicy food", "User prefers TypeScript"],
  "scores": [0.92, 0.87],
  "count": 2,
  "queryType": "vector_search",
  "latency_ms": 145
}
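
For reference, the same query can be issued as a raw protocol call without the SDK. This sketch mirrors the request and response formats above; the bearer-token Authorization header is an assumption about how the hosted backend authenticates.

// Call the query endpoint directly with fetch.
const response = await fetch("https://api.orthanc.ai/api/context", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer your-api-key",
  },
  body: JSON.stringify({
    userId: "user-123",
    messages: [{ role: "user", content: "What do I like?" }],
    options: { matchThreshold: 0.5, matchCount: 5 },
  }),
});

const data = await response.json();
console.log(data.memories, data.queryType, data.latency_ms);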

Examples

The examples folder contains working code for common use cases:

  • basic.ts: Simple query and response
  • error-handling.ts: Handling all error types
  • sync-memories.ts: Ingesting memories from different sources
  • batch-operations.ts: Bulk create, update, delete
  • export-import.ts: Data portability
  • webhooks.ts: Real-time notifications
  • local-development.ts: Testing without an API
  • caching.ts: Query caching
  • langchain-integration.ts: Using with LangChain

Building your own backend

The protocol is open. You can build your own memory backend that speaks the same language.

Your backend needs to implement these endpoints:

  • POST /api/context: Query memory
  • POST /api/sync: Ingest memory
  • POST /api/memories/batch: Batch operations
  • GET /api/memories/export: Export data
  • POST /api/webhooks: Create webhook
  • GET /api/health: Health check

See the protocol documentation for request and response formats.
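
As a starting point, here is a bare-bones sketch of a backend that answers the health check and the query endpoint using Node's built-in http module. The stubbed response follows the response format shown above; the retrieval logic, and the remaining endpoints, are left to your implementation.

import { createServer } from "node:http";

// Minimal protocol-style backend covering two endpoints.
const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/api/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }

  if (req.method === "POST" && req.url === "/api/context") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const { userId } = JSON.parse(body);
      // Replace this stub with your own memory retrieval.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({
        memories: [`No memories stored yet for ${userId}`],
        scores: [0],
        count: 1,
        queryType: "vector_search",
        latency_ms: 0,
      }));
    });
    return;
  }

  res.writeHead(404, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ error: "Not found" }));
});

server.listen(3000);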

Why this matters

Memory is hard. Every project does it differently. Some use databases, some use vectors, some use graphs. The Orthanc Protocol says: here's the interface everyone should use.

Once your agent uses this SDK, switching between a local memory system and a cloud service is literally a one-line code change.

This is the whole point. You build once, run anywhere.
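
Concretely, the switch is just a different constructor; everything downstream stays the same.

import { OrthancsClient, LocalClient } from "@orthanc-protocol/client";

// Local in-memory store during development...
const client = new LocalClient();

// ...or the hosted service in production: swap the constructor, nothing else.
// const client = new OrthancsClient({
//   endpoint: "https://api.orthanc.ai",
//   apiKey: "your-api-key",
// });

const result = await client.query("user-123", "What do I like?");
console.log(result.memories);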

License

Apache 2.0

Questions

Open an issue on GitHub or reach out on Discord.