@crystralai/sdk


TypeScript SDK for Crystral — a local-first AI agent framework that lets you define agents in YAML, connect them to any LLM provider, equip them with tools, and chat with them in code.

Key differentiators:

  • File-based agents — define agents in YAML, not code
  • Provider-agnostic — OpenAI, Anthropic, Groq, Google, Together AI out of the box
  • Persistent sessions — SQLite-backed multi-turn conversations that survive restarts
  • Built-in RAG — attach a document collection to any agent with two YAML fields
  • Multi-agent workflows — orchestrate multiple agents with YAML-defined workflows
  • Agent delegation — agents can call other agents as tools
  • MCP client — connect agents to MCP servers for dynamic tool discovery
  • Full TypeScript — comprehensive types and TSDoc on every export

Table of Contents

  1. Installation
  2. Prerequisites
  3. Quick Start
  4. Core Concepts
  5. API Reference
  6. RunOptions
  7. RunResult
  8. Sessions
  9. Streaming
  10. Workflows
  11. Agent Delegation
  12. Inference Logs
  13. Supported Providers
  14. Error Handling
  15. Agent YAML Reference
  16. Guides
  17. License

Installation

# npm
npm install @crystralai/sdk

# pnpm
pnpm add @crystralai/sdk

# yarn
yarn add @crystralai/sdk

Prerequisites

  • Node.js 18+ (ESM and CommonJS both supported)
  • An agents/ directory in your project root containing agent YAML files
  • At least one provider API key (see Supported Providers)

my-project/
├── agents/
│   └── assistant.yaml   ← your agent definition
├── .env                 ← OPENAI_API_KEY=sk-...
└── index.ts

Quick Start

import { Crystral } from '@crystralai/sdk';

const client = new Crystral();
const result = await client.run('assistant', 'What is the capital of France?');
console.log(result.content); // "Paris"

agents/assistant.yaml:

version: "1"
name: assistant
provider: openai
model: gpt-4o-mini
system_prompt: You are a helpful assistant.

Core Concepts

Crystral follows a simple layered model:

Crystral (client)
  ├── loadAgent(name) → Agent (instance)
  │     └── run(message, options) → RunResult
  └── loadWorkflow(name) → Workflow (instance)
        └── run(task, options) → SDKWorkflowRunResult

  1. Crystral — the client; reads config from disk, manages the SQLite store
  2. Agent — a configured agent loaded from YAML; holds conversation state in memory
  3. Workflow — a multi-agent workflow loaded from YAML; orchestrates multiple agents
  4. RunResult — the structured response from a single turn, including session ID, token usage, tool calls, and RAG context

API Reference

Crystral Client

| Method | Signature | Description |
|--------|-----------|-------------|
| constructor | new Crystral(options?: CrystralOptions) | Create a client. options.cwd defaults to process.cwd(). |
| loadAgent | (name: string) → Agent | Load an agent by name from agents/&lt;name&gt;.yaml. |
| run | (name: string, message: string, options?: RunOptions) → Promise&lt;RunResult&gt; | One-shot: load agent and run in a single call. |
| loadWorkflow | (name: string) → Workflow | Load a workflow by name from workflows/&lt;name&gt;.yaml. |
| runWorkflow | (name: string, task: string, options?: SDKWorkflowRunOptions) → Promise&lt;SDKWorkflowRunResult&gt; | One-shot: load workflow and run in a single call. |
| getLogs | (filter?: GetLogsFilter) → InferenceLog[] | Query persisted inference logs from SQLite. |

Agent Instance

| Member | Signature | Description |
|--------|-----------|-------------|
| name | string (getter) | Agent name from YAML. |
| provider | string (getter) | LLM provider (e.g. "openai"). |
| model | string (getter) | Model identifier (e.g. "gpt-4o"). |
| run | (message: string, options?: RunOptions) → Promise&lt;RunResult&gt; | Send a message, get a response. |
| getHistory | () → Message[] | In-memory conversation history for this agent. |
| clearSession | () → void | Reset in-memory history and start a new session. |
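The history-related members behave as sketched by this toy stand-in. This mock only mimics the documented semantics for illustration; the real Agent is obtained via client.loadAgent(name) and calls an LLM provider inside run:

```typescript
// Toy stand-in that mimics the documented Agent history semantics.
// Illustrative only — not the SDK implementation.
type Message = { role: 'user' | 'assistant'; content: string };

class MockAgent {
  private history: Message[] = [];

  // Matches the documented signature: send a message, get a response.
  async run(message: string): Promise<{ content: string }> {
    this.history.push({ role: 'user', content: message });
    const reply = `echo: ${message}`; // a real Agent calls the LLM provider here
    this.history.push({ role: 'assistant', content: reply });
    return { content: reply };
  }

  // In-memory conversation history for this agent.
  getHistory(): Message[] {
    return [...this.history];
  }

  // Reset in-memory history and start a new session.
  clearSession(): void {
    this.history = [];
  }
}

(async () => {
  const agent = new MockAgent();
  await agent.run('Hello');
  console.log(agent.getHistory().length); // 2 — one user turn, one assistant turn
  agent.clearSession();
  console.log(agent.getHistory().length); // 0
})();
```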


RunOptions

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| sessionId | string | auto | Resume an existing session. Omit to start a new one. |
| variables | Record&lt;string, string&gt; | {} | Key/value pairs substituted in tool URL/body templates. |
| maxToolIterations | number | 10 | Maximum tool-call cycles per run to prevent infinite loops. |
| stream | boolean | false | Deliver tokens via onToken as the model generates them. |
| onToken | (token: string) → void | — | Streaming callback; called once per token when stream: true. |
| onToolCall | (name, args) → void | — | Called before each tool is executed. |
| onToolResult | (name, result) → void | — | Called after each tool finishes. result.success is false on error. |
| onAgentDelegation | (parent, target, task) → void | — | Called when an agent delegates to another agent. |
| onAgentDelegationResult | (parent, target, result, success) → void | — | Called when a delegation completes. |
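As a sketch, a populated options object could look like the following. The RunOptions type below is a partial, hand-written transcription of the table, for illustration only; in real code you would use the type exported by @crystralai/sdk:

```typescript
// Partial, hand-written transcription of the RunOptions shape from the table
// above — illustrative only; use the SDK's exported type in real code.
type RunOptions = {
  sessionId?: string;
  variables?: Record<string, string>;
  maxToolIterations?: number;
  stream?: boolean;
  onToken?: (token: string) => void;
  onToolCall?: (name: string, args: unknown) => void;
  onToolResult?: (name: string, result: { success: boolean }) => void;
};

const tokens: string[] = [];

const options: RunOptions = {
  variables: { city: 'Paris' },   // substituted into tool URL/body templates
  maxToolIterations: 5,           // cap tool-call cycles below the default of 10
  stream: true,
  onToken: (t) => tokens.push(t), // collect streamed tokens instead of printing
  onToolCall: (name) => console.log(`calling ${name}`),
  onToolResult: (name, r) => console.log(`${name}: ${r.success ? 'ok' : 'error'}`),
};
```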


RunResult

| Field | Type | Description |
|-------|------|-------------|
| content | string | The agent's final text response. |
| sessionId | string | Pass to RunOptions.sessionId to continue the conversation. |
| messages | Message[] | Full conversation history including this turn. |
| toolCalls | Array&lt;{name, args, result}&gt; | Tool invocations made during this run. |
| ragContext | string \| undefined | RAG context injected into the prompt (if any). |
| usage.input | number | Prompt tokens consumed. |
| usage.output | number | Completion tokens generated. |
| usage.total | number | Sum of input and output tokens. |
| durationMs | number | Total wall-clock time for this run in milliseconds. |
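For illustration, a result can be inspected like this. The literal below is a hand-written sample that follows the table's shape, with made-up values; it is not real SDK output:

```typescript
// Hand-written sample following the RunResult shape above — all values are
// made up for illustration; a real result comes from agent.run()/client.run().
const result = {
  content: 'Paris',
  sessionId: 'sess_123',
  toolCalls: [] as Array<{ name: string; args: unknown; result: unknown }>,
  usage: { input: 120, output: 8, total: 128 }, // total = input + output
  durationMs: 950,
};

console.log(result.content);
console.log(`${result.usage.total} tokens in ${result.durationMs}ms`);
```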


Sessions

Sessions are persisted to a local SQLite database and survive process restarts. Pass sessionId from one result into the next call to continue the conversation.

const client = new Crystral();

// Turn 1 — new session created automatically
const r1 = await client.run('support-bot', 'My order arrived damaged.');
console.log('Session:', r1.sessionId);

// Turn 2 — continue the same session
const r2 = await client.run('support-bot', 'Order #98765', {
  sessionId: r1.sessionId,
});

// Turn 3
const r3 = await client.run('support-bot', 'Yes, please proceed with the replacement.', {
  sessionId: r1.sessionId,
});

See docs/guides/sessions.md for the full guide.


Streaming

Enable token-by-token streaming with stream: true and the onToken callback:

const result = await client.run('assistant', 'Write a short poem about the ocean.', {
  stream: true,
  onToken: (token) => process.stdout.write(token),
  onToolCall: (name, args) => console.error(`\n[tool] ${name}(${JSON.stringify(args)})`),
  onToolResult: (name, res) => console.error(`[tool] ${name} → ${res.success ? 'ok' : 'error'}`),
});

process.stdout.write('\n');
console.log('Tokens used:', result.usage.total);

See docs/guides/streaming.md for details.


Workflows

Workflows orchestrate multiple agents to accomplish complex tasks. Define a workflow in YAML and run it with the SDK:

const client = new Crystral();

// Load and run a workflow
const workflow = client.loadWorkflow('content-pipeline');
const result = await workflow.run('Write an article about renewable energy');

console.log(result.content);        // Final synthesized output
console.log(result.agentResults);   // Per-agent call counts
console.log(result.usage.total);    // Total tokens across all agents
console.log(result.durationMs);     // Total execution time

One-shot convenience

const result = await client.runWorkflow('content-pipeline', 'Write about AI');

Workflow callbacks

Monitor agent delegation in real time:

const result = await workflow.run('Research and write about quantum computing', {
  onToken: (token) => process.stdout.write(token),
  onAgentDelegation: (parent, target, task) => {
    console.log(`\n[delegation] ${parent} → ${target}: ${task}`);
  },
  onAgentDelegationResult: (parent, target, result, success) => {
    console.log(`[result] ${target}: ${success ? 'ok' : 'failed'}`);
  },
});

Workflow YAML

# workflows/content-pipeline.yaml
version: 1
name: content-pipeline
description: Research, analyze, and produce content

orchestrator:
  provider: openai
  model: gpt-4o
  system_prompt: |
    You orchestrate content production.
  strategy: auto
  max_iterations: 20
  temperature: 0.7

agents:
  - name: researcher
    agent: research-agent
    description: Gathers information from the web
  - name: writer
    agent: writing-agent
    description: Writes polished final content

context:
  shared_memory: true
  max_context_tokens: 8000

SDKWorkflowRunResult

| Field | Type | Description |
|-------|------|-------------|
| content | string | Final response from the orchestrator. |
| sessionId | string | Session ID of the orchestrator. |
| agentResults | Array&lt;{name, calls, lastResult?}&gt; | Per-agent call statistics. |
| usage.input | number | Total input tokens across all agents. |
| usage.output | number | Total output tokens across all agents. |
| usage.total | number | Sum of input and output. |
| durationMs | number | Total wall-clock execution time. |


Agent Delegation

Agents can delegate tasks to other agents using the agent tool type. The LLM sees delegations as regular tool calls — it decides when to delegate based on the tool description.

Agent tool YAML

# tools/delegate-research.yaml
version: 1
name: delegate-research
description: Delegates research tasks to the research specialist
type: agent
agent_name: researcher
pass_context: true
timeout_ms: 120000
max_iterations: 10
parameters:
  - name: task
    type: string
    required: true
    description: The research task to perform

Monitoring delegations

const result = await client.run('orchestrator', 'Analyze this dataset', {
  onAgentDelegation: (parent, target, task) => {
    console.log(`${parent} delegating to ${target}: ${task}`);
  },
  onAgentDelegationResult: (parent, target, result, success) => {
    console.log(`${target} returned (${success ? 'success' : 'failure'})`);
  },
});

Circular delegation protection

Crystral tracks the agent call stack and throws CircularDelegationError if a delegation would create a cycle (e.g. A → B → A):

import { CircularDelegationError } from '@crystralai/sdk';

try {
  await client.run('agent-a', 'Do something');
} catch (err) {
  if (err instanceof CircularDelegationError) {
    console.error(`Circular call: ${err.callStack.join(' → ')}`);
  }
}

Inference Logs

Every agent run is automatically logged to a local SQLite database. Retrieve logs with getLogs():

// All logs
const allLogs = client.getLogs();

// Filter by agent, time window, and count
const recentLogs = client.getLogs({
  agentName: 'support-bot',
  since: new Date(Date.now() - 24 * 60 * 60 * 1000), // last 24 h
  limit: 50,
});

recentLogs.forEach(log => {
  console.log(`${log.agentName} | ${log.durationMs}ms | ${log.usage?.totalTokens} tokens`);
});

Supported Providers

| Provider | Value | Default Model | Environment Variable |
|----------|-------|---------------|----------------------|
| OpenAI | openai | gpt-4o | OPENAI_API_KEY |
| Anthropic | anthropic | claude-3-5-sonnet-20241022 | ANTHROPIC_API_KEY |
| Groq | groq | llama-3.3-70b-versatile | GROQ_API_KEY |
| Google | google | gemini-1.5-pro | GOOGLE_API_KEY |
| Together AI | together | meta-llama/Llama-3.3-70B-Instruct-Turbo | TOGETHER_API_KEY |

Set the relevant environment variable in your .env file or shell:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

See docs/guides/providers.md for credential resolution order and provider-specific notes.
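For example, switching the Quick Start agent to Anthropic only changes the provider and model fields (the model value here is the default from the table above):

```yaml
# agents/assistant.yaml — same agent, different provider
version: "1"
name: assistant
provider: anthropic
model: claude-3-5-sonnet-20241022
system_prompt: You are a helpful assistant.
```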


Error Handling

All SDK errors extend CrystralError and carry a machine-readable code property.

import {
  CrystralError,
  AgentNotFoundError,
  CredentialNotFoundError,
  ProviderError,
  RateLimitError,
  ToolExecutionError,
  ValidationError,
  CircularDelegationError,
} from '@crystralai/sdk';

try {
  const result = await client.run('my-agent', 'Hello');
} catch (err) {
  if (err instanceof AgentNotFoundError) {
    console.error('Agent YAML not found — check your agents/ directory.');
  } else if (err instanceof CredentialNotFoundError) {
    console.error(`Missing API key. Set ${err.envVarName} in your environment.`);
  } else if (err instanceof RateLimitError) {
    const wait = err.retryAfterMs ?? 5000;
    console.warn(`Rate limited. Retry after ${wait}ms.`);
  } else if (err instanceof ProviderError) {
    console.error(`LLM provider error [${err.code}]: ${err.message}`);
  } else if (err instanceof ToolExecutionError) {
    console.error(`Tool "${err.toolName}" failed: ${err.message}`);
  } else if (err instanceof CircularDelegationError) {
    console.error(`Circular delegation: ${err.callStack.join(' → ')}`);
  } else if (err instanceof ValidationError) {
    console.error(`Agent YAML invalid: ${err.message}`);
  } else if (err instanceof CrystralError) {
    console.error(`Crystral error [${err.code}]: ${err.message}`);
  } else {
    throw err; // re-throw unexpected errors
  }
}

See docs/guides/error-handling.md for the full error reference.


Agent YAML Reference

version: "1"          # Required. Must be "1".
name: my-agent        # Required. Must match the file name (without .yaml).
provider: openai      # Required. See Supported Providers table above.
model: gpt-4o         # Required. Provider-specific model identifier.

system_prompt: |      # Optional. Sets the agent's persona and instructions.
  You are a helpful assistant.

temperature: 0.7      # Optional. 0.0–2.0. Defaults to provider default.

tools:                # Optional. List of tool names (references tools/*.yaml).
  - search
  - delegate-research  # agent-type tools work the same way

rag:                  # Optional. Attach a RAG collection.
  collection: my-docs
  match_threshold: 0.75
  match_count: 5

mcp:                  # Optional. MCP servers for dynamic tool discovery.
  - transport: stdio
    name: filesystem
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
  - transport: sse
    name: github
    url: http://localhost:3000/mcp

Full field reference:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| version | string | Yes | Schema version. Always "1". |
| name | string | Yes | Agent identifier. Must match file name. |
| provider | string | Yes | LLM provider key (see providers table). |
| model | string | Yes | Model identifier for the chosen provider. |
| system_prompt | string | No | Instructions prepended to every conversation. |
| temperature | number | No | Sampling temperature (0.0–2.0). |
| tools | string[] | No | Tool names referencing tools/&lt;name&gt;.yaml (rest_api, javascript, web_search, agent). |
| rag.collection | string | No | Name of the RAG collection directory under rag/. |
| rag.match_threshold | number | No | Minimum similarity score (0.0–1.0). Default: 0.7. |
| rag.match_count | number | No | Maximum chunks to inject. Default: 5. |
| mcp | MCPServerConfig[] | No | MCP servers for dynamic tool discovery (stdio or SSE). |


Guides

Detailed how-to guides are in docs/guides/:

| Guide | Description |
|-------|-------------|
| Getting Started | Install the SDK, create your first agent, run your first query |
| Sessions | Multi-turn conversations, session persistence, forking |
| Streaming | Token streaming, tool lifecycle callbacks |
| Tools | Tool types (rest_api, javascript, web_search, agent), YAML reference |
| Workflows | Multi-agent orchestration, workflow YAML, delegation callbacks |
| RAG | Set up document retrieval for an agent |
| Error Handling | All error classes, codes, and recovery patterns |
| Providers | Credential setup, embedding providers, provider-specific notes |

The generated HTML API reference lives in docs/api/ (not committed). Regenerate it with:

pnpm run docs

License

MIT © Mayur Kakade