@ai-agentree/sdk v0.1.0
@ai-agentree/sdk — TypeScript SDK
TypeScript SDK for AI Agentree decision tracing — auto-tracing, reasoning extraction, local mode, and export methods. Near-full parity with the Python SDK (see differences below).
Requirements
- Node.js >= 18 (uses built-in fetch)
- Zero runtime dependencies
Installation
npm install @ai-agentree/sdk

Or link locally during development:

cd sdk/typescript && npm install && npm run build

Quick Start — Local Mode (No API Key Needed)
import { LocalTracer, TracedAgent } from "@ai-agentree/sdk";
const tracer = new LocalTracer();
const client = tracer.getClient();
const agent = new TracedAgent(client, { workflowId: "claim_review", entityId: "CLM-4821" });
await agent.start();
const response = await agent.chat(myLlm, "Review this insurance claim for $15,000");
await agent.end();
// Export results
await agent.exportMarkdown("trace.md");
console.log(agent.stats());

Quick Start — Cloud Mode
import { AgentreeClient, TracedAgent } from "@ai-agentree/sdk";
const client = new AgentreeClient({
apiKey: "ask_...",
baseUrl: "https://your-tenant.argumentree.com",
tenantId: "your-tenant-id",
});
const agent = new TracedAgent(client, {
workflowId: "claim_review",
entityType: "claim",
entityId: "CLM-4821",
title: "Insurance Claim Review",
});
await agent.start();
const response = await agent.chat(myLlm, "Review this insurance claim");
await agent.end();

How It Works — 3 Integration Stages
The SDK supports three integration stages with increasing data quality:
Stage 1: Passive Tracing (TracedLLM / instrumentLlm)
No prompt changes. Wraps your LLM transparently. The SDK intercepts the response and extracts what it can using heuristics.
const traced = instrumentLlm(new OpenAI(), client, "review");
// Your prompts are unchanged — tracing is invisible
const response = await traced.chat.completions.create({ ... });

Data quality: Basic — bullet/list parsing, keyword-guessed categories, no relations or confidence.
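The kind of heuristics Stage 1 relies on can be sketched in a few lines. This is not the SDK's actual extractor — the keyword table and function names here are illustrative assumptions — but it shows how numbered or bulleted lines become steps and how a category can be keyword-guessed:

```typescript
// Illustrative sketch of Stage 1-style heuristics: split numbered/bulleted
// lines into steps and keyword-guess a category. Not the SDK's extractor.

const CATEGORY_KEYWORDS: Record<string, string[]> = {
  risk_assessment: ["risk"],
  financial_threshold: ["amount", "threshold", "cost"],
  customer_history: ["history", "customer"],
};

function guessCategory(text: string): string {
  const lower = text.toLowerCase();
  for (const [category, words] of Object.entries(CATEGORY_KEYWORDS)) {
    if (words.some((w) => lower.includes(w))) return category;
  }
  return "general";
}

function extractSteps(response: string): { title: string; category: string }[] {
  return response
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => /^(\d+[.)]|[-*])\s+/.test(line)) // numbered or bulleted lines only
    .map((line) => {
      const title = line.replace(/^(\d+[.)]|[-*])\s+/, "");
      return { title, category: guessCategory(title) };
    });
}

const steps = extractSteps(`
1. Checked the order amount ($500) against threshold
2. Assessed risk level
Decision: Approve
`);
console.log(steps.map((s) => s.category)); // [ 'financial_threshold', 'risk_assessment' ]
```

Because the categories are only guessed from keywords, this is exactly the "Basic" data quality tier: no confidence scores, no relations.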
Stage 2: Structured Prompts (TracedAgent, default)
Adds DECISION_SYSTEM_PROMPT that instructs the LLM to output structured JSON. This is the default.
const agent = new TracedAgent(client, {
workflowId: "review", entityId: "ORD-123",
// useStructuredPrompt: true ← default
});

Data quality: Rich — categories, confidence scores, supports/opposes relations, structured decisions.
Stage 3: Argument Tree Prompts (useArgumentPrompt)
Uses ARGUMENT_SYSTEM_PROMPT for full pro/con argument trees with hierarchy levels.
const agent = new TracedAgent(client, {
workflowId: "review", entityId: "ORD-123",
useArgumentPrompt: true, // overrides useStructuredPrompt
});

Data quality: Maximum — full argument hierarchy, typed relations, hierarchy levels, structured evidence.
Data Quality Summary
| | Stage 1 (Passive) | Stage 2 (Structured) | Stage 3 (Argument Tree) |
|---|---|---|---|
| Prompt changes | None | System prompt added | System prompt added |
| Steps | Bullet/list heuristics | LLM-structured JSON | LLM-structured JSON |
| Categories | Keyword-guessed | LLM-assigned | LLM-assigned |
| Confidence | Not available | Per-step scores | Per-step scores |
| Relations | Not available | Supports/opposes | Full hierarchy |
| Decision | Keyword scan | Structured | Structured |
Integration Methods
Method 1: TracedAgent (Recommended)
import { AgentreeClient, TracedAgent } from "@ai-agentree/sdk";
const client = new AgentreeClient({ apiKey: "...", baseUrl: "...", tenantId: "..." });
const agent = new TracedAgent(client, {
workflowId: "order_review",
entityType: "order",
entityId: "ORD-123",
// useStructuredPrompt: true, // default — prepends DECISION_SYSTEM_PROMPT
// useArgumentPrompt: false, // set true for full argument tree (Stage 3)
});
await agent.start();
const response = await agent.chat(llm, "Should we approve this $500 order?", {
context: { order_amount: 500, customer_type: "new" },
});
await agent.end();
// Access extracted reasoning
console.log(agent.extractedReasoning);

Method 2: instrumentLlm (One-Liner Wrapper)
import { AgentreeClient, instrumentLlm } from "@ai-agentree/sdk";
import OpenAI from "openai";
const client = new AgentreeClient({ apiKey: "...", baseUrl: "...", tenantId: "..." });
const traced = instrumentLlm(new OpenAI(), client, "review");
// Use exactly like normal OpenAI — tracing is automatic
const response = await traced.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Review order ORD-123" }],
_entity_id: "ORD-123",
});

Method 3: Manual Control
const trace = await client.startTrace({
agentId: "claims-processor",
workflowId: "claim_review",
entityType: "claim",
entityId: "CLM-4821",
});
await trace.addInput("claim_amount", 2340.0, { source: "database" });
await trace.addStep({
tempId: "s1",
title: "Check claim threshold",
category: "financial_threshold",
confidence: 0.95,
});
await trace.seal({ decisionId: "d1", action: "approve", confidence: 0.94 });

Export Methods
After tracing, export results in multiple formats:
const agent = new TracedAgent(client, { workflowId: "review", entityId: "X" });
await agent.start();
await agent.chat(llm, "Review this claim");
await agent.end();
agent.printRaw(); // Pretty-print raw LLM response
await agent.exportJson("trace.json"); // Formatted JSON
await agent.exportText("trace.txt"); // Human-readable plain text
await agent.exportMarkdown("trace.md"); // Markdown with headers
await agent.exportMermaid("trace.mmd"); // Mermaid diagram (steps + relations)
await agent.exportJsonl("traces.jsonl"); // Append as JSONL line
const stats = agent.stats();
// { word_count: 234, step_count: 5, relation_count: 3,
// input_count: 2, categories: ['cost_benefit', 'risk_assessment'],
//   has_decision: true }

Local-First Mode (LocalTracer)
Zero-config local tracing — console + JSONL file, no API needed:
import { LocalTracer } from "@ai-agentree/sdk";
const tracer = new LocalTracer();                                   // console + file
// const tracer = new LocalTracer({ console: false });              // file only
// const tracer = new LocalTracer({ filePath: "my-traces.jsonl" }); // custom path
const client = tracer.getClient();

Reasoning Extraction
Extract structured reasoning from any LLM output:
import { ReasoningExtractor } from "@ai-agentree/sdk";
const extractor = new ReasoningExtractor();
// Works with JSON
const extracted = extractor.extract('{"reasoning_steps": [...], "decision": {...}}');
// Also works with freeform text
const extracted2 = extractor.extract(`
1. Checked the order amount ($500) against threshold
2. Verified customer history - new customer
3. Assessed risk level
Decision: Approve with standard verification
`);
// Returns: { inputs: [...], steps: [...], relations: [...], decision: {...} }

Prompt Templates
Use built-in prompts for consistent structured output:
import {
DECISION_SYSTEM_PROMPT,
ARGUMENT_SYSTEM_PROMPT,
formatDecisionPrompt,
} from "@ai-agentree/sdk";
// System prompt for structured JSON reasoning (Stage 2)
const system = DECISION_SYSTEM_PROMPT;
// System prompt for full argument trees (Stage 3)
const argSystem = ARGUMENT_SYSTEM_PROMPT;
// Format user message with context
const userMessage = formatDecisionPrompt("Review this loan", { amount: 50000 });

Using with MCP (Model Context Protocol)
For Claude and other MCP-enabled agents, provide AI Agentree tools:
import { getMcpTools } from "@ai-agentree/sdk";
// Get tool definitions
const tools = getMcpTools();
// Returns: [agentree_start_trace, agentree_add_input, agentree_add_reasoning_step, agentree_seal_decision]
// Provide to your MCP-enabled agent — Claude will call these tools as it reasons

Using with Function Calling
For OpenAI/Anthropic function calling:
import { getOpenAiFunctionSchema, getAnthropicToolSchema } from "@ai-agentree/sdk";
// OpenAI function calling
const functions = [getOpenAiFunctionSchema()];
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [...],
functions,
function_call: { name: "submit_decision" },
});
// Anthropic tool use
const tools = [getAnthropicToolSchema()];
const response = await anthropic.messages.create({
model: "claude-3-opus-20240229",
messages: [...],
tools,
});

Transports
Four built-in transports:
| Transport | Description |
|-----------|-------------|
| HttpTransport | HTTP with retry/backoff (default) |
| FileTransport | JSONL to local file |
| ConsoleTransport | Pretty-prints to stdout |
| BufferedTransport | Wraps any transport; batches events and flushes periodically or at threshold |
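The batching idea behind BufferedTransport can be illustrated with a self-contained sketch. This is not the SDK's class — the interface and constructor signature here are assumptions for illustration — but it shows the wrap/queue/flush-at-threshold pattern:

```typescript
// Minimal illustration of a buffered transport: wrap an inner transport,
// queue events, and flush as one batch when a threshold is reached.
// Standalone sketch, not the SDK's BufferedTransport.

interface Transport {
  send(events: object[]): void;
}

// Test double that records each batch it receives.
class ArrayTransport implements Transport {
  sent: object[][] = [];
  send(events: object[]): void {
    this.sent.push(events);
  }
}

class BufferedTransportSketch implements Transport {
  private buffer: object[] = [];
  constructor(private inner: Transport, private threshold = 3) {}

  send(events: object[]): void {
    this.buffer.push(...events);
    if (this.buffer.length >= this.threshold) this.flush();
  }

  // Forward everything queued so far to the wrapped transport.
  flush(): void {
    if (this.buffer.length === 0) return;
    this.inner.send(this.buffer);
    this.buffer = [];
  }
}

const inner = new ArrayTransport();
const buffered = new BufferedTransportSketch(inner, 3);
buffered.send([{ type: "trace_started" }]);   // buffered
buffered.send([{ type: "step_added" }]);      // buffered
buffered.send([{ type: "decision_sealed" }]); // threshold hit -> one batched send
console.log(inner.sent.length); // 1
```

The real transport presumably also flushes on a timer ("periodically"), which this sketch omits for brevity.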
API Reference
AgentreeClient
| Method | Description |
|--------|-------------|
| startTrace(options) | Create a new decision trace |
| getTrace(traceId) | Get a trace by ID |
| listTraces(options?) | List traces with filtering |
| validateTrace(traceId) | Validate a trace |
| transformTrace(traceId) | Transform into Argumentree objects |
| getValidationStats() | Get validation statistics |
TracedAgent
Constructor options:
| Option | Default | Description |
|--------|---------|-------------|
| workflowId | — | Workflow identifier (required) |
| entityId | — | Entity ID (required) |
| entityType | — | Entity type (e.g., "order") |
| useStructuredPrompt | true | Prepend DECISION_SYSTEM_PROMPT (Stage 2) |
| useArgumentPrompt | false | Prepend ARGUMENT_SYSTEM_PROMPT instead (Stage 3) |
Methods:
| Method | Description |
|--------|-------------|
| start() | Start the trace |
| chat(llm, message, options?) | Send message to LLM with auto-tracing |
| end() | Seal and transform the trace |
| printRaw() | Print raw LLM response |
| exportJson(path) | Write formatted JSON |
| exportText(path) | Write plain text summary |
| exportMarkdown(path) | Write Markdown |
| exportMermaid(path) | Write Mermaid diagram |
| exportJsonl(path) | Append JSONL line |
| stats() | Return summary statistics |
Trace
| Method | Description |
|--------|-------------|
| addInput(key, value, options?) | Add an input snapshot |
| addStep(options) | Add a deliberation step |
| addRelation(parentId, childId, options?) | Link two steps |
| addPolicy(policyId, outcome, options?) | Add policy evaluation |
| setDiscussionData(options) | Set discussion metadata |
| seal(options) | Seal the decision (auto-validates; returns validation_status + quality_score) |
| abort(reason?) | Abort the trace |
| getStatus() | Get trace status |
| validate() | Manually revalidate (usually not needed — seal auto-validates) |
| transform() | Trigger transformation |
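Since addRelation and seal-after-sealing behavior don't appear in the earlier examples, here is a runnable sketch of the typical call order, using an in-memory stand-in so it runs without a server. The stand-in class and the "supports" relation type are illustrative assumptions; the real Trace sends each call to a transport instead of an array:

```typescript
// Standalone sketch of the manual-control call order on a Trace-like object.
type Op = { method: string; args: unknown[] };

class TraceSketch {
  ops: Op[] = [];
  sealed = false;

  private record(method: string, ...args: unknown[]): void {
    if (this.sealed) throw new Error("trace already sealed");
    this.ops.push({ method, args });
  }

  addInput(key: string, value: unknown): void { this.record("addInput", key, value); }
  addStep(options: { tempId: string; title: string }): void { this.record("addStep", options); }
  addRelation(parentId: string, childId: string, options?: { type?: string }): void {
    this.record("addRelation", parentId, childId, options);
  }
  seal(options: { decisionId: string; action: string }): void {
    this.record("seal", options);
    this.sealed = true; // no further events accepted after sealing
  }
}

const trace = new TraceSketch();
trace.addInput("claim_amount", 2340.0);
trace.addStep({ tempId: "s1", title: "Check claim threshold" });
trace.addStep({ tempId: "s2", title: "Review customer history" });
trace.addRelation("s1", "s2", { type: "supports" }); // link the two steps
trace.seal({ decisionId: "d1", action: "approve" });
console.log(trace.ops.map((op) => op.method));
// [ 'addInput', 'addStep', 'addStep', 'addRelation', 'seal' ]
```

The key ordering constraint the sketch encodes: steps must exist (by tempId) before relations reference them, and nothing may follow seal().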
Constants
- RECOMMENDED_CATEGORIES — 21 deliberation categories
- EVENT_TYPES — 18 event types
- SIGNIFICANCE_LEVELS — 5 significance levels
Differences from Python SDK
| Feature | TypeScript | Python |
|---------|-----------|--------|
| useArgumentPrompt (Stage 3) | Supported | Supported (use_argument_prompt) |
| @traced_decision decorator | Not available | Supported |
| Context manager (with) | Manual start()/end() | with TracedAgent(...) |
| STRUCTURED_OUTPUT_SUFFIX | Exported | Not in __all__ |
| Prompt overrides | Supported | Supported |
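Because the TypeScript SDK has no equivalent of Python's context manager, a try/finally block plays the same role as `with TracedAgent(...)`: end() runs even if the LLM call throws. The MockAgent and withAgent helper below are illustrative stand-ins, not SDK exports:

```typescript
// Emulating Python's `with TracedAgent(...)` via try/finally.
// MockAgent stands in for TracedAgent so the pattern is runnable here.

class MockAgent {
  started = false;
  ended = false;
  async start(): Promise<void> { this.started = true; }
  async end(): Promise<void> { this.ended = true; }
}

async function withAgent<T>(agent: MockAgent, body: () => Promise<T>): Promise<T> {
  await agent.start();
  try {
    return await body();
  } finally {
    await agent.end(); // runs on success and on error alike
  }
}

const agent = new MockAgent();
try {
  await withAgent(agent, async () => {
    throw new Error("LLM call failed");
  });
} catch {
  // the error still propagates, but the trace was ended first
}
console.log(agent.ended); // true
```

The same shape works with the real TracedAgent by swapping MockAgent for it, assuming start()/end() keep their documented signatures.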
