# @directive-run/ai

v1.4.0

AI guardrails and orchestration for Directive. Prompt injection, PII detection, cost tracking, multi-agent patterns.
AI agent orchestration with guardrails, cost tracking, and multi-agent coordination. Built on Directive's constraint-driven runtime.
- No SDK dependencies – pure `fetch` adapters for OpenAI, Anthropic, Ollama, and Gemini
- Guardrails – input, output, and tool call validation with retry support
- Multi-agent orchestration – parallel, sequential, and supervisor patterns
- Cost tracking – per-call token usage with pricing constants for every provider
- Streaming – async iterable streams with backpressure and streaming guardrails
- Provider adapters – swap providers by changing one import, not your codebase
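
Guardrails here are async predicates that return a `{ passed }` result, as shown in Quick Start below. A standalone sketch of that shape — plain TypeScript, no package APIs; the email regex is a toy illustration, not the package's PII detector:

```typescript
type GuardrailResult = { passed: boolean; reason?: string };
type Guardrail = (data: { input: string }) => Promise<GuardrailResult>;

// Reject over-long inputs – the same shape as the Quick Start guardrail.
const maxLength = (limit: number): Guardrail => async (data) =>
  data.input.length < limit
    ? { passed: true }
    : { passed: false, reason: `input exceeds ${limit} chars` };

// Toy PII check: a simplistic email regex, purely illustrative.
const noEmailPII: Guardrail = async (data) =>
  /[\w.+-]+@[\w-]+\.[\w.]+/.test(data.input)
    ? { passed: false, reason: "email address detected" }
    : { passed: true };

// Run all guardrails; pass only if every one passes.
async function runGuardrails(input: string, checks: Guardrail[]): Promise<boolean> {
  const results = await Promise.all(checks.map((g) => g({ input })));
  return results.every((r) => r.passed);
}

runGuardrails("Hello!", [maxLength(10_000), noEmailPII]).then((ok) =>
  console.log(ok)); // true
```

In the package, arrays of such predicates are passed as `guardrails.input` to `createAgentOrchestrator` (see Quick Start).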
## Install

```sh
npm install @directive-run/core @directive-run/ai
```

Provider adapters are subpath exports – no extra packages needed.
## Quick Start

```ts
import { createAgentOrchestrator } from "@directive-run/ai";
import { createOpenAIRunner } from "@directive-run/ai/openai";

const runner = createOpenAIRunner({ apiKey: process.env.OPENAI_API_KEY! });

const orchestrator = createAgentOrchestrator({
  runner,
  guardrails: {
    input: [async (data) => ({ passed: data.input.length < 10000 })],
  },
});

const result = await orchestrator.run(
  { name: "assistant", instructions: "You are a helpful assistant." },
  "Hello!",
);

console.log(result.output);
```

## Provider Adapters
Adapters are thin wrappers around each provider's HTTP API. No SDK dependencies – pure fetch.
| | OpenAI | Anthropic | Ollama | Gemini |
|---|--------|-----------|--------|--------|
| Import | @directive-run/ai/openai | @directive-run/ai/anthropic | @directive-run/ai/ollama | @directive-run/ai/gemini |
| Default model | gpt-4o | claude-sonnet-4-5-20250929 | llama3 | gemini-2.0-flash |
| API key required | Yes | Yes | No | Yes |
| Streaming runner | createOpenAIStreamingRunner | createAnthropicStreamingRunner | – | createGeminiStreamingRunner |
| Embedder | createOpenAIEmbedder | – | – | – |
| Pricing constants | OPENAI_PRICING | ANTHROPIC_PRICING | – | GEMINI_PRICING |
| Compatible APIs | Azure, Together, any OpenAI-compatible | – | – | – |
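
The pricing constants in the last row feed cost estimation (see Cost Tracking below). As a standalone sketch of the arithmetic — the prices here are placeholders, not the package's actual `OPENAI_PRICING` values:

```typescript
// Placeholder pricing table in USD per 1M tokens – illustrative values only;
// real numbers live in each adapter's pricing constants (e.g. OPENAI_PRICING).
const PRICING: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
};

// Cost = tokens × price per token.
function estimateCostUSD(tokens: number, pricePerMillion: number): number {
  return (tokens / 1_000_000) * pricePerMillion;
}

const usage = { inputTokens: 1_200, outputTokens: 300 };
const cost =
  estimateCostUSD(usage.inputTokens, PRICING["gpt-4o"].input) +
  estimateCostUSD(usage.outputTokens, PRICING["gpt-4o"].output);
console.log(cost.toFixed(4)); // "0.0060"
```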
## Cost Tracking
Every adapter returns tokenUsage with input/output breakdown:
```ts
import { estimateCost } from "@directive-run/ai";
import { createOpenAIRunner, OPENAI_PRICING } from "@directive-run/ai/openai";

const runner = createOpenAIRunner({ apiKey: process.env.OPENAI_API_KEY! });
const agent = { name: "assistant", instructions: "You are a helpful assistant." };
const result = await runner(agent, "Hello");

const { inputTokens, outputTokens } = result.tokenUsage!;
const cost =
  estimateCost(inputTokens, OPENAI_PRICING["gpt-4o"].input) +
  estimateCost(outputTokens, OPENAI_PRICING["gpt-4o"].output);
```

## Lifecycle Hooks
Attach hooks to any adapter for observability:
```ts
import { createAnthropicRunner } from "@directive-run/ai/anthropic";

const runner = createAnthropicRunner({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  hooks: {
    onBeforeCall: ({ agent, input }) => console.log(`Calling ${agent.name}`),
    onAfterCall: ({ durationMs, tokenUsage }) => {
      metrics.track("llm_call", { durationMs, ...tokenUsage });
    },
    onError: ({ error }) => Sentry.captureException(error),
  },
});
```

## Multi-Agent Orchestration
Coordinate multiple agents with built-in execution patterns:
```ts
import { createMultiAgentOrchestrator, parallel } from "@directive-run/ai";
import { createOpenAIRunner } from "@directive-run/ai/openai";

const runner = createOpenAIRunner({ apiKey: process.env.OPENAI_API_KEY! });

const researchAgent = { name: "researcher", instructions: "Research the topic thoroughly." };
const writerAgent = { name: "writer", instructions: "Write a clear summary." };

const orchestrator = createMultiAgentOrchestrator({
  runner,
  agents: {
    researcher: { agent: researchAgent, maxConcurrent: 3 },
    writer: { agent: writerAgent, maxConcurrent: 1 },
  },
  patterns: {
    researchAndWrite: parallel(
      ["researcher", "writer"],
      (results) => results.map((r) => r.output).join("\n\n"),
    ),
  },
});

// Run the pattern
const result = await orchestrator.runPattern("researchAndWrite", "Quantum computing basics");
```

## Subpath Exports
| Import | Purpose |
|--------|---------|
| @directive-run/ai | Orchestrator, guardrails, multi-agent, streaming, memory |
| @directive-run/ai/testing | Mock runners, test helpers |
| @directive-run/ai/openai | OpenAI / Azure / Together adapter |
| @directive-run/ai/anthropic | Anthropic Claude adapter |
| @directive-run/ai/ollama | Local Ollama inference adapter |
| @directive-run/ai/gemini | Google Gemini adapter |
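
The `parallel` pattern shown earlier fans one input out to several agents and merges the results with a combiner. A standalone sketch of that coordination — plain `Promise.all` with a toy runner, not the package's implementation:

```typescript
type Agent = { name: string; instructions: string };
type Runner = (agent: Agent, input: string) => Promise<{ output: string }>;

// Toy runner standing in for a provider adapter.
const echoRunner: Runner = async (agent, input) => ({
  output: `[${agent.name}] ${input}`,
});

// Sketch of a parallel pattern: run every agent on the same input,
// then merge outputs with a combiner – mirroring parallel(names, combine).
async function runParallel(
  runner: Runner,
  agents: Agent[],
  input: string,
  combine: (outputs: string[]) => string,
): Promise<string> {
  const results = await Promise.all(agents.map((a) => runner(a, input)));
  return combine(results.map((r) => r.output));
}

runParallel(
  echoRunner,
  [
    { name: "researcher", instructions: "Research the topic." },
    { name: "writer", instructions: "Write a summary." },
  ],
  "Quantum computing basics",
  (outputs) => outputs.join("\n\n"),
).then(console.log);
```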
## Testing
Mock runners for unit testing without real LLM calls:
```ts
import { createAgentOrchestrator } from "@directive-run/ai";
import { createMockAgentRunner } from "@directive-run/ai/testing";

const mock = createMockAgentRunner({
  responses: {
    assistant: { output: "This is a mock response." },
  },
});

const orchestrator = createAgentOrchestrator({ runner: mock.run });
const result = await orchestrator.run(
  { name: "assistant", instructions: "You are a helpful assistant." },
  "Hello!",
);
// result.output === "This is a mock response."
```

## Related Blog Posts
- Building AI Agents with Directive – orchestrating agents with approval flows, guardrails, and budget constraints
- Declarative AI Guardrails – why your agent framework needs a constraint layer
- Why AI Loves Directive – budget enforcement, PII redaction, tool control, and provider resilience
- Building an AI Docs Chatbot with Directive – RAG-backed chatbot with streaming, guardrails, and reactive state
## Documentation

## License

MIT
