ai-agent-sdk-orchestrator v1.1.4
A comprehensive TypeScript SDK for building robust AI agent workflows with multi-model support, logging, and orchestration capabilities
AI Agent SDK Orchestrator
TypeScript SDK and CLI for building robust AI agent workflows with OpenRouter.
Powered by OpenRouter
This project is powered by OpenRouter for model access and routing. You can create a free account and get an API key from OpenRouter. OpenRouter provides free daily credits (typically up to 50 credits/day) which are sufficient to run the included examples.
Guides
- Getting Started: docs/getting-started.md
Running the Examples
All examples load environment variables from .env automatically via the npm scripts.
- Create .env in the project root:

OPENROUTER_API_KEY=your_openrouter_api_key

If you don't have an API key yet, sign up at OpenRouter to obtain one. Accounts include free daily credits (usually 50/day) for testing.
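Before running anything, you can confirm the key is actually being picked up. Here is a minimal sketch (not part of the package) using dotenv, the same loader the programmatic example below uses:

import { config } from "dotenv";

// Load variables from .env in the project root
config();

// Fail fast with a clear message if the key is missing
if (!process.env.OPENROUTER_API_KEY) {
  throw new Error("OPENROUTER_API_KEY is not set. Add it to .env before running the examples.");
}

console.log("OpenRouter API key loaded.");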
Programmatic usage
/* eslint-disable @typescript-eslint/no-unused-vars */
/* eslint-env node */
/* global console, process, require, module */
import { config } from "dotenv";
import { AgentOrchestrator, Agent, Workflow } from "ai-agent-sdk-orchestrator";
import chalk from "chalk";
// Load environment variables from .env file
config();
// --- Main Application Logic ---
async function main() {
  console.log(chalk.cyan("🚀 Initializing Simplified AI Workflow Orchestrator"));
  console.log(chalk.gray("=".repeat(60)));

  // --- 1. CREATE THE ORCHESTRATOR ---
  const orchestrator = new AgentOrchestrator({ logLevel: "info" });

  // --- 2. DEFINE ORIGINAL AGENTS ---
  const creativeWriterAgent = new Agent({
    id: "creative-writer",
    name: "Creative Story Writer",
    model: {
      provider: "openrouter",
      model: "openai/gpt-4o",
      apiKey: process.env.OPENROUTER_API_KEY!,
      fallbackModels: ["mistralai/mistral-7b-instruct:free"],
    },
    systemPrompt: "You are a world-class short story author. Write a compelling and imaginative story based on the user's topic. The story should have a clear beginning, middle, and end.",
    temperature: 0.8,
  });

  const summarizerAgent = new Agent({
    id: "summarizer",
    name: "Story Summarizer",
    model: {
      provider: "openrouter",
      model: "meta-llama/llama-3.1-8b-instruct",
      apiKey: process.env.OPENROUTER_API_KEY!,
      fallbackModels: ["mistralai/mistral-7b-instruct:free"],
    },
    systemPrompt: "You are a text summarization expert. Take the provided story and create a concise, one-paragraph summary.",
    temperature: 0.2,
  });

  const sentimentAnalyzerAgent = new Agent({
    id: "sentiment-analyzer",
    name: "Sentiment Analyzer",
    model: {
      provider: "openrouter",
      model: "google/gemma-7b-it:free",
      apiKey: process.env.OPENROUTER_API_KEY!,
    },
    systemPrompt: "Analyze the sentiment of the provided story. Respond with only a single word: Positive, Negative, or Neutral.",
    temperature: 0.1,
  });

  // --- 3. REGISTER AGENTS ---
  console.log("Registering agents...");
  orchestrator.registerAgent(creativeWriterAgent);
  orchestrator.registerAgent(summarizerAgent);
  orchestrator.registerAgent(sentimentAnalyzerAgent);
  console.log("Agents registered successfully.");

  // --- 4. CREATE THE WORKFLOW (SIMPLIFIED) ---
  const workflow = new Workflow({
    id: "creative-analysis-workflow",
    name: "Creative Writing and Analysis Workflow",
    description: "Generates a story, then summarizes it and analyzes its sentiment.",
    steps: [
      {
        id: "generate_story",
        name: "Generate Story",
        type: "agent",
        agentId: "creative-writer",
        // No 'params' needed; it will receive the initial input from orchestrator.execute()
      },
      {
        id: "summarize_story",
        name: "Summarize Story",
        type: "agent",
        agentId: "summarizer",
        // No 'params' needed; it will automatically receive the output from the previous step
      },
      {
        id: "analyze_sentiment",
        name: "Analyze Sentiment",
        type: "agent",
        agentId: "sentiment-analyzer",
        // This step is tricky without explicit input mapping. It will likely receive
        // the output from 'summarize_story'. To analyze the ORIGINAL story, a more
        // complex workflow structure would be needed if the package supports it.
        // For now, we will let it analyze the summary.
      },
    ],
  });

  orchestrator.registerWorkflow(workflow);
  console.log("Workflow registered successfully.");

  // --- 5. EXECUTE THE WORKFLOW ---
  const topic = "An astronaut who finds a mysterious, glowing plant on Mars.";
  console.log(chalk.blue(`\nProcessing topic through the creative pipeline...`));
  console.log(chalk.bold("Initial Topic:"), topic);

  const startTime = Date.now();

  try {
    // The 'topic' object key must match the expected input of the first agent.
    // Let's assume the agent expects a 'message' property.
    const result = await orchestrator.execute("creative-analysis-workflow", {
      message: topic,
    });

    const totalTime = Date.now() - startTime;

    console.log("\n" + chalk.green("📊 Final Results:"));
    console.log(chalk.gray("─".repeat(60)));
    console.log("\n" + chalk.yellow("📖 Generated Story:"));
    console.log(result.variables.generate_story);
    console.log("\n" + chalk.yellow("📝 Summary of Story:"));
    console.log(result.variables.summarize_story);
    console.log("\n" + chalk.yellow("🎭 Sentiment of Summary:"));
    console.log(result.variables.analyze_sentiment);

    console.log("\n" + chalk.green("📈 Execution Metrics:"));
    console.log(chalk.gray("─".repeat(30)));
    console.log(chalk.bold("Total time:"), totalTime, "ms");
    console.log(chalk.bold("Steps executed:"), result.history.length);
    result.history.forEach((step, i) => {
      console.log(`${i + 1}. ${step.stepId}: ${step.duration}ms`);
    });
  } catch (error) {
    console.error("\n" + chalk.red("An error occurred during workflow execution:"), error);
  } finally {
    await orchestrator.shutdown();
    console.log(chalk.cyan("\nOrchestrator shut down."));
  }
}
// Run the main function
main().catch(console.error);

CLI Usage
The package ships with a CLI, ai-agent.
Install globally (optional):
npm i -g ai-agent-sdk-orchestrator

Or use npx locally:

npx ai-agent-sdk-orchestrator@latest --help

Initialize a project (use the project subcommand):
# via npx
npx ai-agent-sdk-orchestrator project init my-ai-project
cd my-ai-project
# or if installed globally
ai-agent project init my-ai-project

Notes:
- The positional value my-ai-project is used as the default for --name during the interactive prompt. Press Enter to accept it.
Open common resources quickly:
# Open docs or repo in your default browser
ai-agent open --docs
ai-agent open --repo
# Open project folders in your OS file manager
ai-agent open --project
ai-agent open --agents
ai-agent open --workflows
ai-agent open --plugins
ai-agent open --logs
# Open any URL or path
ai-agent open --url https://example.com
ai-agent open --path C:\\some\\folder

Authenticate providers (set API keys):
# Set OpenRouter API key into .env at project root
ai-agent auth set-openrouter sk-or-v1-...
# Show configured keys (masked)
ai-agent auth show

Workflows via CLI
Tip: create agents first so you can reference their IDs in steps:
ai-agent agent create

Create a simple workflow (single agent):
ai-agent workflow create --id chat --name "Chat" --agent assistant
# creates workflows/chat.json with a single agent step using agentId "assistant"
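The generated file isn't reproduced in this README; assuming it follows the same shape as the fire-coding.json example further down, workflows/chat.json would look roughly like this (the exact step id and name may differ):

{
  "id": "chat",
  "name": "Chat",
  "steps": [
    { "id": "chat", "type": "agent", "agentId": "assistant" }
  ]
}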
Interactive workflow builder (multiple steps):

ai-agent workflow create
# Prompts for: name/description/parallel → add steps repeatedly
# Step types: agent, tool, condition, loop, parallel
# For agent step you’ll be asked for agentId (e.g., "assistant")
# Saves to workflows/<generated-id>.json

List workflows:

ai-agent workflow list

Validate a workflow file:

ai-agent workflow validate chat

Run a workflow:
ai-agent run chat --input '{"message":"Hello"}'
# Expects ./workflows/chat.json and referenced agents to exist

JSON quoting on different shells:
# PowerShell
ai-agent run chat --input "{\"message\":\"Hello\"}"
# Windows CMD
ai-agent run chat --input "{""message"":""Hello""}"
# Git Bash / WSL / macOS/Linux shells
ai-agent run chat --input '{"message":"Hello"}'
# Or use a file
echo {"message":"Hello"} > input.json
ai-agent run chat --file input.jsonAgents and workflows:
# Create an agent interactively
ai-agent agent create
# List agents
ai-agent agent list
# Show an agent's details (ID, model, etc.)
ai-agent agent show <agentId>
# Run a workflow (expects ./workflows/<id>.json)
ai-agent run <workflowId> --input '{"message":"Hello"}' # see JSON quoting notes below
# Show a workflow's details (ID, name, steps)
ai-agent workflow show <workflowId>

Examples for a workflow id fire-coding:
# PowerShell
ai-agent run fire-coding --input "{\"message\":\"hello fire code\"}"
# Windows CMD
ai-agent run fire-coding --input "{""message"":""hello fire code""}"
# Git Bash / WSL / macOS/Linux shells
ai-agent run fire-coding --input '{"message":"hello fire code"}'
# Using a file (works everywhere)
echo {"message":"hello fire code"} > input.json
ai-agent run fire-coding --file input.jsonMulti-step workflow file example (workflows/fire-coding.json):
{
  "id": "fire-coding",
  "name": "Fire Coding",
  "steps": [
    { "id": "analyze", "type": "agent", "agentId": "fire-coder" },
    { "id": "plan", "type": "agent", "agentId": "fire-coder" },
    { "id": "implement", "type": "agent", "agentId": "fire-coder" }
  ]
}

Install and Upgrade
Install from npm registry:
# project-local (recommended)
npm i ai-agent-sdk-orchestrator@latest
# or install the CLI globally
npm i -g ai-agent-sdk-orchestrator@latest

Install directly from a GitHub branch/tag (useful for testing diffs with main):
# main branch
npm i github:Franc-dev/ai-agent-sdk-orchestrator-v2#main
# a feature branch or tag
npm i github:Franc-dev/ai-agent-sdk-orchestrator-v2#my-branch
npm i github:Franc-dev/ai-agent-sdk-orchestrator-v2#v1.0.3

If an upgrade seems stuck, clear the npm cache and reinstall:
npm cache verify
npm cache clean --force
rm -rf node_modules package-lock.json
npm i

The examples use OpenRouter and include fallbacks to free Mistral models:
mistralai/mistral-7b-instruct:free
mistralai/mistral-small-3.2-24b-instruct:free
mistralai/mistral-small-3.1-24b-instruct:free
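To stay entirely on the free tier, you can pick one of these as the primary model and another as a fallback, using the same Agent config shape as in the programmatic example above (the agent id, name, prompt, and temperature below are only illustrative):

import { Agent } from "ai-agent-sdk-orchestrator";

const freeAgent = new Agent({
  id: "free-assistant",
  name: "Free Assistant",
  model: {
    provider: "openrouter",
    model: "mistralai/mistral-small-3.2-24b-instruct:free",
    apiKey: process.env.OPENROUTER_API_KEY!,
    fallbackModels: ["mistralai/mistral-7b-instruct:free"],
  },
  systemPrompt: "You are a helpful assistant.",
  temperature: 0.7,
});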
Example Output Screenshots

Screenshots of typical outputs are available under public/examples/ in the repository.
