# orc8
Dynamic orchestration for A2A agents and MCP tools.
## Installation
```bash
npm install orc8 @modelcontextprotocol/sdk openai @a2a-js/sdk
```

## Quick Start

```ts
import { orc8 } from "orc8";
import { openaiProvider } from "orc8/openai";

const orchestrator = orc8.create({
  modelId: "gpt-4o",
  provider: openaiProvider({ apiKey: process.env.OPENAI_API_KEY }),
});

// Add MCP tools
orchestrator.add({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
});

// Connect and get a response
const response = await orchestrator.connect("List files in /tmp");
console.log(response);

// Clean up
await orchestrator.stop();
```

## Modules
| Module | Description |
| ------ | ----------- |
| Relay  | Discover agents, relay messages, send tasks, and search. |
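As a purely illustrative sketch of a relay flow (the import path, `Relay`, `discover`, and `sendTask` below are assumed names for illustration, not confirmed orc8 API):

```ts
// Hypothetical sketch only: "orc8/relay", Relay, discover(), and sendTask()
// are assumed names and may not match the actual orc8 API.
import { Relay } from "orc8/relay";

const relay = new Relay();
const agents = await relay.discover();                    // find available A2A agents
await relay.sendTask(agents[0], "Summarize open issues"); // hand a task to one of them
```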
## Adding Agents

```ts
orchestrator.add({
  engine: async function* (context) {
    yield {
      kind: "status-update",
      status: {
        state: "completed",
        message: {
          kind: "message",
          role: "agent",
          parts: [
            {
              kind: "text",
              text: `Hello from agent!`,
            },
          ],
          messageId: "msg-1",
        },
      },
      taskId: context.taskId,
      contextId: context.contextId,
      final: true,
    };
  },
  agentCard: {
    name: "EchoAgent",
    description: "Echoes back every request",
  },
});

const result = await orchestrator.connect("Say hello");
```

## Events

```ts
orchestrator.events.on("update", (data) => {
console.log("Update:", data);
});
orchestrator.events.on("error", (error, task) => {
console.error(`Error in ${task.id}:`, error);
});Expose as an A2A Agent
const agent = orchestrator.agent;
await agent.sendMessage("Hello, World!");
```

## OpenAI Provider
Use OpenAI (or any OpenAI-compatible API) as your LLM backend:
```ts
import { orc8 } from "orc8";
import { openaiProvider } from "orc8/openai";

const orchestrator = orc8.create({
  modelId: "gpt-4o",
  provider: openaiProvider({ apiKey: process.env.OPENAI_API_KEY }),
});

// Add tools and agents as usual
orchestrator.add({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
});

const response = await orchestrator.connect("List files in /tmp");
```

Works with any OpenAI-compatible API by setting the base URL:

```ts
const provider = openaiProvider({
  apiKey: process.env.API_KEY,
  baseURL: "https://api.openrouter.ai/v1", // or any compatible endpoint
});
```

## Custom Provider
Bring your own LLM backend:
```ts
import { orc8, type APIProvider } from "orc8";

// `myLLM` stands in for your own LLM client.
const provider: APIProvider = async (request, signal) => {
  const response = await myLLM.chat(request.messages, { signal });
  return {
    agentResponse: response.content,
    timestamp: new Date().toISOString(),
    options: {
      tools: { requests: response.toolCalls ?? [] },
      agents: { requests: response.agentCalls ?? [] },
    },
  };
};

const orchestrator = orc8.create({ modelId: "my-model", provider });
```
## License

Apache-2.0
