providerplaneai
v0.3.1
Multi-provider AI orchestration layer for Node.js applications.
ProviderPlaneAI
ProviderPlaneAI is a workflow-first AI orchestration framework for Node.js. It provides a provider-agnostic workflow layer above raw model SDKs:
- Provider-agnostic orchestration across OpenAI, Anthropic, and Gemini, with additional providers planned
- Workflow-first API with jobs available as the lower-level execution layer
- Multimodal workflows across text, audio, image, video, moderation, and embeddings
- Retry, fallback, persistence, and workflow-level observability
See providerplane.dev for guides, examples, configuration, changelog, and API reference. See providerplane.ai for the main project site.
Getting Started
Install
```shell
npm install providerplaneai
```

Runtime Requirements
- Node.js 20+
- TypeScript 5+
Configure Providers
ProviderPlaneAI loads configuration via node-config + dotenv.
Create config/default.json (or environment-specific config files) with a providerplane section containing appConfig and providers.
Minimal example:
```json
{
  "providerplane": {
    "appConfig": {
      "executionPolicy": {
        "providerChain": [
          { "providerType": "openai", "connectionName": "default" }
        ]
      }
    },
    "providers": {
      "openai": {
        "default": {
          "type": "openai",
          "apiKeyEnvVar": "OPENAI_API_KEY_1",
          "defaultModel": "gpt-5"
        }
      }
    }
  }
}
```

Minimal .env for the config above:
```shell
OPENAI_API_KEY_1=your_openai_api_key
```

For full multi-provider config and environment examples covering OpenAI, Gemini, Anthropic, and Mistral, see providerplane.dev.
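To sketch how retry and fallback connect to configuration, the same schema can be extrapolated to a two-provider fallback chain. This is an illustration only: the Anthropic connection fields and model name are assumptions modeled on the minimal OpenAI example above, not the authoritative schema (see providerplane.dev for that):

```json
{
  "providerplane": {
    "appConfig": {
      "executionPolicy": {
        "providerChain": [
          { "providerType": "openai", "connectionName": "default" },
          { "providerType": "anthropic", "connectionName": "default" }
        ]
      }
    },
    "providers": {
      "openai": {
        "default": {
          "type": "openai",
          "apiKeyEnvVar": "OPENAI_API_KEY_1",
          "defaultModel": "gpt-5"
        }
      },
      "anthropic": {
        "default": {
          "type": "anthropic",
          "apiKeyEnvVar": "ANTHROPIC_API_KEY_1",
          "defaultModel": "claude-sonnet-4-5"
        }
      }
    }
  }
}
```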
Quickstart

```ts
import {
  AIClient,
  MultiModalExecutionContext,
  Pipeline,
  WorkflowRunner
} from "providerplaneai";

const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

const pipeline = new Pipeline<{
  generatedText: string;
  transcriptText: string;
  audioArtifactId: string;
}>("readme-workflow-1", {});

// Typed step handles keep `source` and `after` references readable and safe
const generateText = pipeline.step("generateText");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");

// Build a workflow: chat -> tts -> transcribe
const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", {
    normalize: "text"
  })
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? ""),
    audioArtifactId: String((values.tts as any[])?.[0]?.id ?? "")
  }))
  .build();

// Run the workflow
const execution = await runner.run(workflow, ctx);
console.log("Output", execution.output);
```

```mermaid
graph TD
  n0["generateText"]
  n1["tts"]
  n2["transcribe"]
  n0 --> n1
  n1 --> n2
```

For most applications, this is the right abstraction level: the workflow layer via Pipeline plus WorkflowRunner.
Use direct jobs only when you need low-level control outside a workflow DAG, are integrating with an external scheduler, or are building custom orchestration on top of the library.
Built-In Providers
- OpenAI
- Anthropic
- Gemini
- Mistral
Providers listed in appConfig.executionPolicy.providerChain are initialized automatically when AIClient is constructed.
Workflow System
ProviderPlaneAI includes a DAG workflow engine for orchestrating multi-step AI workflows. Pipeline is the recommended authoring API. WorkflowBuilder remains available for advanced node-level control.
Workflow capabilities
- Deterministic DAG execution with explicit dependencies
- Parallel fan-out and fan-in
- Single-source and multi-source step inputs via `source`
- Conditional step execution via `when`
- Per-step retry and timeout policies
- Per-step provider and provider-chain overrides
- Streaming and non-streaming workflow nodes
- Nested workflows
- Export to JSON, Mermaid, DOT, and D3
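To make "deterministic DAG execution with explicit dependencies" concrete, here is a small self-contained sketch of the idea. It is independent of ProviderPlaneAI's internals (the `Step` shape and `runDag` helper are illustrative only): steps declare their dependencies, and each step runs only once every dependency has produced a result, in a deterministic order.

```typescript
// Illustrative only: deterministic DAG execution via explicit dependencies.
type Step = { id: string; deps: string[]; run: (inputs: string[]) => string };

function runDag(steps: Step[]): Map<string, string> {
  const results = new Map<string, string>();
  const pending = [...steps];
  while (pending.length > 0) {
    // Pick the first step whose dependencies are all satisfied (deterministic order)
    const i = pending.findIndex((s) => s.deps.every((d) => results.has(d)));
    if (i === -1) throw new Error("cycle or missing dependency");
    const step = pending.splice(i, 1)[0];
    results.set(step.id, step.run(step.deps.map((d) => results.get(d)!)));
  }
  return results;
}

// Mirrors the quickstart shape: chat -> tts -> transcribe
const out = runDag([
  { id: "generateText", deps: [], run: () => "quote" },
  { id: "tts", deps: ["generateText"], run: ([text]) => text + ".mp3" },
  { id: "transcribe", deps: ["tts"], run: ([audio]) => "transcript of " + audio }
]);
console.log(out.get("transcribe")); // "transcript of quote.mp3"
```

A step with two entries in `deps` models fan-in; two steps sharing the same dependency model fan-out.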
Core APIs
- `Pipeline` for most workflows
- `WorkflowRunner` for execution
- `WorkflowExporter` for visualization and export
- `WorkflowBuilder` for advanced custom graph construction
Pipeline DSL (recommended)
```ts
const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

const pipeline = new Pipeline<{
  generatedText: string;
  transcriptText: string;
  translationText: string;
  moderationFlagged: boolean;
}>("readme-workflow-2", {});

// Typed step handles keep `source` and `after` references readable and safe
const generateText = pipeline.step("generateText");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");
const translate = pipeline.step("translate");
const moderate = pipeline.step("moderate");

// Build a workflow: chat -> tts -> transcribe + translate -> moderate
const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", { normalize: "text" })
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .translate(translate.id, { targetLanguage: "english", responseFormat: "text" }, { source: tts, normalize: "text" })
  .moderate(moderate.id, {}, { source: [transcribe, translate] })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? ""),
    translationText: String(values.translate ?? ""),
    moderationFlagged: Boolean((values.moderate as any)?.[0]?.flagged ?? false)
  }))
  .build();

// Run the workflow
const execution = await runner.run(workflow, ctx);
console.log("Output", execution.output);
```

Notes:

- `source` binds step input to upstream output and can be either a single step or an array of steps.
- `after` adds ordering dependencies when you need sequencing without data binding.
- Typed step handles created with `pipeline.step("...")` reduce stringly-typed wiring mistakes.
- `custom(...)` and `customAfter(...)` are escape hatches for custom capability steps without dropping to `WorkflowBuilder`.
- If you find yourself reaching for `createCapabilityJob` in application code, you are usually below the preferred abstraction level.

```mermaid
graph TD
  n0["generateText"]
  n1["tts"]
  n2["transcribe"]
  n3["translate"]
  n4["moderate"]
  n0 --> n1
  n1 --> n2
  n1 --> n3
  n2 --> n4
  n3 --> n4
```

For the full Pipeline method reference and step-by-step DSL documentation, see providerplane.dev.
Built-in workflow-oriented capabilities
- `approvalGate`
- `saveFile`
These are registered by default and are intended for workflow authoring rather than provider-specific model calls.
Advanced workflow APIs
- Use `WorkflowBuilder` when you need direct node functions or full control over graph construction.
- Use `WorkflowExporter` to render workflows as Mermaid, DOT, D3, or JSON.
- Keep advanced builder/export usage in docs and internal tooling; use `Pipeline` for the common path.
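As an illustration of what a Mermaid export looks like, here is a self-contained sketch that emits a graph in the same shape as the diagrams above. It is not WorkflowExporter's actual implementation or API — the `toMermaid` helper and its parameters are illustrative only:

```typescript
// Illustrative only: render a node/edge list in Mermaid "graph TD" syntax,
// matching the diagram shape used elsewhere in this README.
type Edge = [string, string];

function toMermaid(nodes: string[], edges: Edge[]): string {
  const lines = ["graph TD"];
  nodes.forEach((n, i) => lines.push(`n${i}["${n}"]`));
  for (const [from, to] of edges) {
    lines.push(`n${nodes.indexOf(from)} --> n${nodes.indexOf(to)}`);
  }
  return lines.join("\n");
}

const mermaid = toMermaid(
  ["generateText", "tts", "transcribe"],
  [
    ["generateText", "tts"],
    ["tts", "transcribe"]
  ]
);
console.log(mermaid);
```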
Development
```shell
npm run build
npm run test
npm run lint
npm run perf:quick
```

For integration testing, PR title conventions, release workflow notes, and contribution guidance, see CONTRIBUTING.md.
License
MIT
