@runflow-ai/sdk
v1.0.62
Runflow SDK - Multi-agent AI framework
🚀 Runflow SDK
A powerful TypeScript-first framework for building intelligent AI agents and multi-agent systems
Runflow SDK is a comprehensive, type-safe framework for building AI agents, complex workflows, and multi-agent systems. Designed to be simple to use yet powerful enough for advanced use cases.
✨ Features
- 🤖 Intelligent Agents - Create agents with LLM, tools, memory, and RAG capabilities
- 🔧 Type-Safe Tools - Build custom tools with Zod schema validation
- 🔌 Built-in Connectors - HubSpot, Twilio, Email, Slack integrations out of the box
- 🌊 Workflows - Orchestrate complex multi-step processes with conditional logic
- 🧠 Memory Management - Persistent conversation history with automatic summarization
- 📚 Agentic RAG - LLM-driven semantic search in vector knowledge bases
- 👥 Multi-Agent Systems - Supervisor pattern with automatic agent routing
- 🔍 Full Observability - Automatic tracing with cost tracking and performance metrics
- 📡 Streaming Support - Real-time streaming responses with memory persistence
- 🎨 Multi-Modal - Support for text and images (vision models)
- 🔄 Multiple Providers - OpenAI, Anthropic (Claude), and AWS Bedrock
📑 Table of Contents
- Installation
- Quick Start
- Core Concepts
- Advanced Examples
- Real-World Use Cases
- Configuration
- API Reference
- TypeScript Types
- Providers
- Troubleshooting
- Contributing
- License
📦 Installation
npm install @runflow-ai/sdk
# or
yarn add @runflow-ai/sdk
# or
pnpm add @runflow-ai/sdk
Requirements
- Node.js: >= 22.0.0
- TypeScript: >= 5.0.0 (recommended)
📚 Built-in Libraries
The SDK includes the following libraries out-of-the-box. No need to install them separately - they're available in all your agents:
| Library | Version | Description | Import |
|---------|---------|-------------|--------|
| axios | ^1.7.0 | HTTP client for API requests | import axios from 'axios' |
| zod | ^3.22.0 | Schema validation and TypeScript inference | import { z } from 'zod' |
| date-fns | ^3.0.0 | Modern date utility library | import { format, addDays } from 'date-fns' |
| lodash | ^4.17.21 | JavaScript utility library | import _ from 'lodash' |
| cheerio | ^1.0.0 | Fast, flexible HTML/XML parsing | import * as cheerio from 'cheerio' |
| pino | ^8.19.0 | Fast JSON logger | import pino from 'pino' |
Quick Examples:
import { createTool } from '@runflow-ai/sdk';
import { z } from 'zod';
import axios from 'axios';
import { format, addDays } from 'date-fns';
import _ from 'lodash';
const myTool = createTool({
id: 'example-tool',
description: 'Shows all available libraries',
inputSchema: z.object({
url: z.string().url(),
data: z.array(z.any()),
}),
execute: async ({ context }) => {
// ✅ HTTP requests with axios
const response = await axios.get(context.url);
// ✅ Date manipulation
const tomorrow = addDays(new Date(), 1);
const formatted = format(tomorrow, 'yyyy-MM-dd');
// ✅ Array/Object utilities with lodash
const unique = _.uniq(context.data);
const grouped = _.groupBy(context.data, 'category');
return { response: response.data, date: formatted, unique, grouped };
},
});
💡 Tip: You can also use the SDK's HTTP helpers for convenience:
import { httpGet, httpPost } from '@runflow-ai/sdk/http';
const data = await httpGet('https://api.example.com/data');
🚀 Quick Start
Note on Parameters:
- message is required (the user's message)
- companyId is optional (for multi-tenant applications - your end-user's company ID)
- sessionId is optional but recommended (maintains conversation history)
- userId is optional (for user identification)
- All other fields are optional and can be set via Runflow.identify() or environment variables
Simple Agent
import { Agent, openai } from '@runflow-ai/sdk';
// Create a basic agent
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful customer support assistant.',
model: openai('gpt-4o'),
});
// Process a message
const result = await agent.process({
message: 'I need help with my order', // Required
sessionId: 'session_456', // Optional: For conversation history
userId: 'user_789', // Optional: User identifier
companyId: 'company_123', // Optional: For multi-tenant apps
});
console.log(result.message);
Agent with Memory
import { Agent, openai } from '@runflow-ai/sdk';
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful assistant with memory.',
model: openai('gpt-4o'),
memory: {
maxTurns: 10,
},
});
// First interaction
await agent.process({
message: 'My name is John',
sessionId: 'session_456', // Same session for conversation continuity
});
// Second interaction - agent remembers the name
const result = await agent.process({
message: 'What is my name?',
sessionId: 'session_456', // Same session
});
console.log(result.message); // "Your name is John"
Agent with Tools
import { Agent, openai, createTool } from '@runflow-ai/sdk';
import { z } from 'zod';
// Create a custom tool
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for a location',
inputSchema: z.object({
location: z.string(),
}),
execute: async ({ context }) => {
// Fetch weather data
return {
temperature: 22,
condition: 'Sunny',
location: context.location,
};
},
});
// Create agent with tool
const agent = new Agent({
name: 'Weather Agent',
instructions: 'You help users check the weather.',
model: openai('gpt-4o'),
tools: {
weather: weatherTool,
},
});
const result = await agent.process({
message: 'What is the weather in São Paulo?',
});
console.log(result.message);
Agent with RAG (Knowledge Base)
import { Agent, openai } from '@runflow-ai/sdk';
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful support agent.',
model: openai('gpt-4o'),
rag: {
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
searchPrompt: `Use searchKnowledge tool when user asks about:
- Technical problems
- Process questions
- Specific information`,
},
});
// Agent automatically has 'searchKnowledge' tool
// LLM decides when to use it (not always searching - more efficient!)
const result = await agent.process({
message: 'How do I reset my password?',
});
🎯 Core Concepts
Agents
Agents are the fundamental building blocks of the Runflow SDK. Each agent is configured with:
- Name: Agent identifier
- Instructions: Behavior instructions (system prompt)
- Model: LLM model to use (OpenAI, Anthropic, Bedrock)
- Tools: Available tools for the agent
- Memory: Memory configuration
- RAG: Knowledge base search configuration
Complete Agent Configuration
import { Agent, anthropic, openai } from '@runflow-ai/sdk';
const agent = new Agent({
name: 'Advanced Support Agent',
instructions: `You are an expert customer support agent.
- Always be polite and helpful
- Solve problems efficiently
- Use tools when needed`,
// Model
model: anthropic('claude-3-5-sonnet-20241022'),
// Model configuration
modelConfig: {
temperature: 0.7,
maxTokens: 4000,
topP: 0.9,
frequencyPenalty: 0,
presencePenalty: 0,
},
// Memory
memory: {
maxTurns: 20,
summarizeAfter: 50,
summarizePrompt: 'Create a concise summary highlighting key points and decisions',
summarizeModel: openai('gpt-4o-mini'), // Cheaper model for summaries
},
// RAG (Agentic - LLM decides when to search)
rag: {
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
searchPrompt: 'Use for technical questions',
},
// Tools
tools: {
createTicket: ticketTool,
searchOrders: orderTool,
},
// Tool iteration limit
maxToolIterations: 10,
// Streaming
streaming: {
enabled: true,
},
// Debug mode
debug: true,
});
Supported Models
import { openai, anthropic, bedrock } from '@runflow-ai/sdk';
// OpenAI
const gpt4 = openai('gpt-4o');
const gpt4mini = openai('gpt-4o-mini');
const gpt4turbo = openai('gpt-4-turbo');
const gpt35 = openai('gpt-3.5-turbo');
// Anthropic (Claude)
const claude35 = anthropic('claude-3-5-sonnet-20241022');
const claude3opus = anthropic('claude-3-opus-20240229');
const claude3sonnet = anthropic('claude-3-sonnet-20240229');
const claude3haiku = anthropic('claude-3-haiku-20240307');
// AWS Bedrock
const claudeBedrock = bedrock('anthropic.claude-3-sonnet-20240229-v1:0');
const titan = bedrock('amazon.titan-text-express-v1');
Agent Methods
// Process a message
process(input: AgentInput): Promise<AgentOutput>
// Stream a message
processStream(input: AgentInput): AsyncIterable<ChunkType>
// Simple generation (without full agent context)
generate(input: string | Message[]): Promise<{ text: string }>
// Streaming generation
generateStream(prompt: string): AsyncIterable<ChunkType>
// Generation with tools
generateWithTools(input): Promise<{ text: string }>
Multi-Agent Systems (Supervisor Pattern)
const supervisor = new Agent({
name: 'Supervisor',
instructions: 'Route tasks to appropriate agents.',
model: openai('gpt-4o'),
agents: {
support: {
name: 'Support Agent',
instructions: 'Handle support requests.',
model: openai('gpt-4o-mini'),
},
sales: {
name: 'Sales Agent',
instructions: 'Handle sales inquiries.',
model: openai('gpt-4o-mini'),
},
},
});
// Supervisor automatically routes to the appropriate agent
await supervisor.process({
message: 'I want to buy your product',
sessionId: 'session_123',
});
Debug Mode
const agent = new Agent({
name: 'Debug Agent',
instructions: 'Help users',
model: openai('gpt-4o'),
// Simple debug (all logs enabled):
// debug: true,
// Or detailed debug configuration
debug: {
enabled: true,
logMessages: true, // Log messages
logLLMCalls: true, // Log LLM API calls
logToolCalls: true, // Log tool executions
logRAG: true, // Log RAG searches
logMemory: true, // Log memory operations
truncateAt: 1000, // Truncate logs at N characters
},
});
Context Management
The Runflow Context is a global singleton that manages execution information and user identification. It allows you to identify once and all agents/workflows automatically use this context.
Basic Usage
import { Runflow, Agent, openai } from '@runflow-ai/sdk';
// Identify user by phone (WhatsApp)
Runflow.identify({
type: 'phone',
value: '+5511999999999',
});
// Agent automatically uses the context
const agent = new Agent({
name: 'WhatsApp Bot',
instructions: 'You are a helpful assistant.',
model: openai('gpt-4o'),
memory: {
maxTurns: 10,
},
});
// Memory is automatically bound to the phone number
await agent.process({
message: 'Hello!',
});
Smart Identification (Auto-Detection)
New in v2.1: The identify() function now auto-detects the entity type from the value's format:
import { identify } from '@runflow-ai/sdk/observability';
// Auto-detect email
identify('[email protected]');
// → type: 'email', value: '[email protected]'
// Auto-detect phone (international)
identify('+5511999999999');
// → type: 'phone', value: '+5511999999999'
// Auto-detect phone (local with formatting)
identify('(11) 99999-9999');
// → type: 'phone', value: '(11) 99999-9999'
// Auto-detect UUID
identify('550e8400-e29b-41d4-a716-446655440000');
// → type: 'uuid'
// Auto-detect URL
identify('https://example.com');
// → type: 'url'
Supported patterns:
- Email: Standard RFC 5322 format
- Phone: E.164 format (with/without +, with/without formatting)
- UUID: Standard UUID v1-v5
- URL: With or without protocol
- Fallback: Generic id type for custom identifiers
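For intuition, the detection order can be sketched roughly like this. This is a hypothetical illustration of the precedence described above, not the SDK's actual regexes:

```typescript
// Hypothetical sketch of identify()'s auto-detection order;
// the SDK's real patterns and precedence may differ.
type EntityType = 'email' | 'phone' | 'uuid' | 'url' | 'id';

function detectEntityType(value: string): EntityType {
  // Email: rough RFC 5322-style check
  if (/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value)) return 'email';
  // Phone: digits with optional +, spaces, parens, dashes
  if (/^\+?[\d\s()-]{8,}$/.test(value)) return 'phone';
  // UUID v1-v5
  if (/^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i.test(value)) {
    return 'uuid';
  }
  // URL: with or without protocol
  if (/^(https?:\/\/)?[\w-]+(\.[\w-]+)+(\/.*)?$/i.test(value)) return 'url';
  // Fallback: generic id for custom identifiers
  return 'id';
}
```

A value like 'doc_456' matches none of the patterns and would fall through to the generic id type.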
Explicit Identification
For custom entity types or when auto-detection is not desired:
import { identify } from '@runflow-ai/sdk/observability';
// HubSpot Contact
identify({
type: 'hubspot_contact',
value: 'contact_123',
userId: '[email protected]',
});
// Order/Ticket
identify({
type: 'order',
value: 'ORDER-456',
userId: 'customer_789',
});
// Custom threadId override
identify({
type: 'document',
value: 'doc_456',
threadId: 'custom_thread_123',
});
Backward Compatibility
The old API still works:
import { Runflow } from '@runflow-ai/sdk/core';
Runflow.identify({
type: 'email',
value: '[email protected]',
});
State Management
// Get complete state
const state = Runflow.getState();
// Get specific value
const threadId = Runflow.get('threadId');
const entityType = Runflow.get('entityType');
// Set custom state (advanced)
Runflow.setState({
entityType: 'custom',
entityValue: 'xyz',
threadId: 'my_custom_thread_123',
userId: 'user_123',
metadata: { custom: 'data' },
});
// Clear state (useful for testing)
Runflow.clearState();
Memory
The Memory system intelligently manages conversation history.
Memory Integrated in Agent
const agent = new Agent({
name: 'Memory Agent',
instructions: 'You remember everything.',
model: openai('gpt-4o'),
memory: {
maxTurns: 20, // Limit turns
maxTokens: 4000, // Limit tokens
summarizeAfter: 50, // Summarize after N turns
summarizePrompt: 'Create a concise summary with key facts and action items',
summarizeModel: openai('gpt-4o-mini'), // Cheaper model for summaries
},
});
Standalone Memory Manager
import { Memory } from '@runflow-ai/sdk';
// Using static methods (most common - 99% of cases)
await Memory.append({
role: 'user',
content: 'Hello!',
timestamp: new Date(),
});
await Memory.append({
role: 'assistant',
content: 'Hi! How can I help you?',
timestamp: new Date(),
});
// Get formatted history
const history = await Memory.getFormatted();
console.log(history);
// Get recent messages
const recent = await Memory.getRecent(5); // Last 5 turns
// Search in memory
const results = await Memory.search('order');
// Check if memory exists
const exists = await Memory.exists();
// Get full memory data
const data = await Memory.get();
// Clear memory
await Memory.clear();
Memory with Runflow Context
import { Runflow, Memory } from '@runflow-ai/sdk';
// Identify user
Runflow.identify({
type: 'phone',
value: '+5511999999999',
});
// Memory automatically uses the context
await Memory.append({
role: 'user',
content: 'My order number is 12345',
timestamp: new Date(),
});
// Memory is automatically bound to the phone number
Custom Memory Key
// Create memory with custom key
const memory = new Memory({
memoryKey: 'custom_key_123',
maxTurns: 10,
});
// Now use instance methods
await memory.append({ role: 'user', content: 'Hello', timestamp: new Date() });
const history = await memory.getFormatted();
Cross-Session Access
// Access memory from different sessions (admin, analytics, etc)
const dataUser1 = await Memory.get('phone:+5511999999999');
const dataUser2 = await Memory.get('email:[email protected]');
// Search across multiple sessions
const results = await Promise.all([
Memory.search('bug', 'user:123'),
Memory.search('bug', 'user:456'),
Memory.search('bug', 'user:789'),
]);
// Get recent from specific session
const recent = await Memory.getRecent(5, 'session:abc123');
// Clear specific session
await Memory.clear('phone:+5511999999999');
Custom Summarization
// Agent with custom summarization
const agent = new Agent({
name: 'Smart Agent',
model: openai('gpt-4o'),
memory: {
summarizeAfter: 30,
summarizePrompt: `Summarize in 3 bullet points:
- Main issue discussed
- Solution provided
- Next steps`,
summarizeModel: anthropic('claude-3-haiku-20240307'), // Fast & cheap
},
});
// Manual summarization with custom prompt
const summary = await Memory.summarize({
prompt: 'Extract only the key decisions from this conversation',
model: openai('gpt-4o-mini'),
});
Tools
Tools are functions that agents can call to perform specific actions. The SDK uses Zod for type-safe validation.
Create Basic Tool
import { createTool } from '@runflow-ai/sdk';
import { z } from 'zod';
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for a location',
inputSchema: z.object({
location: z.string().describe('City name'),
units: z.enum(['celsius', 'fahrenheit']).optional(),
}),
outputSchema: z.object({
temperature: z.number(),
condition: z.string(),
}),
execute: async ({ context, runflow, projectId }) => {
// Implement logic
const weather = await fetchWeather(context.location);
return {
temperature: weather.temp,
condition: weather.condition,
};
},
});
Tool with Runflow API
const searchDocsTool = createTool({
id: 'search-docs',
description: 'Search in documentation',
inputSchema: z.object({
query: z.string(),
}),
execute: async ({ context, runflow }) => {
// Use Runflow API for vector search
const results = await runflow.vectorSearch(context.query, {
vectorStore: 'docs',
k: 5,
});
return {
results: results.results.map(r => r.content),
};
},
});
Tool with Connector
const createTicketTool = createTool({
id: 'create-ticket',
description: 'Create a support ticket',
inputSchema: z.object({
subject: z.string(),
description: z.string(),
priority: z.enum(['low', 'medium', 'high']),
}),
execute: async ({ context, runflow }) => {
// Use connector
const ticket = await runflow.connector(
'hubspot',
'tickets',
'create',
{
subject: context.subject,
content: context.description,
priority: context.priority,
}
);
return { ticketId: ticket.id };
},
});
Tool Execution Context
The execute function receives:
- context: Validated input parameters (from inputSchema)
- runflow: Runflow API client for vector search, connectors, memory
- projectId: Current project ID
HTTP Utilities
The HTTP module provides pre-configured utilities for making HTTP requests in tools and agents. Built on top of axios, it comes with sensible defaults, automatic error handling, and full TypeScript support.
Features
- 🌐 Pre-configured axios instance with 30s timeout
- 🛡️ Automatic error handling with enhanced error messages
- 🎯 Helper functions for common HTTP methods (GET, POST, PUT, PATCH, DELETE)
- 📦 Zero configuration - works out of the box
- 🔒 Type-safe - Full TypeScript support with exported types
- ⚡ Available in all agents - No need to install additional dependencies
Quick Start
import { createTool } from '@runflow-ai/sdk';
import { http, httpGet, httpPost } from '@runflow-ai/sdk/http';
import { z } from 'zod';
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for a city',
inputSchema: z.object({
city: z.string(),
}),
execute: async ({ context }) => {
try {
// Option 1: Using httpGet helper (simplest)
const data = await httpGet('https://api.openweathermap.org/data/2.5/weather', {
params: {
q: context.city,
appid: process.env.OPENWEATHER_API_KEY,
units: 'metric',
},
});
return {
city: data.name,
temperature: data.main.temp,
condition: data.weather[0].description,
};
} catch (error: any) {
return { error: `Failed to fetch weather: ${error.message}` };
}
},
});
Helper Functions
The SDK provides convenient helper functions that automatically extract data from responses:
import { httpGet, httpPost, httpPut, httpPatch, httpDelete } from '@runflow-ai/sdk/http';
// GET request - returns only the data payload
const user = await httpGet('https://api.example.com/users/123');
console.log(user.name);
// POST request
const newUser = await httpPost('https://api.example.com/users', {
name: 'John Doe',
email: '[email protected]',
});
// PUT request
const updated = await httpPut('https://api.example.com/users/123', {
name: 'Jane Doe',
});
// PATCH request
const patched = await httpPatch('https://api.example.com/users/123', {
email: '[email protected]',
});
// DELETE request
await httpDelete('https://api.example.com/users/123');
Using the HTTP Instance
For more control, use the pre-configured http instance directly:
import { http } from '@runflow-ai/sdk/http';
// GET with full response
const response = await http.get('https://api.example.com/data');
console.log(response.status);
console.log(response.headers);
console.log(response.data);
// POST with custom headers
const postResponse = await http.post(
'https://api.example.com/resource',
{ data: 'value' },
{
headers: {
'Authorization': `Bearer ${process.env.API_TOKEN}`,
'Content-Type': 'application/json',
},
timeout: 5000,
}
);
// Multiple requests in parallel
const [users, posts, comments] = await Promise.all([
http.get('https://api.example.com/users'),
http.get('https://api.example.com/posts'),
http.get('https://api.example.com/comments'),
]);
Advanced: Direct Axios Usage
For complete control, use axios directly:
import { axios } from '@runflow-ai/sdk/http';
// Create a custom instance
const customAPI = axios.create({
baseURL: 'https://api.example.com',
headers: {
'Authorization': `Bearer ${process.env.API_TOKEN}`,
},
timeout: 10000,
});
// Add interceptors
customAPI.interceptors.request.use((config) => {
console.log(`Request: ${config.method?.toUpperCase()} ${config.url}`);
return config;
});
// Use the custom instance
const response = await customAPI.get('/users');
Error Handling
All HTTP utilities provide enhanced error messages:
import { httpGet } from '@runflow-ai/sdk/http';
try {
const data = await httpGet('https://api.example.com/data');
return { success: true, data };
} catch (error: any) {
// Error message includes HTTP status and details
console.error(error.message);
// "HTTP GET failed: HTTP 404: Not Found"
return { success: false, error: error.message };
}
TypeScript Types
All axios types are re-exported for convenience:
import type {
AxiosInstance,
AxiosRequestConfig,
AxiosResponse,
AxiosError,
} from '@runflow-ai/sdk/http';
import { http } from '@runflow-ai/sdk/http';
async function fetchData(
url: string,
config?: AxiosRequestConfig
): Promise<AxiosResponse> {
const response = await http.get(url, config);
return response;
}
Complete Example: Weather Tool
import { Agent, openai, createTool } from '@runflow-ai/sdk';
import { httpGet } from '@runflow-ai/sdk/http';
import { z } from 'zod';
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for any city',
inputSchema: z.object({
city: z.string().describe('City name (e.g., "São Paulo", "New York")'),
}),
execute: async ({ context }) => {
try {
const apiKey = process.env.OPENWEATHER_API_KEY;
const data = await httpGet('https://api.openweathermap.org/data/2.5/weather', {
params: {
q: context.city,
appid: apiKey,
units: 'metric',
lang: 'pt_br',
},
timeout: 5000,
});
return {
city: data.name,
temperature: data.main.temp,
feelsLike: data.main.feels_like,
condition: data.weather[0].description,
humidity: data.main.humidity,
windSpeed: data.wind.speed,
};
} catch (error: any) {
if (error.message.includes('404')) {
return { error: `City "${context.city}" not found` };
}
throw new Error(`Weather API error: ${error.message}`);
}
},
});
const agent = new Agent({
name: 'Weather Assistant',
instructions: 'You help users check the weather. Use the weather tool when users ask about weather conditions.',
model: openai('gpt-4o'),
tools: {
weather: weatherTool,
},
});
// Use the agent
const result = await agent.process({
message: 'What is the weather like in São Paulo?',
});
Connectors
Connectors are dynamic integrations with external services defined in the Runflow backend. They support two modes of usage:
- As Tools - For agent execution (LLM decides when to call)
- Direct Invocation - For programmatic execution (you control when to call)
Key Features
- 🔄 Dynamic Schema Loading - Schemas are fetched from the backend automatically
- 🎭 Transparent Mocking - Enable mock mode for development and testing
- 🛣️ Path Parameter Resolution - Automatic extraction and URL building
- ⚡ Lazy Initialization - Schemas loaded only when needed, cached globally
- 🔐 Flexible Authentication - Supports API Key, Bearer Token, Basic Auth, OAuth2
- 🔄 Multiple Credentials - Override credentials per execution (multi-tenant support)
- ✅ Type-Safe - Automatic JSON Schema → Zod → LLM Parameters conversion
Usage Mode 1: As Agent Tool
Use connectors as tools that the LLM can call automatically:
💡 Resource Identifier: Use the resource slug (e.g., get-customers, list-users), which is auto-generated from the resource name. Slugs are stable, URL-safe identifiers that won't break if you rename the resource display name.
import { createConnectorTool, Agent, openai } from '@runflow-ai/sdk';
// Basic connector tool (schema loaded from backend)
const getClienteTool = createConnectorTool({
connector: 'api-contabil', // Connector instance slug
resource: 'get-customers', // Resource slug
description: 'Get customer by ID from accounting API',
enableMock: true, // Optional: enables mock mode
});
// Use with Agent
const agent = new Agent({
name: 'Accounting Agent',
instructions: 'You help manage customers in the accounting system.',
model: openai('gpt-4o'),
tools: {
getCliente: getClienteTool,
listClientes: createConnectorTool({
connector: 'api-contabil',
resource: 'list-customers', // Resource slug
}),
},
});
// First execution automatically loads schemas from backend
const result = await agent.process({
message: 'Get customer with ID 123',
sessionId: 'session-123',
companyId: 'company-456',
});
Usage Mode 2: Direct Invocation
Invoke connectors directly without agent involvement:
💡 Identifiers:
- Connector: Use the instance slug (e.g., hubspot-prod) - recommended over display name
- Resource: Use the resource slug (e.g., create-contact) - auto-generated from resource name
import { connector } from '@runflow-ai/sdk/connectors';
import type { ConnectorExecutionOptions } from '@runflow-ai/sdk';
// Direct connector call (using slugs - recommended)
const result = await connector(
'hubspot-prod', // connector instance slug
'create-contact', // resource slug
{ // data
email: '[email protected]',
firstname: 'John',
lastname: 'Doe'
}
);
console.log('Contact created:', result);
With execution options:
const options: ConnectorExecutionOptions = {
credentialId: 'cred-prod-123', // Override credential
timeout: 10000, // 10 seconds timeout
retries: 3, // Retry 3 times on failure
useMock: false, // Use real API
};
const result = await connector(
'api-contabil',
'get-customer', // Resource slug
{ id: 123 },
options
);
Multi-tenant example:
// Different credentials per customer
async function createContactForCustomer(customerId: string, contactData: any) {
// Get customer's HubSpot credential
const credentialId = await getCustomerCredential(customerId, 'hubspot');
return await connector(
'hubspot',
'create-contact', // Resource slug
contactData,
{ credentialId }
);
}
// Usage
await createContactForCustomer('customer-1', { email: '[email protected]' });
await createContactForCustomer('customer-2', { email: '[email protected]' });
Custom headers (override everything):
// Custom headers have HIGHEST priority
const result = await connector(
'hubspot',
'create-contact', // Resource slug
{ email: '[email protected]' },
{
headers: {
'Authorization': 'Bearer temp-test-token',
'X-Request-ID': generateId(),
}
}
);
Authentication Priority:
1. Custom headers (highest - overrides everything)
2. credentialId override (runtime override)
3. Instance credential (default from connector instance)
4. No authentication
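The precedence can be pictured as a small resolver. This is purely illustrative; the field names are hypothetical and do not reflect the SDK's internals:

```typescript
// Illustrative precedence resolver for connector authentication.
// Field names here are assumptions for the sketch.
interface AuthContext {
  headers?: Record<string, string>; // custom headers passed per call
  credentialId?: string;            // runtime credential override
  instanceCredentialId?: string;    // default credential on the connector instance
}

type AuthSource = 'custom-headers' | 'credential-override' | 'instance-credential' | 'none';

function resolveAuthSource(ctx: AuthContext): AuthSource {
  // Checked in priority order: headers beat everything, then the
  // runtime override, then the instance default.
  if (ctx.headers && Object.keys(ctx.headers).length > 0) return 'custom-headers';
  if (ctx.credentialId) return 'credential-override';
  if (ctx.instanceCredentialId) return 'instance-credential';
  return 'none';
}
```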
Connector Tool Configuration
createConnectorTool({
connector: string, // Connector instance slug (e.g., 'hubspot-prod', 'api-contabil')
resource: string, // Resource slug (e.g., 'get-contacts', 'list-customers', 'create-ticket')
description?: string, // Optional: Custom description (defaults to auto-generated)
enableMock?: boolean, // Optional: Enable mock mode (adds useMock parameter)
})
Important Notes:
- connector: Use the instance slug (e.g., hubspot-prod) instead of the display name
- resource: Use the resource slug (e.g., get-users, create-order) - auto-generated from the resource name
- Resource lookup priority: slug first → name fallback (backward compatibility)
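The slug-first lookup can be pictured like this (a sketch under assumed resource shapes, not the SDK's code):

```typescript
// Hypothetical resource shape; the backend's actual fields may differ.
interface ConnectorResource {
  slug: string; // e.g., 'get-users'
  name: string; // e.g., 'Get Users'
}

function findResource(
  resources: ConnectorResource[],
  identifier: string
): ConnectorResource | undefined {
  // Slug match first; fall back to display name for backward compatibility
  return (
    resources.find((r) => r.slug === identifier) ??
    resources.find((r) => r.name === identifier)
  );
}
```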
Multiple Connector Tools
// Create multiple tools for the same connector
const tools = {
listClientes: createConnectorTool({
connector: 'api-contabil',
resource: 'list-customers', // Resource slug
enableMock: true,
}),
getCliente: createConnectorTool({
connector: 'api-contabil',
resource: 'get-customer', // Resource slug
enableMock: true,
}),
createCliente: createConnectorTool({
connector: 'api-contabil',
resource: 'create-customer', // Resource slug
enableMock: true,
}),
};
const agent = new Agent({
name: 'Customer Management Agent',
instructions: 'You help manage customers.',
model: openai('gpt-4o'),
tools,
});
// All schemas are loaded in parallel on first execution
const result = await agent.process({
message: 'Create a new customer named ACME Corp',
sessionId: 'session-123',
companyId: 'company-456',
});
Using loadConnector Helper
For connectors with many resources, use the loadConnector helper:
import { loadConnector } from '@runflow-ai/sdk';
const contabil = loadConnector('api-contabil');
const agent = new Agent({
name: 'Accounting Agent',
instructions: 'You manage accounting data.',
model: openai('gpt-4o'),
tools: {
// Using resource slugs
listClientes: contabil.tool('list-customers'),
getCliente: contabil.tool('get-customer'),
createCliente: contabil.tool('create-customer'),
updateCliente: contabil.tool('update-customer'),
},
});
Path Parameters
Connectors automatically resolve path parameters from the resource URL:
// Resource defined in backend with path: /clientes/{id}/pedidos/{pedidoId}
const getClientePedidoTool = createConnectorTool({
connector: 'api-contabil',
resource: 'get-customer-order', // Resource slug
description: 'Get specific order from a customer',
});
// Agent automatically extracts path params from context
const result = await agent.process({
message: 'Get order 456 from customer 123',
sessionId: 'session-123',
companyId: 'company-456',
});
// Backend automatically resolves: /clientes/123/pedidos/456
Mock Execution
Enable mock mode for development and testing:
const tool = createConnectorTool({
connector: 'api-contabil',
resource: 'list-customers', // Resource slug
enableMock: true, // Adds useMock parameter
});
// Use mock mode in development
const result = await agent.process({
message: 'List customers (use mock data)',
sessionId: 'dev-session',
companyId: 'dev-company',
// Tool will automatically include useMock=true if mock data is configured
});
Complete Example: Both Modes
import { Agent, openai, createConnectorTool, connector } from '@runflow-ai/sdk';
// 1. Create tool for agent use
const createContactTool = createConnectorTool({
connector: 'hubspot',
resource: 'create-contact', // Resource slug
description: 'Create contact in HubSpot',
});
const agent = new Agent({
name: 'CRM Agent',
instructions: 'You manage contacts in HubSpot.',
model: openai('gpt-4o'),
tools: { createContact: createContactTool },
});
// 2. Agent decides when to use tool (Mode 1)
await agent.process({
message: 'Create contact for Alice, [email protected]',
sessionId: 'session-1',
});
// 3. Direct invocation (Mode 2 - you control)
await connector(
'hubspot',
'update-contact', // Resource slug
{
contactId: '123',
status: 'customer'
},
{ credentialId: 'cred-prod' }
);
How It Works
- Tool Creation: createConnectorTool creates a tool with a temporary schema
- Lazy Loading: On first agent execution, schemas are fetched from the backend in parallel
- Schema Conversion: JSON Schema → Zod → LLM Parameters (automatic)
- Caching: Schemas are cached globally to avoid repeated API calls
- Execution: Tool/API executes with authentication, path resolution, and error handling
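Steps 2 and 4 (parallel loading plus a global cache) can be approximated with a promise cache. This is an illustrative sketch, not the SDK's internal implementation:

```typescript
// Cache the in-flight promise, not just the resolved result, so parallel
// callers during the first execution share a single backend request.
type Schema = Record<string, unknown>;

const schemaCache = new Map<string, Promise<Schema>>();

function loadSchemaOnce(
  key: string,
  fetchSchema: () => Promise<Schema>
): Promise<Schema> {
  let pending = schemaCache.get(key);
  if (!pending) {
    pending = fetchSchema();
    schemaCache.set(key, pending);
  }
  return pending;
}
```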
Automatic Setup
The Agent automatically initializes connector tools on first execution:
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
tools: {
// Connector tools are automatically identified and initialized
tool1: createConnectorTool({ ... }),
tool2: createTool({ ... }), // Regular tool
tool3: createConnectorTool({ ... }),
},
});
// First process() call:
// 1. Identifies connector tools (marked with _isConnectorTool)
// 2. Loads schemas in parallel from backend
// 3. Updates tool parameters
// 4. Proceeds with normal execution
Workflows
Workflows orchestrate multiple agents, functions, and connectors in sequence.
Basic Workflow
import { createWorkflow, Agent, openai } from '@runflow-ai/sdk';
import { z } from 'zod';
// Define input/output schemas
const inputSchema = z.object({
customerEmail: z.string().email(),
issueDescription: z.string(),
});
const outputSchema = z.object({
ticketId: z.string(),
response: z.string(),
emailSent: z.boolean(),
});
// Create agents
const analyzerAgent = new Agent({
name: 'Issue Analyzer',
instructions: 'Analyze customer issues and categorize them.',
model: openai('gpt-4o'),
});
const responderAgent = new Agent({
name: 'Responder',
instructions: 'Write helpful responses to customers.',
model: openai('gpt-4o'),
});
// Create workflow
const workflow = createWorkflow({
id: 'support-workflow',
name: 'Support Ticket Workflow',
inputSchema,
outputSchema,
})
.agent('analyze', analyzerAgent, {
promptTemplate: 'Analyze this issue: {{input.issueDescription}}',
})
.connector('create-ticket', 'hubspot', 'tickets', 'create', {
subject: '{{analyze.text}}',
content: '{{input.issueDescription}}',
priority: 'medium',
})
.agent('respond', responderAgent, {
promptTemplate: 'Write a response for: {{input.issueDescription}}',
})
.connector('send-email', 'email', 'messages', 'send', {
to: '{{input.customerEmail}}',
subject: 'Your Support Request',
body: '{{respond.text}}',
})
.build();
// Execute workflow
const result = await workflow.execute({
customerEmail: '[email protected]',
issueDescription: 'My order has not arrived',
});
console.log(result);
Workflow with Parallel Steps
const workflow = createWorkflow({
id: 'parallel-workflow',
inputSchema: z.object({ query: z.string() }),
outputSchema: z.any(),
})
.parallel([
createAgentStep('agent1', agent1),
createAgentStep('agent2', agent2),
createAgentStep('agent3', agent3),
], {
waitForAll: true, // Wait for all to complete
})
.function('merge', async (input, context) => {
// Merge results
return {
combined: Object.values(context.stepResults.get('parallel')),
};
})
.build();
Workflow with Conditional Steps
const workflow = createWorkflow({
id: 'conditional-workflow',
inputSchema: z.object({ priority: z.string() }),
outputSchema: z.any(),
})
.condition(
'check-priority',
(context) => context.input.priority === 'high',
// True path
[
createAgentStep('urgent-agent', urgentAgent),
createConnectorStep('notify-slack', 'slack', 'messages', 'send', {
channel: '#urgent',
message: 'High priority issue!',
}),
],
// False path
[
createAgentStep('normal-agent', normalAgent),
]
)
.build();
Workflow with Retry
const workflow = createWorkflow({
id: 'retry-workflow',
inputSchema: z.object({ data: z.any() }),
outputSchema: z.any(),
})
.then({
id: 'api-call',
type: 'connector',
config: {
connector: 'external-api',
resource: 'data',
action: 'fetch',
parameters: {},
},
retryConfig: {
maxAttempts: 3,
backoff: 'exponential', // 'fixed', 'exponential', 'linear'
delay: 1000,
retryableErrors: ['timeout', 'network'],
},
})
.build();
Workflow Step Types
import {
createAgentStep,
createFunctionStep,
createConnectorStep
} from '@runflow-ai/sdk';
// Agent step
const agentStep = createAgentStep('step-id', agent, {
promptTemplate: 'Process: {{input.data}}',
});
// Function step
const functionStep = createFunctionStep('step-id', async (input, context) => {
// Custom logic
return { result: 'processed' };
});
// Connector step
const connectorStep = createConnectorStep(
'step-id',
'hubspot',
'contacts',
'create',
{ email: '{{input.email}}' }
);
Prompts
The Prompts module manages prompt templates with support for global and tenant-specific prompts.
Standalone Prompts Manager
import { Prompts } from '@runflow-ai/sdk';
const prompts = new Prompts();
// Get prompt (global or tenant-specific)
const prompt = await prompts.get('sistema');
console.log(prompt.content);
console.log('Is global?', prompt.isGlobal);
// List all available prompts
const allPrompts = await prompts.list({ limit: 50 });
allPrompts.forEach(p => {
console.log(`${p.name} ${p.isGlobal ? '🌍' : '🏢'}`);
});
// Create tenant-specific prompt
const custom = await prompts.create(
'my-prompt',
'You are a specialist in {{topic}}.',
{ variables: ['topic'] }
);
// Update tenant prompt
await prompts.update('my-prompt', {
content: 'You are a SENIOR specialist in {{topic}}.'
});
// Delete tenant prompt
await prompts.delete('my-prompt');
// Render template with variables
const rendered = prompts.render(
'Hello {{name}}, welcome to {{company}}!',
{ name: 'John', company: 'Runflow' }
);
// Get and render in one call
const text = await prompts.getAndRender('my-prompt', { topic: 'AI' });
Security Rules:
- ✅ Can read global prompts (provided by Runflow)
- ✅ Can create/update/delete own tenant prompts
- ❌ Cannot modify global prompts
- ❌ Cannot access other tenants' prompts
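The {{variable}} substitution used by prompts.render can be sketched in a few lines. This is an illustrative equivalent, not the SDK's actual implementation:

```typescript
// Illustrative only: a minimal {{variable}} renderer matching the
// prompts.render behavior shown above; the SDK's internals may differ.
function renderTemplate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match // leave unknown vars intact
  );
}
```

Leaving unknown placeholders untouched (rather than replacing them with an empty string) makes missing variables easy to spot in rendered output.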
Knowledge (RAG)
The Knowledge module (also called RAG) manages semantic search in vector knowledge bases.
Standalone Knowledge Manager
import { Knowledge } from '@runflow-ai/sdk';
const knowledge = new Knowledge({
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
});
// Basic search
const results = await knowledge.search('How to reset password?');
results.forEach(result => {
console.log(result.content);
console.log('Score:', result.score);
});
// Get formatted context for LLM
const context = await knowledge.getContext('password reset', { k: 3 });
console.log(context);
Hybrid Search (Semantic + Keyword)
const results = await knowledge.hybridSearch({
query: 'password reset',
keywords: ['password', 'reset', 'forgot'],
k: 5,
});
Multi-Query Search
const results = await knowledge.multiQuery(
'How to reset password?',
{
variants: [
'password recovery',
'forgot password',
'reset credentials',
],
k: 5,
}
);
Agentic RAG in Agent
When RAG is configured in an agent, the SDK automatically creates a searchKnowledge tool that the LLM can decide when to use. This is more efficient than always searching, as the LLM only searches when necessary.
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful support agent.',
model: openai('gpt-4o'),
rag: {
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
// Custom search prompt - guides when to search
searchPrompt: `Use searchKnowledge tool when user asks about:
- Technical problems
- Process questions
- Specific information
Don't use for greetings or casual chat.`,
toolDescription: 'Search in support documentation for solutions',
},
});
// Agent automatically has 'searchKnowledge' tool
// LLM decides when to search (not always - more efficient!)
const result = await agent.process({
message: 'How do I reset my password?',
});
Multiple Vector Stores
const agent = new Agent({
name: 'Advanced Support Agent',
instructions: 'Help users with multiple knowledge bases.',
model: openai('gpt-4o'),
rag: {
vectorStores: [
{
id: 'support-docs',
name: 'Support Documentation',
description: 'General support articles',
threshold: 0.7,
k: 5,
searchPrompt: 'Use search_support-docs when user has technical problems or questions',
},
{
id: 'api-docs',
name: 'API Documentation',
description: 'Technical API reference',
threshold: 0.8,
k: 3,
searchPrompt: 'Use search_api-docs when user asks about API endpoints or integration',
},
],
},
});
Managing Documents in Knowledge Base
Add text documents to your knowledge base:
import { Knowledge } from '@runflow-ai/sdk';
const knowledge = new Knowledge({
vectorStore: 'support-docs',
});
// Add a text document
const result = await knowledge.addDocument(
'How to reset password: Go to settings > security > reset password',
{
title: 'Password Reset Guide',
category: 'authentication',
version: '1.0'
}
);
console.log('Document added:', result.documentId);
Upload files (PDF, DOCX, TXT, etc.):
import * as fs from 'fs';
// Node.js - Upload from file system
const fileBuffer = fs.readFileSync('./manual.pdf');
const result = await knowledge.addFile(
fileBuffer,
'manual.pdf',
{
title: 'User Manual',
mimeType: 'application/pdf',
metadata: {
department: 'Support',
version: '2.0'
}
}
);
console.log('File uploaded:', result.documentId);
// Browser - Upload from file input
const fileInput = document.querySelector('input[type="file"]');
const file = fileInput.files[0];
const uploadResult = await knowledge.addFile(
file,
file.name,
{
title: 'User Upload',
metadata: { source: 'web-portal' }
}
);
List and delete documents:
// List all documents
const documents = await knowledge.listDocuments({ limit: 50 });
documents.forEach(doc => {
console.log(`ID: ${doc.id}`);
console.log(`Content: ${doc.content.substring(0, 100)}...`);
console.log(`Created: ${doc.createdAt}`);
});
// Delete a document
await knowledge.deleteDocument('document-id-here');
RAG Interceptor & Rerank
Advanced features for customizing RAG results before they reach the LLM.
Interceptor - Filter & Transform Results:
const agent = new Agent({
name: 'Smart Agent',
model: openai('gpt-4o'),
rag: {
vectorStore: 'docs',
k: 10,
// Interceptor: Customize results before LLM
onResultsFound: async (results, query) => {
// 1. Filter sensitive data
const filtered = results.filter(r => !r.metadata?.internal);
// 2. Enrich with external data
const enriched = await Promise.all(
filtered.map(async r => ({
...r,
content: `${r.content}\n\nSource: ${r.metadata?.url}`,
}))
);
return enriched;
},
},
});
Rerank - Improve Relevance:
// In Agent
const agent = new Agent({
model: openai('gpt-4o'),
rag: {
vectorStore: 'docs',
k: 10,
// Rerank strategy
rerank: {
enabled: true,
strategy: 'score-boost',
boostKeywords: ['official', 'tutorial', 'guide'],
},
},
});
// In Knowledge standalone
const knowledge = new Knowledge({ vectorStore: 'docs' });
const results = await knowledge.search('query');
// Rerank with custom logic
const reranked = await knowledge.rerank(results, 'query', {
enabled: true,
strategy: 'custom',
customScore: (result, query) => {
let score = result.score;
// Boost recent docs
const daysSince = daysSinceUpdate(result.metadata?.updatedAt);
if (daysSince < 30) score *= 1.5;
// Boost exact matches
if (result.content.includes(query)) score *= 1.3;
return score;
},
});
Rerank Strategies:
- reciprocal-rank-fusion - Standard RRF algorithm
- score-boost - Boost results containing keywords
- metadata-weight - Weight by metadata field value
- custom - Custom scoring function
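For reference, standard Reciprocal Rank Fusion scores each result by summing 1 / (k + rank) across the ranked lists it appears in. The sketch below shows the general algorithm only, not the SDK's internal implementation:

```typescript
// Illustrative sketch of standard Reciprocal Rank Fusion (RRF):
// each id scores sum(1 / (k + rank)) over every ranked list it
// appears in, then results are sorted by fused score.
function reciprocalRankFusion(
  rankedLists: string[][], // e.g. ids from semantic and keyword search
  k = 60                   // common RRF damping constant
): string[] {
  const scores = new Map<string, number>();
  for (const list of rankedLists) {
    list.forEach((id, rank) => {
      // rank is 0-based here, so add 1 to match the usual 1-based formula
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  // Highest fused score first
  return [...scores.keys()].sort((a, b) => scores.get(b)! - scores.get(a)!);
}
```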
Combined - Rerank + Interceptor:
rag: {
vectorStore: 'docs',
// 1. Rerank first (improve relevance)
rerank: {
enabled: true,
strategy: 'score-boost',
boostKeywords: ['tutorial', 'guide'],
},
// 2. Interceptor after (enrich)
onResultsFound: async (results) => {
return results.map(r => ({
...r,
content: `${r.content}\n\n📚 ${r.metadata?.category}`,
}));
},
}
LLM Standalone
The LLM module allows you to use language models directly without creating agents.
Basic Usage
import { LLM } from '@runflow-ai/sdk';
// Create LLM
const llm = LLM.openai('gpt-4o', {
temperature: 0.7,
maxTokens: 2000,
});
// Generate response
const response = await llm.generate('What is the capital of Brazil?');
console.log(response.text);
console.log('Tokens:', response.usage);
With Messages
const response = await llm.generate([
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Tell me a joke.' },
]);
With System Prompt
const response = await llm.generate(
'What is 2+2?',
{
system: 'You are a math teacher.',
temperature: 0.1,
}
);
Streaming
const stream = llm.generateStream('Tell me a story');
for await (const chunk of stream) {
if (!chunk.done) {
process.stdout.write(chunk.text);
}
}
Factory Methods
import { LLM } from '@runflow-ai/sdk';
// OpenAI
const gpt4 = LLM.openai('gpt-4o', { temperature: 0.7 });
// Anthropic (Claude)
const claude = LLM.anthropic('claude-3-5-sonnet-20241022', {
temperature: 0.9,
maxTokens: 4000,
});
// Bedrock
const bedrock = LLM.bedrock('anthropic.claude-3-sonnet-20240229-v1:0', {
temperature: 0.8,
});
Observability
The Observability system automatically collects execution traces for analysis and debugging.
Automatic Tracing (Agent)
// Traces are collected automatically
const agent = new Agent({
name: 'Support Agent',
instructions: 'Help customers.',
model: openai('gpt-4o'),
});
// Each execution automatically generates traces
await agent.process({
message: 'Help me',
companyId: 'company_123', // Optional
sessionId: 'session_456', // Optional
executionId: 'exec_123', // Optional
threadId: 'thread_789', // Optional
});
Manual Tracing
import { createTraceCollector } from '@runflow-ai/sdk';
// Create collector
const collector = createTraceCollector(apiClient, 'project_123', {
batchSize: 10,
flushInterval: 5000,
maxRetries: 3,
});
// Start span
const span = collector.startSpan('custom_operation', {
agentName: 'Custom Agent',
model: 'gpt-4o',
});
span.setInput({ data: 'input' });
try {
// Execute operation
const result = await doSomething();
span.setOutput(result);
span.setCosts({
tokens: { input: 100, output: 50, total: 150 },
costs: {
inputCost: 0.003,
outputCost: 0.002,
totalCost: 0.005,
currency: 'USD'
},
});
} catch (error) {
span.setError(error);
}
span.finish();
// Force flush
await collector.flush();
Decorator for Auto-Tracing
import { traced } from '@runflow-ai/sdk';
class MyService {
private traceCollector: TraceCollector;
@traced('my_operation', { agentName: 'My Agent' })
async myMethod(input: any) {
// Automatically traced
return processData(input);
}
}
Local Traces (Development)
# .env
RUNFLOW_LOCAL_TRACES=true
Traces will be saved to .runflow/traces.json in a structured format organized by executionId for analysis.
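Once saved locally, traces can be inspected with ordinary tooling. The helper below is a sketch; the exact shape of .runflow/traces.json is an assumption here (a flat array of trace objects carrying an executionId field), so adjust field names to what your file actually contains.

```typescript
// Illustrative: group locally saved traces by executionId for inspection.
// The trace shape (an `executionId` field per entry) is an assumption.
function groupByExecution<T extends { executionId?: string }>(
  traces: T[]
): Map<string, T[]> {
  const groups = new Map<string, T[]>();
  for (const trace of traces) {
    // Traces without an executionId land in a shared "unknown" bucket
    const key = trace.executionId ?? 'unknown';
    const bucket = groups.get(key) ?? [];
    bucket.push(trace);
    groups.set(key, bucket);
  }
  return groups;
}
```

For example, you might feed it the parsed file: groupByExecution(JSON.parse(fs.readFileSync('.runflow/traces.json', 'utf8'))).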
Trace Types
The SDK supports various trace types:
- agent_execution - Full agent processing
- workflow_execution - Workflow processing
- workflow_step - Individual workflow step
- tool_call - Tool execution
- llm_call - LLM API call
- vector_search - Vector search operation
- memory_operation - Memory access
- connector_call - Connector execution
- streaming_session - Streaming response
- execution_summary - Custom execution (new)
- custom_event - Custom log event (new)
- error_event - Error logging (new)
Custom Executions (Non-Agent Flows)
For scenarios without agent.process() (document analysis, batch processing, etc.):
import { identify, startExecution, log } from '@runflow-ai/sdk/observability';
export async function analyzeDocument(docId: string) {
// 1. Identify context
identify({ type: 'document', value: docId });
// 2. Start custom execution
const exec = startExecution({
name: 'document-analysis',
input: { documentId: docId }
});
try {
// 3. Process with LLM calls
const llm = LLM.openai('gpt-4o');
const text = await llm.chat("Extract text from document...");
exec.log('text_extracted', { length: text.length });
const category = await llm.chat(`Classify this: ${text}`);
exec.log('document_classified', { category });
const summary = await llm.chat(`Summarize: ${text}`);
// 4. Finish with custom output
await exec.end({
output: {
summary,
category,
documentId: docId
}
});
return { summary, category };
} catch (error) {
exec.setError(error);
await exec.end();
throw error;
}
}
In the Portal:
Thread: document_xxx_doc_456
└─ Execution: "document-analysis"
├─ llm_call: Extract text
├─ custom_event: text_extracted
├─ llm_call: Classify
├─ custom_event: document_classified
└─ llm_call: Summarize
Custom Logging
Log custom events within any execution:
import { log, logEvent, logError } from '@runflow-ai/sdk/observability';
// Simple log
log('cache_hit', { key: 'user_123' });
// Structured log
logEvent('validation', {
input: { orderId: '123', amount: 100 },
output: { valid: true, score: 0.95 },
metadata: { rule: 'fraud_detection' }
});
// Error log
try {
await riskyOperation();
} catch (error) {
logError('operation_failed', error);
throw error;
}
Logs are automatically associated with the current execution and flushed with other traces.
Exception Safety
The SDK ensures traces are not lost even on crashes:
- Exit handlers: Auto-flush on process.exit(), SIGTERM, SIGINT
- Exception handlers: Flush on uncaughtException and unhandledRejection
- Auto-cleanup: Custom executions auto-flush after 60s if not manually ended
- Worker safety: Execution engine waits 100ms for pending flushes
Coverage: ~95% trace recovery even on crashes.
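The shutdown-flush guarantees above rely on a standard Node.js pattern. This is a generic sketch of that pattern, not the SDK's actual code; the flush callback stands in for collector.flush() from the manual-tracing example:

```typescript
// Generic sketch of the shutdown-flush pattern (not the SDK's code):
// register one-shot handlers that drain pending traces before exit.
function registerFlushHandlers(flush: () => Promise<void>): void {
  const safeFlush = () => {
    // Swallow flush errors during shutdown; throwing here would lose traces
    flush().catch(() => {});
  };
  process.once('SIGTERM', safeFlush);
  process.once('SIGINT', safeFlush);
  process.once('beforeExit', safeFlush);
}
```

Using process.once (rather than process.on) avoids double-flushing when several shutdown events fire in sequence.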
Verbose Tracing Mode
New in v2.1: Control how much data is saved in traces - from minimal metadata to complete prompts and responses.
Modes:
- minimal: Only essential metadata (production, minimal storage)
- standard: Balanced metadata + sizes (default)
- full: Complete data including prompts and responses (debugging)
Simple API (string preset):
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
observability: 'full' // 'minimal', 'standard', or 'full'
});
Granular Control (object config):
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
observability: {
mode: 'standard', // Base mode
verboseLLM: true, // Override: save complete prompts
verboseMemory: false, // Override: keep memory minimal
verboseTools: true, // Override: save tool data (default)
maxInputLength: 5000, // Truncate large inputs
maxOutputLength: 5000, // Truncate large outputs
}
});
Environment Variable:
# .env
RUNFLOW_VERBOSE_TRACING=true # Auto-sets mode to 'full'
// Auto-detects from environment
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
// observability: 'full' auto-applied if env var set
});
What Each Mode Saves:
| Trace Type | Minimal | Standard | Full |
|------------|---------|----------|------|
| LLM Call | messagesCount, config | messagesCount, config | Complete messages + responses |
| Memory Load | messagesCount | messagesCount | First 10 messages (truncated) |
| Memory Save | messagesSaved | messagesSaved | User + assistant messages |
| Tool Call | Always full (with truncation) | Always full | Always full |
| Agent Execution | Always full | Always full | Always full |
Storage Impact:
- Minimal: ~100 bytes/trace
- Standard: ~500 bytes/trace
- Full: ~5KB/trace (with truncation)
Recommended Usage:
// Production: minimal storage
const prodAgent = new Agent({
observability: 'minimal'
});
// Staging: balanced
const stagingAgent = new Agent({
observability: 'standard'
});
// Development: complete debugging
const devAgent = new Agent({
observability: 'full'
});
// Or detect automatically
const agent = new Agent({
observability: process.env.NODE_ENV === 'production' ? 'minimal' : 'full'
});
Trace Interceptor (onTrace)
Intercept and modify traces before they are sent, useful for:
- Sending to external tools (DataDog, Sentry, CloudWatch)
- Adding custom metadata
- Filtering specific traces
- Audit logging
Example: Send to DataDog
const agent = new Agent({
observability: {
mode: 'full',
onTrace: (trace) => {
// Send to DataDog
datadogTracer.trace({
name: trace.operation,
resource: trace.type,
duration: trace.duration,
meta: trace.metadata
});
// Return trace unchanged to continue normal flow
return trace;
}
}
});
Example: Add Custom Metadata
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Enrich with custom data
trace.metadata.environment = 'production';
trace.metadata.version = '1.0.0';
trace.metadata.region = 'us-east-1';
return trace;
}
}
});
Example: Filter LLM Calls
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Only send LLM calls to external tracker
if (trace.type === 'llm_call') {
externalTracker.send(trace);
}
return trace;
}
}
});
Example: Cancel Sensitive Traces
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Cancel traces with sensitive data
if (trace.metadata?.containsSensitiveData) {
return null; // ← Cancel trace (won't be sent)
}
return trace;
}
}
});
Example: Error Tracking with Sentry
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Send errors to Sentry
if (trace.type === 'error_event' || trace.status === 'error') {
Sentry.captureException(new Error(trace.error || 'Unknown error'), {
extra: {
traceId: trace.traceId,
executionId: trace.executionId,
operation: trace.operation,
metadata: trace.metadata
}
});
}
return trace;
}
}
});
Callback Return Values:
- TraceData: Modified trace (will be sent with changes)
- null: Cancel trace (won't be sent or saved)
- void/undefined: Continue with original trace
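If you need several of the behaviors above at once (enrich, filter, cancel), the callbacks compose naturally under this return-value contract. The helper below is hypothetical (not part of the SDK), shown only to illustrate how the contract chains:

```typescript
// Hypothetical helper (not part of the SDK): chain several onTrace
// callbacks, honoring the return-value contract described above.
type TraceCallback = (trace: any) => any;

function composeOnTrace(...callbacks: TraceCallback[]): TraceCallback {
  return (trace) => {
    let current = trace;
    for (const cb of callbacks) {
      const result = cb(current);
      if (result === null) return null;           // cancel: stop the chain
      if (result !== undefined) current = result; // modified trace
      // undefined: keep the current trace and continue
    }
    return current;
  };
}
```

You could then pass composeOnTrace(addMetadata, redactSensitive, sendToDataDog) as a single onTrace callback.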
🏗️ Advanced Examples
Multi-Modal (Images)
const agent = new Agent({
name: 'Vision Agent',
instructions: 'You can analyze images.',
model: openai('gpt-4o'),
});
await agent.process({
message: 'What is in this image?',
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'What is in this image?' },
{
type: 'image_url',
image_url: { url: 'https://example.com/image.jpg' },
},
],
},
],
});
Streaming with Memory
const agent = new Agent({
name: 'Streaming Agent',
instructions: 'You are helpful.',
model: openai('gpt-4o'),
memory: {
maxTurns: 10,
},
streaming: {
enabled: true,
},
});
const stream = await agent.processStream({
message: 'Tell me a story',
sessionId: 'session_123',
});
for await (const chunk of stream) {
if (!chunk.done) {
process.stdout.write(chunk.text);
}
}
Custom Memory Provider
import { Memory, MemoryProvider } from '@runflow-ai/sdk';
class RedisMemoryProvider implements MemoryProvider {
async get(key: string): Promise<MemoryData> {
const data = await redis.get(key);
return JSON.parse(data);
}
async set(key: string, data: MemoryData): Promise<void> {
await redis.set(key, JSON.stringify(data));
}
async append(key: string, message: MemoryMessage): Promise<void> {
const data = await this.get(key);
data.messages.push(message);
await this.set(key, data);
}
async clear(key: string): Promise<void> {
await redis.del(key);
}
}
// Use custom provider
const memory = new Memory({
provider: new RedisMemoryProvider(),
maxTurns: 10,
});
Complex E-Commerce Workflow
const workflow = createWorkflow({
id: 'e-commerce-workflow',
inputSchema: z.object({
customerId: z.string(),
query: z.string(),
}),
outputSchema: z.any(),
})
// 1. Analyze intent
.agent('analyzer', analyzerAgent, {
promptTemplate: 'Analyze customer query: {{input.query}}',
})
// 2. Load customer data in parallel
.parallel([
createFunctionStep('load-profile', async (input, ctx) => {
return await loadCustomerProfile(input.customerId);
}),
createFunctionStep('load-orders', async (input, ctx) => {
return await loadCustomerOrders(input.customerId);
}),
createFunctionStep('search-products', async (input, ctx) => {
return await searchProducts(ctx.stepResults.get('analyzer').text);
}),
])
// 3. Conditional: Sales vs Support
.condition(
'route',
(ctx) => ctx.stepResults.get('analyzer').text.includes('buy'),
// Sales path
[
createAgentStep('sales', salesAgent),
createConnectorStep('update-crm', 'hubspot', 'contacts', 'update', {
contactId: '{{input.customerId}}',
lastContact: new Date().toISOString(),
}),
],
// Support path
[
createAgentStep('support', supportAgent),
createConnectorStep('create-ticket', 'hubspot', 'tickets', 'create', {
subject: '{{analyzer.text}}',
}),
]
)
// 4. Send response
.agent('responder', responderAgent, {
promptTemplate: 'Create final response based on context',
})
.build();
const result = await workflow.execute({
customerId: 'customer_123',
query: 'I want to buy a laptop',
});
💡 Real-World Use Cases
Complete, production-ready examples showcasing the platform's potential.
1. Customer Support Agent with RAG
A sophisticated support agent that searches documentation, remembers context, and creates tickets in HubSpot.
import { Agent, openai, createTool } from '@runflow-ai/sdk';
import { hubspotConnector } from '@runflow-ai/sdk/connectors';
import { z } from 'zod';
// Create a support agent with memory and RAG
const supportAgent = new Agent({
name: 'Customer Support AI',
instructions: `You are a helpful customer support agent.
- Search the knowledge base for relevant information
- Remember previous conversations
- Create tickets for complex issues
- Always be professional and empathetic`,
model: openai('gpt-4o'),
// Remember conversation history
memory: {
maxTurns: 20,
summarizeAfter: 10,
summarizePrompt: 'Summarize key customer issues and resolutions',
},
// Search in documentation
knowledge: {
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
},
// Available tools
tools: {
createTicket: hubspotConnector.tickets.create,
searchOrders: createTool({
id: 'search-orders',
description: 'Search customer orders',
inputSchema: z.object({
customerId: z.string(),
}),
execute: async ({ context }) => {
const orders = await fetchOrders(context.input.customerId);
return { orders };
},
}),
},
});
// Handle customer message
const result = await supportAgent.process({
message: 'My order #12345 has not arrived yet',
sessionId: 'session_xyz', // For conversation history
userId: 'user_123', // For user identification
});
console.log(result.message);
// Agent searches docs, retrieves order info, and provides solution
2. Sales Automation with Multi-Step Workflow
Automate lead qualification, deal creation, and team notifications using workflows.
import { createWorkflow, Agent, openai } from '@runflow-ai/sdk';
import { hubspotConnector, slackConnector } from '@runflow-ai/sdk/connectors';
import { z } from 'zod';
// Qualification agent
const qualifierAgent = new Agent({
name: 'Lead Qualifier',
instructions: 'Analyze lead data and assign a score (1-10) with reasoning.',
model: openai('gpt-4o-mini'),
});
// Sales copy agent
const copywriterAgent = new Agent({
name: 'Sales Copywriter',
instructions: 'Write a personalized email for the lead based on their profile.',
model: openai('gpt-4o'),
});
// Complete workflow
const salesWorkflow = createWorkflow({