# ai-agent-runtime (v0.5.18)
Runtime adapter that bridges agent frameworks with production infrastructure. This package is the glue between your chosen agent framework (OpenAI SDK, LangChain, etc.) and deployment targets.
## What This Does
This runtime acts as an adapter layer that:
- Wraps different agent frameworks in a unified interface
- Handles framework-specific quirks and requirements
- Provides consistent deployment patterns across frameworks
- Enables hot-swapping between frameworks without changing deployment code
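The hot-swapping point can be sketched in plain TypeScript. Note that `FrameworkAdapter`, `OpenAIAdapter`, and `LangChainAdapter` are illustrative names invented for this sketch, not the package's actual internals:

```typescript
// Illustrative sketch of the adapter idea -- these types are
// hypothetical and not part of the @ai-agent-platform/runtime API.
interface FrameworkAdapter {
  chat(message: string): Promise<string>;
}

// Each framework gets its own adapter exposing the same surface.
class OpenAIAdapter implements FrameworkAdapter {
  async chat(message: string): Promise<string> {
    // ...would call the OpenAI SDK here
    return `[openai] ${message}`;
  }
}

class LangChainAdapter implements FrameworkAdapter {
  async chat(message: string): Promise<string> {
    // ...would call LangChain here
    return `[langchain] ${message}`;
  }
}

// Deployment code depends only on the interface, so the framework
// can be swapped without touching this function.
async function deploymentHandler(agent: FrameworkAdapter, msg: string): Promise<string> {
  return agent.chat(msg);
}
```

Because `deploymentHandler` only sees the interface, switching from one framework to another is a one-line change at construction time.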
## Architecture

```
Agent Frameworks (OpenAI SDK, LangChain, etc.)
        ↓
ai-agent-runtime (This Package)
        ↓
Production Infrastructure (Lambda, Docker, etc.)
```

The runtime ensures that regardless of which agent framework you use, deployment and management remain consistent.
## Installation

```bash
npm install @ai-agent-platform/runtime
```

## Quick Start
### Basic Usage
```typescript
import { createAgentRuntime, loadManifestFromFile } from '@ai-agent-platform/runtime';

// Load agent configuration
const manifest = await loadManifestFromFile('./agent.yaml');

// Create runtime with custom tools
const runtime = await createAgentRuntime(manifest, customTools, {
  verbose: true,
  port: 3000,
});

// Chat with the agent
const response = await runtime.chat('Hello, how can you help me?');
console.log(response);
```

### Agent Manifest (agent.yaml)
```yaml
name: my-agent
version: 1.0.0
description: A helpful AI assistant
model: gpt-5
instructions: |
  You are a helpful AI assistant with access to various tools.
  Always be helpful and accurate.
temperature: 0.7
mcpServers:
  - name: filesystem
    url: npx @modelcontextprotocol/server-filesystem
    required: true
requiredEnvVars:
  - OPENAI_API_KEY
tags:
  - assistant
  - helpful
```

## Core Components
### AgentRuntime

Main class for running standalone agents:
```typescript
import { AgentRuntime } from '@ai-agent-platform/runtime';

const runtime = new AgentRuntime({
  model: 'gpt-5',
  temperature: 0.7,
  verbose: true,
});

await runtime.initialize(manifest, customTools);
```

### Custom Tools
Define agent-specific tools:
```typescript
import { z } from 'zod';
import type { ToolDefinition } from '@ai-agent-platform/runtime';

const customTools: ToolDefinition[] = [
  {
    name: 'weather_check',
    description: 'Check current weather for a location',
    parameters: z.object({
      location: z.string().describe('City name or coordinates'),
    }),
    execute: async ({ location }) => {
      // Your weather API integration
      return { temperature: 72, condition: 'sunny' };
    },
  },
];
```

### HTTP Server
Create a REST API for your agent:
```typescript
import express from 'express';
import { createAgentRuntime } from '@ai-agent-platform/runtime';

const app = express();
app.use(express.json());

const runtime = await createAgentRuntime(manifest, customTools);

app.post('/chat', async (req, res) => {
  const { message } = req.body;
  const response = await runtime.chat(message);
  res.json({ response });
});

app.listen(3000);
```

## Built-in Tools
The runtime includes these built-in tools:
- `calculator`: Mathematical calculations
- `read_file`: Read file contents
- `write_file`: Write to files
- `list_directory`: List directory contents
- `shell`: Execute shell commands
- `web_search`: Web search (mock implementation)
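As a rough illustration of how file-oriented built-ins such as `read_file`, `write_file`, and `list_directory` could be dispatched by name, here is a standalone Node sketch. The dispatch table and `runBuiltin` helper are hypothetical; the package's real implementations may differ:

```typescript
import { readFileSync, writeFileSync, readdirSync } from 'node:fs';

// Hypothetical dispatch table for a few of the file built-ins;
// not the package's actual source.
const builtinTools: Record<string, (args: any) => unknown> = {
  // Read a file as UTF-8 text
  read_file: ({ path }) => readFileSync(path, 'utf8'),
  // Write text content to a file
  write_file: ({ path, content }) => {
    writeFileSync(path, content);
    return 'ok';
  },
  // List entries in a directory
  list_directory: ({ path }) => readdirSync(path),
};

// Look up a built-in by name and invoke it with the given arguments.
function runBuiltin(name: string, args: Record<string, unknown>): unknown {
  const tool = builtinTools[name];
  if (!tool) throw new Error(`Unknown built-in tool: ${name}`);
  return tool(args);
}
```

In the real runtime the model selects the tool and supplies the arguments; the sketch only shows the name-to-handler dispatch step.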
## MCP Integration
Connect to Model Context Protocol servers:
```yaml
# In agent.yaml
mcpServers:
  - name: filesystem
    url: npx @modelcontextprotocol/server-filesystem /path/to/allowed/dir
    required: true
  - name: database
    url: node ./custom-mcp-server.js
    env:
      DB_CONNECTION: "postgresql://..."
    required: false
```

## Deployment Modes
### 1. Interactive Development

```typescript
import { startInteractiveMode } from './interactive.js';

await startInteractiveMode(runtime);
```

### 2. HTTP Server
```typescript
import { startServer } from './server.js';

await startServer(runtime, 3000);
```

### 3. AWS Lambda
```typescript
export const handler = async (event, context) => {
  const runtime = await createAgentRuntime(manifest, customTools);
  const response = await runtime.chat(event.message);
  return { statusCode: 200, body: JSON.stringify({ response }) };
};
```

### 4. Docker
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```

## API Reference
### AgentRuntime
```typescript
class AgentRuntime {
  constructor(options: RuntimeOptions);

  // Initialize with manifest and custom tools
  async initialize(manifest: AgentManifest, customTools?: ToolDefinition[]): Promise<void>;

  // Single message chat
  async chat(message: string): Promise<string>;

  // Streaming chat
  async chatStream(message: string, onChunk?: (text: string) => void): Promise<string>;

  // Conversation management
  getHistory(): Message[];
  clearHistory(): void;

  // Add tools after initialization
  addTools(tools: ToolDefinition[]): void;

  // Clean up resources
  async cleanup(): Promise<void>;
}
```

### Utility Functions
```typescript
// Load and validate manifest
async function loadManifestFromFile(filePath: string): Promise<AgentManifest>;
function validateManifest(data: any): AgentManifest;

// Environment variable checking
function checkRequiredEnvVars(requiredVars: string[]): { missing: string[]; present: string[] };

// Convenience function
async function createAgentRuntime(
  manifest: AgentManifest,
  customTools?: ToolDefinition[],
  options?: RuntimeOptions
): Promise<AgentRuntime>;
```

## Environment Variables
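An environment-variable check with the same shape as the `checkRequiredEnvVars` utility can be implemented in a few lines. This is a standalone sketch of the idea, not the package's actual source:

```typescript
// Standalone sketch of an environment-variable check similar in
// shape to checkRequiredEnvVars; not the package's actual source.
function checkRequiredEnvVars(requiredVars: string[]): { missing: string[]; present: string[] } {
  const missing: string[] = [];
  const present: string[] = [];
  for (const name of requiredVars) {
    // Treat unset or empty variables as missing.
    if (process.env[name]) present.push(name);
    else missing.push(name);
  }
  return { missing, present };
}
```

A caller can fail fast at startup when `missing.length > 0`, which surfaces configuration errors before the first request arrives.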
Required environment variables:

```bash
OPENAI_API_KEY=your-openai-api-key

# Optional
NODE_ENV=production
PORT=3000
```

## Error Handling
```typescript
try {
  const runtime = await createAgentRuntime(manifest, customTools);
  const response = await runtime.chat(message);
} catch (error) {
  // In TypeScript the catch parameter is `unknown`, so narrow it first
  const msg = error instanceof Error ? error.message : String(error);
  if (msg.includes('Manifest validation failed')) {
    // Handle validation errors
  } else if (msg.includes('OPENAI_API_KEY')) {
    // Handle missing API key
  } else {
    // Handle other errors
  }
}
```

## Version Updates
When the platform team releases new runtime versions:

```bash
# Update to the latest version
npm update @ai-agent-platform/runtime

# Update to a specific version
npm install @ai-agent-platform/runtime@<version>

# Rebuild and redeploy
npm run build
npm run deploy
```

## TypeScript Support
The package includes full TypeScript definitions:
```typescript
import type {
  AgentManifest,
  ToolDefinition,
  RuntimeOptions,
  Message,
  ChatSession,
} from '@ai-agent-platform/runtime';
```

## Examples
See the examples directory for complete implementation examples:
- Basic Agent: Simple chat agent
- HTTP Server: REST API server
- Lambda Function: Serverless deployment
- Custom Tools: Advanced tool integration
- MCP Integration: External service connections
## Contributing
This package is part of the AI Agent Platform. See the main repository for contribution guidelines.
## License
MIT License - see LICENSE file for details.
For more information, visit the AI Agent Platform documentation.
