ai-agent-fleet
v0.1.78
Framework for building AI agents through natural language and taking them from experiment to production at scale
Agent Fleet Framework
Build production AI agents through natural language. Deploy anywhere. Manage at scale.
What Makes This Different
This framework bridges the gap between experimenting with AI agents and running them in production. Unlike other tools, we don't build yet another agent framework; instead, we make existing frameworks accessible through natural language and handle the operational complexity.
Think of it as: A framework that sits above agent-building tools (OpenAI SDK, LangChain, etc.) to handle scaffolding, deployment, and fleet management.
The Problem We Solve
You can build an agent with OpenAI's SDK in minutes. But then what? How do you:
- Deploy it to production?
- Manage configuration and secrets?
- Monitor and update it?
- Scale from 1 to 100 agents?
This framework handles all of that, while letting you describe agents in plain language.
How It Works
- Natural Language Interface - Describe your agent like you're talking to Claude Code
- Framework Pass-Through - We use OpenAI Agents SDK under the hood (LangChain, AutoGen coming soon)
- Production Orchestration - Automatic deployment to Lambda, Docker, or any infrastructure
- Fleet Management - Monitor, update, and scale your agents from one place
Not Another Agent Framework
We don't compete with agent-building frameworks. We orchestrate them.
Your agents use standard frameworks underneath. No lock-in. Take your code anywhere.
🚀 Key Features
Natural Language Development
- Conversational Scaffolding: Describe agents in plain English; we generate the code
- Framework Pass-Through: Uses OpenAI Agents SDK underneath (more frameworks coming)
- No Code Required: Build production agents without writing a single line
Production-Ready from Day One
- Standalone Agent Deployment: Each agent is an independent, deployable application
- Multiple Deployment Targets: Lambda, Docker, HTTP servers, custom environments
- Automatic Setup: Dependencies and configurations handled automatically
- Tool-Aware Instructions: Agents understand their available tools contextually
Fleet Management at Scale
- Distributed Architecture: Centralized management, distributed deployment
- Version Management: Controlled updates across your entire agent fleet
- Runtime Package: Shared ai-agent-runtime for consistent behavior
- Bulk Operations: Update, deploy, or modify multiple agents at once
Extensibility & Integration
- MCP Server Support: Connect to any Model Context Protocol server
- OAuth Integration: Built-in auth for Google, Slack, and more
- Custom Tools: Add agent-specific tools and business logic
- Template System: Pre-configured templates for common use cases
Supported Models
- OpenAI: gpt-5, gpt-5-mini, gpt-5-nano, o4-mini, o3
- More Coming: Support for Anthropic, Cohere, and open models planned
Quick Start
Installation
```shell
# Install globally via npm
npm install -g ai-agent-fleet

# That's it! Run fleet to get started
fleet
```
First Time Setup
On first run, you'll need to add your OpenAI API key:
```shell
fleet setup
> Enter your OpenAI API key: sk-...
```
Or, if you prefer, create a .env file:
```shell
echo "OPENAI_API_KEY=your-key-here" > ~/.fleet/.env
```
Build from Source (Optional)
```shell
git clone <repository>
cd agent-fleet
npm install
npm run build
npm link   # Makes 'fleet' available globally
```
What's Working Today ✅
Natural Language to Production:
- Describe agents conversationally; the framework generates the code
- Uses OpenAI Agents SDK underneath (more frameworks coming)
- Deploy to Lambda, Docker, or HTTP servers with one command
- Manage fleets of agents from centralized control
Framework Pass-Through:
- OpenAI Agents SDK integration with full feature support
- Built-in tools: calculator, file operations, shell, web search, PDF tools
- MCP server integration for external services (Google, Slack, etc.)
- OAuth support for secure API connections
Production Features:
- Standalone deployable agents
- Multi-environment configuration management
- Automatic dependency resolution
- Version control and updates across fleets
Create Your First Agent
Start by running fleet and describing what you want:
```shell
fleet
> What can I help you build today?
> "I need an agent that monitors my calendar and sends Slack reminders"
> I'll create that for you using OpenAI SDK with Calendar and Slack integrations.
> "sounds good"
> Agent created at ~/.agent-fleet/agents/calendar-reminder
> To test it: cd ~/.agent-fleet/agents/calendar-reminder && npm run dev
```
Test Your Agent
```shell
# Navigate to your agent
cd ~/.agent-fleet/agents/calendar-reminder

# Start interactive chat with your agent
npm run dev
> "What meetings do I have today?"
> [Agent responds with your calendar events]
```
Deploy to Production
```shell
# From your agent directory
npm run deploy:lambda:complete

# Your agent is now live on AWS Lambda!
```
Fleet Management
Use fleet for creating and managing agents:
```shell
fleet

# Creating agents
> "create a code review agent"
> "build a customer support bot with Slack integration"
> "I need a sales qualification agent"

# Managing your fleet
> "show me all my agents"
> "list agents with their locations"
> "update all agents to use gpt-5"
> "add web search to my research agent"
```
Working with Individual Agents
Once created, test and deploy agents from their directories:
```shell
# Test an agent locally
cd ~/.agent-fleet/agents/my-agent
npm run dev
> "Hello, how can you help me?"

# Deploy to production
npm run deploy:lambda:complete   # AWS Lambda
npm run deploy:docker            # Docker
npm start                        # HTTP server
```
Architecture
The framework acts as an orchestration layer above existing agent frameworks:
```
Natural Language Input
            ↓
┌─ Agent Fleet Framework ─┐
│ ├─ NL Interface         │ ← Describe agents conversationally
│ ├─ Scaffolding Engine   │ ← Generates code using frameworks
│ ├─ Fleet Manager        │ ← Monitors and manages agents
│ └─ Deployment Engine    │ ← Handles production deployment
└─────────────────────────┘
            ↓
┌─ Agent Frameworks ──────┐
│ ├─ OpenAI Agents SDK    │ ← Currently supported
│ ├─ LangChain            │ ← Coming soon
│ ├─ AutoGen              │ ← Planned
│ └─ Any Framework        │ ← Future
└─────────────────────────┘
            ↓
┌─ Production Infrastructure ─┐
│ ├─ AWS Lambda               │
│ ├─ Docker Containers        │
│ ├─ HTTP Servers             │
│ └─ Edge Functions           │
└─────────────────────────────┘
```
Key Components
- Agent Fleet CLI: Natural language interface and management tools
- Scaffolding Engine: Generates production-ready agent code
- Runtime Adapter: Bridges different agent frameworks to deployment targets
- Fleet Manager: Centralized control for distributed agents
- Framework Pass-Through: Uses existing frameworks, doesn't reinvent them
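To make the pass-through idea concrete, here is a minimal TypeScript sketch of what a runtime adapter could look like. The `RuntimeAdapter` and `AgentHandle` names and the stub adapter below are illustrative assumptions, not the framework's actual API:

```typescript
// Illustrative sketch only -- these names are assumptions, not the
// framework's real interfaces.

// A deployable agent, reduced to the one capability every target needs.
interface AgentHandle {
  invoke(input: string): Promise<string>;
}

// An adapter wraps one underlying framework behind a common interface,
// so the deployment engine never depends on framework specifics.
interface RuntimeAdapter {
  readonly framework: string;
  load(agentDir: string): Promise<AgentHandle>;
}

// A stub standing in for the OpenAI Agents SDK integration.
class StubOpenAIAdapter implements RuntimeAdapter {
  readonly framework = "openai-agents-sdk";
  async load(agentDir: string): Promise<AgentHandle> {
    return {
      invoke: async (input) => `[${agentDir}] echo: ${input}`,
    };
  }
}

// The fleet manager would only ever see RuntimeAdapter, never the SDK.
async function demo(): Promise<string> {
  const adapter: RuntimeAdapter = new StubOpenAIAdapter();
  const agent = await adapter.load("calendar-reminder");
  return agent.invoke("hello");
}
```

Because every framework integration satisfies the same interface, swapping LangChain or AutoGen in later would not change the deployment or fleet-management layers.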
Configuration
Platform Environment Variables
Create a .env file in the project root:
```shell
OPENAI_API_KEY=your-openai-api-key
AI_MODEL=gpt-5
```
Agent Environment Variables
Each agent has its own .env file:
```shell
# In ~/.agent-fleet/agents/my-agent/.env
OPENAI_API_KEY=your-openai-api-key
NODE_ENV=development
PORT=3000
```
AWS Lambda Base Configuration
To avoid setting up AWS credentials for each agent individually, create a base .env.lambda file in the project root:
```shell
# Copy the template and fill in your AWS credentials
cp .env.lambda.example .env.lambda
```
```shell
# In agent-fleet/.env.lambda
AWS_ACCESS_KEY_ID=your-aws-access-key-id
AWS_SECRET_ACCESS_KEY=your-aws-secret-access-key
AWS_REGION=us-east-1
AWS_ACCOUNT_ID=123456789012
ECR_REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com
LAMBDA_ROLE_ARN=arn:aws:iam::123456789012:role/lambda-execution-role
```
When you generate new agents, they'll automatically inherit these AWS credentials. Existing agents can sync credentials using:
```shell
cd ~/.agent-fleet/agents/my-agent
npm run sync:aws-creds
```
Advanced Features
Fleet-Wide Operations
Use fleet for managing multiple agents at once:
```shell
fleet

# Bulk operations
> "update all agents to gpt-5"
> "add web search to all customer-facing agents"
> "show performance metrics for all agents"
> "list all deployed agents with their endpoints"

# Configuration management
> "set temperature to 0.7 for creative agents"
> "enable debug mode for development agents"
> "add Google Calendar to all personal assistants"
```
Extending Agents (Optional)
For developers who want to customize, you can modify the generated agent code directly in its directory. The framework generates standard TypeScript/JavaScript that you can extend.
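As a rough illustration of that kind of customization, the sketch below defines a custom tool in plain TypeScript. The `Tool` shape and the `qualify_lead` example are hypothetical; the generated code's real extension points may differ:

```typescript
// Illustrative only: this Tool shape is an assumption for the sketch,
// not the framework's actual extension API.
interface Tool {
  name: string;
  description: string;
  execute(args: Record<string, unknown>): Promise<string>;
}

// A custom business-logic tool that could sit alongside the built-ins.
const qualifyLead: Tool = {
  name: "qualify_lead",
  description: "Score an inbound lead from 0-100 based on deal size",
  async execute(args) {
    const dealSize = Number(args.dealSize ?? 0);
    // Toy scoring rule: $1,000 of deal size per point, capped at 100.
    const score = Math.max(0, Math.min(100, Math.round(dealSize / 1000)));
    return JSON.stringify({ score });
  },
};
```

A tool like this is ordinary application code, which is why the README's "no lock-in" claim holds: nothing in it depends on the fleet tooling itself.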
Development
For Framework Contributors
If you're contributing to the framework itself:
```shell
# Build the framework
npm run build
npm run typecheck

# Build runtime package
cd packages/runtime/
npm run build
```
But remember - most users should just use fleet and never need these commands.
Agent Templates
The framework includes templates for common use cases. Access them through the fleet CLI:
```shell
fleet
> "create a developer assistant agent"
> "I need a personal assistant with calendar and email"
> "build me a customer support agent"
```
Production Deployment
Deploy agents from their directories after testing:
```shell
# Navigate to your agent
cd ~/.agent-fleet/agents/my-agent

# Deploy to your target
npm run deploy:lambda:complete   # AWS Lambda with streaming
npm run deploy:docker            # Docker container
npm run build && npm start       # HTTP server

# Check deployment status
npm run status
```
The framework handles all the complexity:
- Automatic dependency bundling
- Environment configuration
- Security and secrets management
- Health checks and monitoring
- Auto-scaling configuration
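For intuition, a Lambda deployment ultimately wraps the agent in a handler along these lines. This is a simplified sketch with a stubbed `runAgent` call, not the handler the framework actually generates:

```typescript
// Sketch of a Lambda-style entry point around an agent. runAgent is a
// stand-in for whatever the generated bundle exports -- an assumption,
// not the framework's real code.
interface LambdaEvent {
  body: string; // JSON: { "input": "..." }
}
interface LambdaResult {
  statusCode: number;
  body: string;
}

// Stand-in for the actual agent runtime call.
async function runAgent(input: string): Promise<string> {
  return `agent reply to: ${input}`;
}

export async function handler(event: LambdaEvent): Promise<LambdaResult> {
  try {
    const { input } = JSON.parse(event.body ?? "{}");
    if (typeof input !== "string") {
      return { statusCode: 400, body: JSON.stringify({ error: "missing input" }) };
    }
    const output = await runAgent(input);
    return { statusCode: 200, body: JSON.stringify({ output }) };
  } catch {
    return { statusCode: 400, body: JSON.stringify({ error: "bad request" }) };
  }
}
```

The point of the deploy scripts is that you never write this glue yourself; the same agent code is wrapped differently for Docker or a plain HTTP server.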
Built-in Tools
Each agent includes these built-in tools:
- calculator: Mathematical calculations
- read_file: Read file contents
- write_file: Write to files
- list_directory: List directory contents
- shell: Execute shell commands
- web_search: Web search via OpenAI Responses API
- analyze_pdf: Analyze PDF documents
- generate_pdf: Generate PDF documents
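To illustrate what the calculator tool does conceptually, here is a toy arithmetic evaluator for `+`, `-`, `*`, `/`, and parentheses. It is a sketch for illustration only, not the built-in tool's actual implementation:

```typescript
// Toy recursive-descent evaluator -- illustrative only, not the real
// calculator tool. Supports +, -, *, / and parentheses (no unary minus).
function calculate(expr: string): number {
  const src = expr.replace(/\s+/g, "");
  let pos = 0;

  function parseExpr(): number { // handles + and - (lowest precedence)
    let value = parseTerm();
    while (src[pos] === "+" || src[pos] === "-") {
      const op = src[pos++];
      const rhs = parseTerm();
      value = op === "+" ? value + rhs : value - rhs;
    }
    return value;
  }

  function parseTerm(): number { // handles * and / (higher precedence)
    let value = parseFactor();
    while (src[pos] === "*" || src[pos] === "/") {
      const op = src[pos++];
      const rhs = parseFactor();
      value = op === "*" ? value * rhs : value / rhs;
    }
    return value;
  }

  function parseFactor(): number { // numbers and parenthesized groups
    if (src[pos] === "(") {
      pos++; // consume "("
      const value = parseExpr();
      pos++; // consume ")"
      return value;
    }
    const match = /^\d+(\.\d+)?/.exec(src.slice(pos));
    if (!match) throw new Error(`unexpected token at position ${pos}`);
    pos += match[0].length;
    return Number(match[0]);
  }

  return parseExpr();
}
```

Splitting expression parsing into two precedence levels is what makes `2 + 3 * 4` evaluate to 14 rather than 20.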
Tool Configuration
Customize tool behavior in your agent.yaml:
```yaml
# Enable specific tools
enabledTools:
  - web_search
  - calculator
  - analyze_pdf

# Configure individual tools
toolConfigurations:
  web_search:
    model: gpt-5-mini         # Override model for web searches
    maxTokens: 3000           # Limit response length
    reasoning_effort: medium  # For reasoning models (minimal/low/medium/high)
  analyze_pdf:
    model: gpt-5              # Model for PDF analysis
    maxTokens: 8000           # Token limit for analysis
```
Available configurations:
- web_search: model (defaults to agent model), maxTokens (default 5000), reasoning_effort
- analyze_pdf: model (defaults to agent model), maxTokens
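The default-plus-override behavior can be pictured with a small sketch. The `ToolConfig` shape and `mergeWebSearchConfig` helper are illustrative names, not the framework's real code; the defaults (agent model, 5000 max tokens for web_search) follow the list above:

```typescript
// Illustrative sketch of how documented defaults could be applied;
// these names are assumptions, not the framework's actual code.
interface ToolConfig {
  model?: string;
  maxTokens?: number;
  reasoning_effort?: "minimal" | "low" | "medium" | "high";
}

// model falls back to the agent's model; maxTokens falls back to 5000.
function mergeWebSearchConfig(agentModel: string, override: ToolConfig = {}): ToolConfig {
  return {
    model: override.model ?? agentModel,
    maxTokens: override.maxTokens ?? 5000,
    reasoning_effort: override.reasoning_effort,
  };
}
```

So an agent.yaml that sets only `maxTokens: 3000` for web_search still inherits the agent's own model for search calls.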
📖 Complete configuration guide: AGENT-CONFIGURATION.md
Contributing
Contributions are welcome! Please read the contributing guidelines before submitting PRs.
- Runtime Package: Shared functionality for all agents
- CLI Platform: Agent creation and management tools
- Templates: Reusable agent configurations
- Documentation: Guides and examples
Documentation
- Agent Configuration Guide: Complete agent.yaml reference
- Deployment Guide: Complete deployment instructions
- Development Guide: Detailed development documentation
- Runtime Package: ai-agent-runtime docs
- Architecture Overview: System architecture details
License
MIT License - see LICENSE file for details.
