AAABuilder - Advanced AI/ML Agent Framework
AAABuilder (Advanced AI Agent Builder) is a comprehensive, production-ready AI/ML development framework that transforms complex AI workflows into scalable APIs. Built with TypeScript-first development, it provides unified support for all major AI/ML model types including LLMs, Computer Vision, Audio Processing, Traditional ML, and more.
What's New in v0.0.1
Major New Features
- Unified Provider System: Single interface for ALL AI/ML model types
- Advanced Debugging Tools: Step-by-step debugging with breakpoints and variable inspection
- Comprehensive Testing Framework: Automated test generation and performance benchmarking
- Project Scaffolding: Interactive project creation with templates
- Auto-Documentation: Generate comprehensive docs in multiple formats
- Interactive Playground: Real-time agent development environment
- Enterprise Security: JWT auth, rate limiting, encryption, and audit logging
- Model Optimization: Quantization, pruning, caching, and GPU acceleration
- Advanced Audio Processing: Real-time streaming, speaker identification, emotion detection
Table of Contents
- Overview
- Key Features
- Architecture
- Quick Start
- Installation
- Usage
- Provider System
- Development Tools
- Examples
- API Reference
- Deployment
- Contributing
- License
Overview
AAABuilder addresses the growing complexity of AI/ML development by providing a unified framework that handles everything from simple text generation to complex multimodal AI systems. Unlike other solutions that focus on specific domains, AAABuilder offers comprehensive support across all AI/ML paradigms with enterprise-grade features.
Why Choose AAABuilder?
| Feature | AAABuilder | LangChain | AutoGen | CrewAI | Flowise |
|---------|------------|-----------|---------|--------|---------|
| Model Support | All AI/ML types | LLM-focused | LLM-focused | LLM-focused | LLM-focused |
| Vision Models | ✓ Full support | ✗ Limited | ✗ Limited | ✗ Limited | ✗ Limited |
| Audio Processing | ✓ Advanced pipeline | ✗ None | ✗ None | ✗ None | ✗ None |
| Traditional ML | ✓ Complete support | ✗ None | ✗ None | ✗ None | ✗ None |
| Debugging | ✓ Step-by-step | ✗ None | ✗ None | ✗ None | ✗ None |
| Testing | ✓ Automated framework | ✗ Manual | ✗ Manual | ✗ Manual | ✗ Limited |
| TypeScript | ✓ First-class | ✗ Python-only | ✗ Python-only | ✗ Python-only | ✗ JavaScript |
| Security | ✓ Enterprise-grade | ✗ Basic | ✗ Basic | ✗ Basic | ✗ Basic |
| Optimization | ✓ Built-in | ✗ None | ✗ None | ✗ None | ✗ None |
Key Features
Unified Provider System
- Single Interface: Access all AI/ML models through one consistent API
- Automatic Discovery: Provider registry with automatic model detection
- Priority Routing: Intelligent provider selection based on capabilities
- Model Optimization: Built-in quantization, pruning, and caching
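The priority-routing idea above can be sketched in a few lines. This is an illustrative sketch, not the actual AAABuilder internals; the `ProviderEntry` shape and `selectProvider` helper are assumptions for the example. The router picks the highest-priority enabled provider that supports the requested capability:

```typescript
// Illustrative sketch of priority routing (not the framework's real registry).
interface ProviderEntry {
  name: string;
  enabled: boolean;
  priority: number;
  capabilities: string[];
}

// Pick the enabled provider with the highest priority that supports the capability.
function selectProvider(registry: ProviderEntry[], capability: string): ProviderEntry | undefined {
  return registry
    .filter((p) => p.enabled && p.capabilities.includes(capability))
    .sort((a, b) => b.priority - a.priority)[0];
}

const registry: ProviderEntry[] = [
  { name: 'whisper', enabled: true, priority: 25, capabilities: ['speech-to-text'] },
  { name: 'custom-asr', enabled: true, priority: 10, capabilities: ['speech-to-text'] },
  { name: 'yolo', enabled: false, priority: 20, capabilities: ['object-detection'] },
];

console.log(selectProvider(registry, 'speech-to-text')?.name); // whisper
```

Disabled providers are never selected, so toggling `enabled` in the config is enough to take a provider out of rotation without removing it.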
Advanced Audio Processing
- Real-time Streaming: WebRTC, WebSocket, gRPC backends
- Speaker Identification: ECAPA-TDNN, X-Vector, Wav2Vec2 models
- Emotion Detection: 7-emotion classification with arousal-valence
- Audio Enhancement: RNNoise, Facebook Denoiser, NVIDIA NoiseRed
- Speech Processing: OpenAI Whisper, custom ASR/TTS models
Computer Vision
- Object Detection: YOLO v5/v8/v10, Faster R-CNN, SSD
- Image Classification: ResNet, EfficientNet, MobileNet, Vision Transformers
- Segmentation: U-Net, DeepLab, Mask R-CNN
- OCR: Tesseract, EasyOCR, PaddleOCR integration
- Face Recognition: FaceNet, ArcFace, DeepFace
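When working with detector outputs like the YOLO models above, a standard post-processing step is intersection-over-union (IoU), used to filter duplicate boxes. This helper is illustrative and not part of the framework API:

```typescript
// Illustrative helper (not a framework API): intersection-over-union between
// two axis-aligned bounding boxes, the standard overlap metric in detection.
interface Box { x1: number; y1: number; x2: number; y2: number; }

function iou(a: Box, b: Box): number {
  // Width/height of the intersection rectangle, clamped at zero.
  const ix = Math.max(0, Math.min(a.x2, b.x2) - Math.max(a.x1, b.x1));
  const iy = Math.max(0, Math.min(a.y2, b.y2) - Math.max(a.y1, b.y1));
  const inter = ix * iy;
  const areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
  const areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
  return inter === 0 ? 0 : inter / (areaA + areaB - inter);
}

console.log(iou({ x1: 0, y1: 0, x2: 2, y2: 2 }, { x1: 1, y1: 1, x2: 3, y2: 3 })); // ≈ 0.142857
```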
Traditional Machine Learning
- Scikit-learn: Classification, regression, clustering, dimensionality reduction
- XGBoost: Gradient boosting for structured data
- LightGBM: Light gradient boosting machine
- TensorFlow: Custom neural networks and Keras models
- Model Optimization: Hyperparameter tuning and model compression
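The idea behind hyperparameter tuning can be shown with a minimal grid search. This is a hypothetical sketch (the `gridSearch` helper and toy objective are invented for illustration), not AAABuilder's tuning implementation:

```typescript
// Hypothetical sketch of grid search: enumerate every combination in a small
// parameter grid and keep the best-scoring one.
type Params = Record<string, number>;

function gridSearch(
  grid: Record<string, number[]>,
  score: (p: Params) => number
): { best: Params; bestScore: number } {
  // Build the cross-product of all parameter values.
  let combos: Params[] = [{}];
  for (const [key, values] of Object.entries(grid)) {
    combos = combos.flatMap((c) => values.map((v) => ({ ...c, [key]: v })));
  }
  let best = combos[0];
  let bestScore = -Infinity;
  for (const c of combos) {
    const s = score(c);
    if (s > bestScore) { bestScore = s; best = c; }
  }
  return { best, bestScore };
}

// Toy objective that peaks at n_estimators=100, max_depth=10.
const result = gridSearch(
  { n_estimators: [50, 100, 200], max_depth: [5, 10] },
  (p) => -Math.abs(p.n_estimators - 100) - 10 * Math.abs(p.max_depth - 10)
);
console.log(result.best); // { n_estimators: 100, max_depth: 10 }
```

Real tuning would replace the toy objective with cross-validated model scores, but the search loop is the same shape.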
Development Tools
Debugging Framework
import { AgentDebugger } from './src/debug';
// Note: `debugger` is a reserved word in TypeScript/JavaScript, so use a different name.
const agentDebugger = new AgentDebugger(providerRouter);
// Start debug session
const sessionId = await agentDebugger.startDebugSession(agentContent, inputData);
// Set breakpoints
agentDebugger.setBreakpoint(sessionId, 'step-id');
// Step through execution
const result = await agentDebugger.stepNext(sessionId);
// Inspect variables
const variables = agentDebugger.getSessionState(sessionId)?.variables;
Testing Framework
import { AgentTester } from './src/testing';
const tester = new AgentTester(providerRouter);
// Run test suite
const result = await tester.runTestSuite({
name: 'My Agent Tests',
agentContent: agentFile,
testCases: [
{
name: 'Basic test',
input: { message: 'Hello' },
expectedOutput: { response: 'Hello back!' }
}
]
});
// Performance benchmarking
const benchmark = await tester.benchmarkAgent(agentContent, 100);
Interactive Playground
# Start interactive development environment
npm run playground
# Available commands:
# load <file> - Load agent file
# debug <file> - Start debugging session
# test <file> - Run tests
# validate <file> - Validate agent
# docs <file>     - Generate documentation
Project Scaffolding
# Create new project with interactive wizard
npm run create
# Create from template
npm run create:interactive
# List available templates
npm run templates
Auto-Documentation
# Generate comprehensive documentation
npm run docs
# Watch mode for development
npm run docs:watch
# Generate OpenAPI spec
npm run docs:openapi
# Generate HTML documentation
npm run docs:html
Enterprise Security
- JWT Authentication: Secure token-based authentication
- Role-Based Access Control: Granular permissions system
- Rate Limiting: Configurable request throttling
- Encryption: Data encryption at rest and in transit
- Audit Logging: Comprehensive security event tracking
- API Key Management: Secure API key handling
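As a minimal sketch of the rate-limiting feature above, here is a fixed-window limiter using the same `{ windowMs, max }` shape that appears in the security configuration later in this README. The `RateLimiter` class itself is illustrative, not the framework's implementation:

```typescript
// Illustrative fixed-window rate limiter (not the framework's built-in one),
// using the { windowMs, max } shape from the security configuration.
interface RateLimitOptions { windowMs: number; max: number; }

class RateLimiter {
  private hits = new Map<string, { windowStart: number; count: number }>();
  constructor(private opts: RateLimitOptions) {}

  // Returns true if the request is allowed, false if throttled.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.opts.windowMs) {
      // First request in a fresh window: reset the counter.
      this.hits.set(key, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.opts.max;
  }
}

const limiter = new RateLimiter({ windowMs: 60_000, max: 2 });
console.log(limiter.allow('client-a', 0));      // true
console.log(limiter.allow('client-a', 1_000));  // true
console.log(limiter.allow('client-a', 2_000));  // false (3rd hit in window)
console.log(limiter.allow('client-a', 61_000)); // true (new window)
```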
Architecture
AAABuilder/
├── src/ # TypeScript source code
│ ├── providers/ # Unified provider system
│ │ ├── ProviderRouter.ts # Central router
│ │ ├── ModelRegistry.ts # Provider registry
│ │ ├── ModelOptimizer.ts # Optimization framework
│ │ ├── AudioProviders.ts # Audio processing
│ │ ├── Traditional ML/ # Scikit-learn, XGBoost, etc.
│ │ ├── Computer Vision/ # YOLO, ResNet, etc.
│ │ └── Audio Processing/ # Whisper, emotion detection
│ ├── debug/ # Debugging tools
│ │ └── AgentDebugger.ts # Step-by-step debugging
│ ├── testing/ # Testing framework
│ │ └── AgentTester.ts # Automated testing
│ ├── scaffolding/ # Project scaffolding
│ │ └── ProjectScaffolder.ts # Template system
│ ├── documentation/ # Auto-documentation
│ │ └── DocumentationGenerator.ts
│ ├── playground/ # Interactive development
│ │ └── AgentPlayground.ts # Real-time environment
│ └── security/ # Security types
│ └── types.ts # Auth & authorization
├── lib/ # Compiled JavaScript
├── bin/ # CLI executables
├── examples/ # Example agents
├── templates/ # Project templates
└── plugins/ # Plugin system
Quick Start
Prerequisites
- Node.js 18.0.0 or higher
- npm 8.0.0 or higher
- TypeScript 5.3+ (recommended)
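A quick way to enforce the Node.js 18 minimum at startup is a semver major check. This snippet is an illustrative sketch, not something the framework ships:

```typescript
// Illustrative startup check: does the running Node.js meet a minimum major version?
function meetsMinimum(version: string, minMajor: number): boolean {
  const major = parseInt(version.split('.')[0], 10);
  return major >= minMajor;
}

console.log(meetsMinimum(process.versions.node, 18)); // true on Node 18+
console.log(meetsMinimum('16.20.0', 18));             // false
```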
Installation
# Install globally
npm install -g aaab
# Verify installation
aaab --version
Your First Agent
# Create new project
aaab create my-ai-project
# Navigate to project
cd my-ai-project
# Set API keys
export OPENAI_API_KEY="your-api-key-here"
# Start development server
npm run dev
Basic Agent Example
@agent hello-world v1
description: "Simple greeting agent"
secrets:
- name: OPENAI_API_KEY
type: env
value: OPENAI_API_KEY
variables:
name:
type: input
path: name
required: true
steps:
- id: greet
type: llm
provider: openai
model: gpt-4o
prompt: "Greet {name} warmly"
save: greeting
outputs:
message: "{greeting}"
@end
Installation
Global Installation
npm install -g aaab
Local Development
# Clone the repository
git clone https://github.com/Init-Abdellm/AAABuilder.git
cd AAABuilder
# Install dependencies
npm install
# Build the project
npm run build
# Start development server
npm run dev
Docker Installation
# Pull the image
docker pull aaabuilder/aaabuilder:latest
# Run the container
docker run -p 5000:5000 \
-e OPENAI_API_KEY=your-key \
aaabuilder/aaabuilder:latest
Usage
CLI Commands
# Core commands
aaab run <file> # Execute agent file
aaab validate <file> # Validate agent syntax
aaab serve --port 5000 # Start HTTP server
# Development tools
aaab debug <file> # Start debugging session
aaab test <file> # Run test suite
aaab playground # Start interactive playground
aaab create <project> # Create new project
aaab docs <file> # Generate documentation
# Provider management
aaab providers --list # List available providers
aaab providers --test # Test provider connections
aaab providers --optimize # Optimize model performance
Advanced Usage
# Vision operations
aaab vision --classify image.jpg
aaab vision --detect image.jpg
aaab vision --ocr document.pdf
# Audio operations
aaab audio --stt audio.wav
aaab audio --tts "Hello world"
aaab audio --emotion audio.wav
aaab audio --speaker audio.wav
# Machine learning
aaab ml --train config.json
aaab ml --predict model.pkl data.json
aaab ml --optimize model.pkl
# Vector databases
aaab vector-db --create my-db
aaab vector-db --search my-db docs "query"
Provider System
Unified Provider Interface
import { createProviderRouter, ProviderConfig } from './src/providers';
// Configure providers
const config: ProviderConfig = {
// Traditional ML
scikitLearn: { enabled: true, priority: 5 },
xgboost: { enabled: true, priority: 10 },
lightgbm: { enabled: true, priority: 8 },
tensorflow: { enabled: true, priority: 15 },
// Computer Vision
yolo: { enabled: true, priority: 20 },
imageClassification: { enabled: true, priority: 15 },
imageSegmentation: { enabled: true, priority: 18 },
visionTransformer: { enabled: true, priority: 25 },
// Audio Processing
whisper: { enabled: true, priority: 25 },
audioEnhancement: { enabled: true, priority: 20 },
realTimeAudio: { enabled: true, priority: 25 },
speakerEmotion: { enabled: true, priority: 22 }
};
// Create router
const router = await createProviderRouter(config);
// Execute any model request
const response = await router.executeRequest({
model: 'yolo-v8-detection',
input: imageBuffer,
parameters: { confidence: 0.5 }
});
Model Optimization
// Get optimization recommendations
const recommendations = await router.getOptimizationRecommendations('my-model');
// Optimize model
const result = await router.optimizeModel('my-model', {
strategies: ['quantization', 'caching', 'batching'],
quantization: {
type: 'int8',
targetAccuracyLoss: 0.05
},
caching: {
strategy: 'lru',
maxSize: 1000
}
});
console.log(`Size reduction: ${result.metrics.sizeReduction}%`);
console.log(`Speed improvement: ${result.metrics.speedImprovement}x`);
Audio Processing Pipeline
// Real-time audio processing
const audioPipeline = await router.createAudioPipeline([
{ type: 'noise-reduction', model: 'rnnoise' },
{ type: 'speaker-identification', model: 'ecapa-tdnn' },
{ type: 'emotion-detection', model: 'wav2vec2-emotion' }
]);
// Process audio stream
const results = await audioPipeline.processStream(audioStream, {
chunkSize: 1024,
sampleRate: 16000,
realTime: true
});
Development Tools
Debugging Framework
The debugging system provides step-by-step execution with breakpoints, variable inspection, and execution tracing.
import { AgentDebugger } from './src/debug';
// Note: `debugger` is a reserved word in TypeScript/JavaScript, so use a different name.
const agentDebugger = new AgentDebugger(providerRouter);
// Start debug session
const sessionId = await agentDebugger.startDebugSession(agentContent, inputData);
// Set breakpoint on specific step
agentDebugger.setBreakpoint(sessionId, 'process-step');
// Execute until breakpoint
await agentDebugger.continue(sessionId);
// Inspect variables
const variables = agentDebugger.getVariable(sessionId, 'result');
// Step through execution
const stepResult = await agentDebugger.stepNext(sessionId);
// Get execution trace
const trace = agentDebugger.getExecutionTrace(sessionId);
Testing Framework
Comprehensive testing with automated test generation, performance benchmarking, and mock providers.
import { AgentTester } from './src/testing';
const tester = new AgentTester(providerRouter);
// Run test suite
const testSuite = {
name: 'My Agent Tests',
agentContent: agentFile,
testCases: [
{
name: 'Basic functionality',
input: { message: 'Hello' },
expectedOutput: { response: 'Hello back!' },
timeout: 5000
},
{
name: 'Error handling',
input: { message: '' },
shouldFail: true,
expectedError: 'Message is required'
}
]
};
const results = await tester.runTestSuite(testSuite);
// Performance benchmarking
const benchmark = await tester.benchmarkAgent(agentContent, {
iterations: 100,
concurrency: 10
});Interactive Playground
Real-time development environment for agent creation and testing.
# Start playground
npm run playground
# Available commands:
agent> load examples/chatbot.agent
agent> debug examples/chatbot.agent
agent> test examples/chatbot.agent
agent> validate examples/chatbot.agent
agent> docs examples/chatbot.agent --format html
agent> help
Project Scaffolding
Interactive project creation with templates and best practices.
# Interactive project creation
npm run create
# Create from specific template
npm run create:interactive
# List available templates
npm run templates
# Available templates:
# - basic-agent
# - chatbot
# - vision-analyzer
# - audio-processor
# - ml-pipeline
# - api-server
# - webhook-processor
Auto-Documentation
Generate comprehensive documentation in multiple formats.
# Generate markdown documentation
npm run docs
# Generate HTML documentation
npm run docs:html
# Generate OpenAPI specification
npm run docs:openapi
# Watch mode for development
npm run docs:watch
# Validate documentation
npm run docs:validate
Examples
Simple Chat Agent
@agent chatbot v1
description: "AI-powered chatbot"
secrets:
- name: OPENAI_API_KEY
type: env
value: OPENAI_API_KEY
variables:
message:
type: input
path: message
required: true
steps:
- id: process
type: llm
provider: openai
model: gpt-4o
prompt: "Respond to: {message}"
save: response
outputs:
reply: "{response}"
@end
Vision Analysis Agent
@agent vision-analyzer v1
description: "Comprehensive image analysis"
variables:
image:
type: input
path: image
required: true
steps:
- id: detect_objects
type: vision
provider: yolo
model: yolo-v8-detection
input: "{image}"
save: objects
- id: classify_image
type: vision
provider: image-classification
model: resnet-50
input: "{image}"
save: classification
- id: extract_text
type: vision
provider: ocr
model: tesseract
input: "{image}"
save: text
- id: analyze
type: llm
provider: openai
model: gpt-4o-vision
prompt: |
Analyze this image:
Objects detected: {objects}
Classification: {classification}
Text found: {text}
save: analysis
outputs:
objects: "{objects}"
classification: "{classification}"
text: "{text}"
analysis: "{analysis}"
@end
Audio Processing Agent
@agent audio-processor v1
description: "Advanced audio analysis and enhancement"
variables:
audio:
type: input
path: audio
required: true
steps:
- id: enhance_audio
type: audio
provider: audio-enhancement
model: rnnoise
input: "{audio}"
save: enhanced_audio
- id: transcribe
type: audio
provider: whisper
model: whisper-large-v3
input: "{enhanced_audio}"
save: transcription
- id: identify_speaker
type: audio
provider: speaker-identification
model: ecapa-tdnn
input: "{enhanced_audio}"
save: speaker
- id: detect_emotion
type: audio
provider: emotion-detection
model: wav2vec2-emotion
input: "{enhanced_audio}"
save: emotion
outputs:
transcription: "{transcription}"
speaker: "{speaker}"
emotion: "{emotion}"
enhanced_audio: "{enhanced_audio}"
@end
Machine Learning Pipeline
@agent ml-pipeline v1
description: "Traditional ML model training and prediction"
variables:
data:
type: input
path: data
required: true
operation:
type: input
path: operation
required: true
steps:
- id: train_model
type: ml
provider: scikit-learn
model: random-forest
action: train
input: "{data}"
parameters:
n_estimators: 100
max_depth: 10
save: model
when: "{operation} == 'train'"
- id: predict
type: ml
provider: scikit-learn
model: "{model}"
action: predict
input: "{data}"
save: predictions
when: "{operation} == 'predict'"
outputs:
model: "{model}"
predictions: "{predictions}"
@end
API Reference
Agent File Format
@agent agent-name v1
description: "Agent description"
version: "1.0.0"
# Trigger configuration
trigger:
type: http | schedule | webhook
method: GET | POST | PUT | DELETE
path: /endpoint
schedule: "0 */5 * * * *" # Cron expression
# Secrets management
secrets:
- name: API_KEY
type: env | file | vault
value: OPENAI_API_KEY
required: true
# Input variables
variables:
input_var:
type: input | file | url
path: data.field
required: true
default: "default value"
validation:
- type: string
- minLength: 1
- maxLength: 1000
# Processing steps
steps:
- id: step_name
type: llm | vision | audio | ml | http | condition
provider: openai | anthropic | gemini | yolo | whisper
model: gpt-4o | claude-3 | gemini-pro | yolo-v8
input: "{input_var}"
prompt: "Process: {input_var}"
parameters:
temperature: 0.7
max_tokens: 1000
save: result
retry:
attempts: 3
delay: 1000
timeout: 30000
# Conditional execution
- id: conditional_step
type: condition
condition: "{result.confidence} > 0.8"
steps:
- id: high_confidence
type: llm
provider: openai
model: gpt-4o
prompt: "High confidence result: {result}"
save: enhanced_result
# Output formatting
outputs:
response: "{result}"
confidence: "{result.confidence}"
metadata:
processing_time: "{execution_time}"
model_used: "{model}"
@end
Provider Configuration
interface ProviderConfig {
// Traditional ML
scikitLearn?: ProviderSettings;
xgboost?: ProviderSettings;
lightgbm?: ProviderSettings;
tensorflow?: ProviderSettings;
// Computer Vision
yolo?: ProviderSettings;
imageClassification?: ProviderSettings;
imageSegmentation?: ProviderSettings;
visionTransformer?: ProviderSettings;
// Audio Processing
whisper?: ProviderSettings;
audioEnhancement?: ProviderSettings;
realTimeAudio?: ProviderSettings;
speakerEmotion?: ProviderSettings;
// Language Models
openai?: ProviderSettings;
anthropic?: ProviderSettings;
gemini?: ProviderSettings;
ollama?: ProviderSettings;
}
interface ProviderSettings {
enabled: boolean;
priority?: number;
config?: Record<string, any>;
timeout?: number;
retries?: number;
}
Security Configuration
interface SecurityConfig {
auth: {
enableAuth: boolean;
jwtSecret: string;
apiKeys: string[];
tokenExpiry: string;
};
cors: {
origin: string[];
credentials: boolean;
methods: string[];
};
rateLimit: {
windowMs: number;
max: number;
};
encryption: {
algorithm: string;
key: string;
};
audit: {
enabled: boolean;
logLevel: string;
};
}
Deployment
Docker Deployment
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
RUN npm run build
EXPOSE 5000
CMD ["npm", "start"]
# Build and run
docker build -t aaabuilder .
docker run -p 5000:5000 \
-e OPENAI_API_KEY=your-key \
-e NODE_ENV=production \
aaabuilder
Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: aaabuilder
spec:
replicas: 3
selector:
matchLabels:
app: aaabuilder
template:
metadata:
labels:
app: aaabuilder
spec:
containers:
- name: aaabuilder
image: aaabuilder:latest
ports:
- containerPort: 5000
env:
- name: OPENAI_API_KEY
valueFrom:
secretKeyRef:
name: api-keys
key: openai
Environment Variables
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| OPENAI_API_KEY | OpenAI API key | Yes | - |
| ANTHROPIC_API_KEY | Anthropic API key | No | - |
| GEMINI_API_KEY | Google Gemini API key | No | - |
| NODE_ENV | Environment | No | development |
| PORT | Server port | No | 5000 |
| JWT_SECRET | JWT signing secret | No | random |
| LOG_LEVEL | Logging level | No | info |
| CORS_ORIGIN | CORS origins | No | * |
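The defaults in the table above can be applied with a small config loader. This is an illustrative sketch (the `loadEnv` helper and `ServerEnv` shape are assumptions, not framework APIs):

```typescript
// Illustrative config loader applying the defaults from the environment
// variable table: NODE_ENV=development, PORT=5000, LOG_LEVEL=info, CORS_ORIGIN=*.
interface ServerEnv { nodeEnv: string; port: number; logLevel: string; corsOrigin: string; }

function loadEnv(env: Record<string, string | undefined>): ServerEnv {
  return {
    nodeEnv: env.NODE_ENV ?? 'development',
    port: Number(env.PORT ?? 5000),
    logLevel: env.LOG_LEVEL ?? 'info',
    corsOrigin: env.CORS_ORIGIN ?? '*',
  };
}

console.log(loadEnv({}));                                          // all defaults
console.log(loadEnv({ PORT: '8080', NODE_ENV: 'production' }).port); // 8080
```

In a real server you would call `loadEnv(process.env)` once at startup and pass the result down rather than reading `process.env` throughout the code.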
Contributing
We welcome contributions from the community! Please read our contributing guidelines before submitting pull requests.
Development Setup
# Fork and clone the repository
git clone https://github.com/your-username/AAABuilder.git
cd AAABuilder
# Install dependencies
npm install
# Run tests
npm test
# Start development server
npm run dev
# Run playground
npm run playground
Code Style
- Use TypeScript for all new code
- Follow ESLint and Prettier configurations
- Write comprehensive tests
- Update documentation for new features
- Use conventional commit messages
Testing
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run specific test suites
npm test -- --testNamePattern="Provider"
npm test -- --testPathPattern="debug"
# Generate coverage report
npm run test:coverage
Pull Request Process
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes
- Add tests if applicable
- Update documentation
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
License Summary
- License: Apache License 2.0
- Commercial Use: ✓ Allowed
- Modification: ✓ Allowed
- Distribution: ✓ Allowed
- Patent Use: ✓ Allowed
- Private Use: ✓ Allowed
- Liability: ✗ Limited
- Warranty: ✗ Limited
For more information about the Apache License 2.0, visit https://www.apache.org/licenses/LICENSE-2.0.
AAABuilder - Transforming AI/ML development with comprehensive, production-ready solutions.
Built with ❤️ by INIT-ABDELLM
What's Next?
- Model Marketplace: Browse and install pre-trained models
- Workflow Designer: Visual drag-and-drop agent builder
- Analytics Dashboard: Real-time performance monitoring
- Multi-language Support: Python, Rust, and Go bindings
- Cloud Integration: AWS, GCP, Azure native support
- AutoML: Automated model selection and hyperparameter tuning
Stay tuned for updates!
