dank-ai
v1.0.50
Dank Agent Service - Docker-based AI agent orchestration platform
🚀 Dank Agent Service
Docker-based AI Agent Orchestration Platform
Dank is a powerful Node.js service that allows you to define, deploy, and manage AI agents using Docker containers. Each agent runs in its own isolated environment with configurable resources, LLM providers, and custom handlers. Built for production with comprehensive CI/CD support and Docker registry integration.
🌐 Website: https://ai-dank.xyz
📦 NPM Package: https://www.npmjs.com/package/dank-ai
☁️ Cloud Deployment: https://cloud.ai-dank.xyz - Serverless for AI Agents
☁️ Deploy to the Cloud
Serverless for AI Agents - Deploy your Dank agents seamlessly to the cloud with zero infrastructure management.
👉 https://cloud.ai-dank.xyz - Serverless cloud deployment for Dank. Scale your AI agents automatically, pay only for what you use, and focus on building great agents instead of managing servers.
✨ Features
- 🤖 Multi-LLM Support: OpenAI, Anthropic, Cohere, Ollama, and custom providers
- 🐳 Docker Orchestration: Isolated agent containers with resource management
- ⚡ Easy Configuration: Define agents with simple JavaScript configuration
- 📦 NPM Package Support: Use any npm package in your handlers with top-level imports
- 📘 TypeScript Ready: Full support for TypeScript and compiled projects
- 📊 Real-time Monitoring: Built-in health checks and status monitoring
- 🔧 Flexible Handlers: Custom event handlers for agent outputs and errors
- 🎯 CLI Interface: Powerful command-line tools for agent management
- 🏗️ Production Builds: Build and push Docker images to registries with custom naming
- 🔄 CI/CD Ready: Seamless integration with GitHub Actions, GitLab CI, and other platforms
🚀 Quick Start
Prerequisites
- Node.js 16+ installed
- Docker Desktop or Docker Engine (auto-installed if missing)
- API keys for your chosen LLM provider(s)
🆕 Auto-Docker Installation: Dank automatically detects, installs, and starts Docker if unavailable. No manual setup required!
Installation & Setup
# 1. Install globally
npm install -g dank-ai
# 2. Initialize project
mkdir my-agent-project && cd my-agent-project
dank init my-agent-project
# 3. Set environment variables
export OPENAI_API_KEY="your-api-key"
# 4. Configure agents in dank.config.js
# (see Agent Configuration section below)
# 5. Start agents
dank run
# 6. Monitor
dank status --watch
dank logs assistant --follow

my-project/
├── dank.config.js # Agent configuration
├── .dankignore # Build ignore patterns (optional)
├── agents/ # Custom agent code (optional)
│ └── example-agent.js
└── .dank/ # Generated files
├── project.yaml # Project state
└── logs/ # Agent logs

📋 CLI Commands
Core Commands
dank run # Start all defined agents
dank run --config <path> # Use custom config path (for compiled projects)
dank status [--watch] # Show agent status (live updates)
dank stop [agents...] # Stop specific agents or --all
dank logs [agent] [--follow] # View agent logs
dank init [name] # Initialize new project
dank build # Build Docker images
dank build:prod # Build production images
dank clean # Clean up Docker resources

Production Build Options
dank build:prod --push # Build and push to registry
dank build:prod --tag v1.0.0 # Custom tag
dank build:prod --registry ghcr.io # GitHub Container Registry
dank build:prod --namespace mycompany # Custom namespace
dank build:prod --tag-by-agent # Use agent name as tag
dank build:prod --force # Force rebuild
dank build:prod --output-metadata <file> # Generate deployment metadata
dank build:prod --json # JSON output

💡 Push Control: The --push option is the only way to push images. Agent config defines image naming; the CLI controls pushing.
🤖 Agent Configuration
Basic Setup
// Import npm packages at the top - they'll be available in handlers
const axios = require('axios');
const { format } = require('date-fns');
const { processData } = require('./utils'); // Local files work too
const { createAgent } = require('dank-ai');
const { v4: uuidv4 } = require('uuid');
module.exports = {
name: 'my-project',
agents: [
createAgent('assistant')
.setId(uuidv4()) // Required: Unique UUIDv4
.setLLM('openai', {
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-3.5-turbo',
temperature: 0.7
})
.setPrompt('You are a helpful assistant.')
.setPromptingServer({ port: 3000 })
.setInstanceType('small') // Cloud only: 'small', 'medium', 'large', 'xlarge'
.addHandler('request_output', async (data) => {
// Use imported packages directly in handlers
console.log(`[${format(new Date(), 'yyyy-MM-dd HH:mm')}] Response:`, data.response);
await axios.post('https://api.example.com/log', { response: data.response });
processData(data);
})
]
};

📦 NPM Packages: Any packages you require() at the top of your config are automatically available in your handlers. Just make sure they're listed in your package.json.
For ESM-only packages that don't support require(), use dynamic import():
// Dynamic imports return Promises - define at top level
const uniqueString = import("unique-string").then((m) => m.default);
const chalk = import("chalk").then((m) => m.default);
// Multiline .then() is also supported
const ora = import("ora").then((m) => {
return m.default;
});
module.exports = {
agents: [
createAgent('my-agent')
.addHandler('output', async (data) => {
// Await the promise to get the actual module
const generateString = await uniqueString;
const colors = await chalk;
console.log(colors.green(`ID: ${generateString()}`));
})
]
};

Note: Dynamic imports are asynchronous, so you must await them inside your handlers.
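One way to keep these cached import Promises tidy is a small loader helper. This is a sketch, not part of the dank-ai API; node:crypto stands in for an ESM-only package so the snippet is self-contained:

```javascript
// Cache each dynamic import Promise once at module scope; await it in handlers.
// lazy() is a hypothetical helper, not a dank-ai export.
const lazy = (specifier) => {
  let cached;
  return () => (cached ??= import(specifier).then((m) => m.default ?? m));
};

const loadCrypto = lazy('node:crypto'); // stand-in for an ESM-only package

async function handler() {
  const crypto = await loadCrypto(); // resolved once, reused on later calls
  return crypto.randomUUID();
}

handler().then((id) => console.log(id.length)); // 36
```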
Supported LLM Providers
| Provider | Configuration |
|----------|-------------|
| OpenAI | .setLLM('openai', { apiKey, model, temperature, maxTokens }) |
| Anthropic | .setLLM('anthropic', { apiKey, model, maxTokens }) |
| Ollama | .setLLM('ollama', { baseURL, model }) |
| Cohere | .setLLM('cohere', { apiKey, model, temperature }) |
| Hugging Face | .setLLM('huggingface', { apiKey, model }) |
| Custom | .setLLM('custom', { baseURL, apiKey, model }) |
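For instance, the same builder call switches an agent between a hosted and a local provider (a config sketch; the model names are illustrative):

```javascript
const { createAgent } = require('dank-ai');

// Hosted provider: authenticated via an environment variable
const hosted = createAgent('hosted-agent')
  .setLLM('openai', { apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4', maxTokens: 1024 });

// Local provider: Ollama takes a baseURL instead of an API key
const local = createAgent('local-agent')
  .setLLM('ollama', { baseURL: 'http://localhost:11434', model: 'llama3' });
```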
HTTP Routes
HTTP automatically enables when you add routes:
createAgent('api-agent')
.setPromptingServer({ port: 3000 })
.post('/hello', (req, res) => {
res.json({ message: 'Hello, World!', received: req.body });
})
.get('/status', (req, res) => {
res.json({ status: 'ok' });
});

Event Handlers
🆕 Auto-Detection: Dank automatically enables features based on usage:
- Event Handlers: Auto-enabled with .addHandler()
- Direct Prompting: Auto-enabled with .setPrompt() + .setLLM()
- HTTP API: Auto-enabled with .get(), .post(), etc.
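The detection logic can be pictured as a simple inspection of what the builder recorded (an illustrative sketch, not the actual dank-ai internals):

```javascript
// Hypothetical view of auto-detection: each feature is inferred from
// which builder methods were called on the agent.
function detectFeatures(agent) {
  return {
    eventHandlers: Object.keys(agent.handlers ?? {}).length > 0,
    directPrompting: Boolean(agent.prompt && agent.llm),
    httpApi: (agent.routes ?? []).length > 0,
  };
}

// An agent with a prompt, an LLM, and one handler, but no routes:
const agent = {
  prompt: 'You are helpful.',
  llm: { provider: 'openai' },
  handlers: { output: [() => {}] },
  routes: [],
};
console.log(detectFeatures(agent)); // { eventHandlers: true, directPrompting: true, httpApi: false }
```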
Direct Prompting Events (request_output)
agent
// Main response event
.addHandler('request_output', (data) => {
console.log('Response:', data.response);
})
// Modify prompt before LLM processing
.addHandler('request_output:start', (data) => {
return { prompt: `Enhanced: ${data.prompt}` };
})
// Modify response before returning
.addHandler('request_output:end', (data) => {
return { response: `${data.response}\n\n---\nGenerated by Dank` };
})
// Error handling
.addHandler('request_output:error', (data) => {
console.error('Error:', data.error);
});

Event Flow: request_output:start → LLM Processing → request_output → request_output:end → Response Sent
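The flow can be sketched as a small pipeline (illustrative only; the real runtime differs): the :start handler may rewrite the prompt, and the :end handler may rewrite the response.

```javascript
// Simulated request_output flow with a fake LLM (runPipeline is hypothetical).
async function runPipeline(handlers, prompt, llm) {
  const started = (await handlers['request_output:start']?.({ prompt })) ?? {};
  const finalPrompt = started.prompt ?? prompt;   // :start may rewrite the prompt
  const response = await llm(finalPrompt);        // "LLM Processing"
  await handlers['request_output']?.({ prompt: finalPrompt, response });
  const ended = (await handlers['request_output:end']?.({ response })) ?? {};
  return ended.response ?? response;              // :end may rewrite the response
}

const handlers = {
  'request_output:start': (d) => ({ prompt: `Enhanced: ${d.prompt}` }),
  'request_output:end': (d) => ({ response: `${d.response} [done]` }),
};

runPipeline(handlers, 'hi', async (p) => `echo(${p})`)
  .then((out) => console.log(out)); // echo(Enhanced: hi) [done]
```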
Passing Custom Data to Handlers
You can pass any custom data in the request body to the /prompt endpoint, and it will be available in your handlers via data.params. This enables powerful use cases like user authentication, conversation tracking, RAG (Retrieval-Augmented Generation), and custom lookups.
Client Request:
// POST /prompt
{
"prompt": "What's the weather today?",
"userId": "user-12345",
"conversationId": "conv-abc-xyz",
"sessionId": "sess-789",
"userPreferences": {
"language": "en",
"timezone": "America/New_York"
}
}

Handler Access:
agent
.addHandler('request_output:start', async (data) => {
// Access custom data via data.params
const userId = data.params.userId;
const conversationId = data.params.conversationId;
// Perform authentication
const user = await authenticateUser(userId);
if (!user) throw new Error('Unauthorized');
// Load conversation history for context
const history = await getConversationHistory(conversationId);
// Perform RAG lookup
const relevantDocs = await vectorSearch(data.prompt, userId);
// Enhance prompt with context
return {
prompt: `Context: ${JSON.stringify(history)}\n\nRelevant Docs: ${relevantDocs}\n\nUser Question: ${data.prompt}`
};
})
.addHandler('request_output', async (data) => {
// Log with user context
await logInteraction({
userId: data.params.userId,
conversationId: data.params.conversationId,
prompt: data.prompt,
response: data.response,
timestamp: data.timestamp
});
// Update user preferences based on interaction
if (data.params.userPreferences) {
await updateUserPreferences(data.params.userId, data.params.userPreferences);
}
});

Use Cases:
- User Authentication: Pass userId or apiKey to authenticate and authorize requests
- Conversation Tracking: Pass conversationId to maintain context across multiple requests
- RAG (Retrieval-Augmented Generation): Pass user context to fetch relevant documents from vector databases
- Personalization: Pass userPreferences to customize responses
- Analytics: Pass tracking IDs to correlate requests with user sessions
- Multi-tenancy: Pass tenantId or organizationId for isolated data access
Available Data Structure:
{
prompt: "User's prompt",
params: {
// All custom fields from request body
userId: "...",
conversationId: "...",
// ... any other fields you pass
},
// System fields (directly on data object)
protocol: "http",
clientIp: "127.0.0.1",
response: "LLM response",
usage: { total_tokens: 150 },
model: "gpt-3.5-turbo",
processingTime: 1234,
timestamp: "2024-01-01T00:00:00.000Z"
}

Tool Events (tool:*)
.addHandler('tool:httpRequest:*', (data) => {
console.log('HTTP Request Tool:', data);
});

Pattern: tool:<tool-name>:<action> (e.g., tool:httpRequest:call, tool:httpRequest:response)
System Events
.addHandler('output', (data) => console.log('Output:', data))
.addHandler('error', (error) => console.error('Error:', error))
.addHandler('start', () => console.log('Agent started'))
.addHandler('stop', () => console.log('Agent stopped'))

Advanced Patterns
// Wildcard matching
.addHandler('tool:*', (data) => console.log('Any tool:', data))
.addHandler('request_output:*', (data) => console.log('Any request event:', data))
// Multiple handlers for same event
.addHandler('request_output', (data) => console.log('Log:', data))
.addHandler('request_output', (data) => saveToDatabase(data))
.addHandler('request_output', (data) => trackAnalytics(data))

Resource Management
.setInstanceType('small') // Options: 'small', 'medium', 'large', 'xlarge'
// small: 512m, 1 CPU
// medium: 1g, 2 CPU
// large: 2g, 2 CPU
// xlarge: 4g, 4 CPU

Note: setInstanceType() is only applied when deploying to Dank Cloud; local runs with dank run ignore this setting.
Production Image Configuration
.setAgentImageConfig({
registry: 'ghcr.io', // Docker registry URL
namespace: 'mycompany', // Organization/namespace
tag: 'v1.0.0' // Image tag
})

Image Naming
- Default: {registry}/{namespace}/{agent-name}:{tag}
- Tag by Agent (--tag-by-agent): {registry}/{namespace}/dank-agent:{agent-name}
- No Config: {agent-name}:{tag}
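The naming rules can be written out as a small helper (hypothetical; not part of the dank-ai API):

```javascript
// Mirror of the image-naming rules listed above; names are illustrative.
function imageName({ registry, namespace, tag } = {}, agentName, tagByAgent = false) {
  if (!registry && !namespace) return `${agentName}:${tag ?? 'latest'}`; // no config
  if (tagByAgent) return `${registry}/${namespace}/dank-agent:${agentName}`;
  return `${registry}/${namespace}/${agentName}:${tag ?? 'latest'}`;     // default
}

console.log(imageName({ registry: 'ghcr.io', namespace: 'mycompany', tag: 'v1.0.0' }, 'assistant'));
// ghcr.io/mycompany/assistant:v1.0.0
console.log(imageName({ registry: 'ghcr.io', namespace: 'mycompany' }, 'assistant', true));
// ghcr.io/mycompany/dank-agent:assistant
console.log(imageName({ tag: 'v1.0.0' }, 'assistant'));
// assistant:v1.0.0
```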
Deployment Metadata
The --output-metadata option generates JSON with:
- Base image, ports, resource limits
- LLM provider and model info
- Event handlers, environment variables
- Build options (registry, namespace, tag)
Perfect for CI/CD pipelines to auto-configure deployment infrastructure.
{
"project": "my-agent-project",
"agents": [{
"name": "customer-service",
"imageName": "ghcr.io/mycompany/customer-service:v1.2.0",
"baseImage": { "full": "deltadarkly/dank-agent-base:nodejs-20" },
"promptingServer": { "port": 3000, "authentication": false },
"resources": { "memory": "512m", "cpu": 1 },
"llm": { "provider": "openai", "model": "gpt-3.5-turbo" },
"handlers": ["request_output", "request_output:start"]
}]
}

Registry Authentication
# Docker Hub
docker login
dank build:prod --registry docker.io --namespace myusername --push
# GitHub Container Registry
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
dank build:prod --registry ghcr.io --namespace myorg --push
# Private Registry
docker login registry.company.com
dank build:prod --registry registry.company.com --namespace ai-agents --push

CI/CD Integration
name: Build and Push Production Images
on:
push:
tags: ['v*']
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v3
with:
node-version: '18'
- run: npm install -g dank-ai
- uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- run: |
dank build:prod \
--registry ghcr.io \
--namespace ${{ github.repository_owner }} \
--tag ${{ github.ref_name }} \
--push

version: '3.8'
services:
customer-service:
image: ghcr.io/mycompany/customer-service:v1.2.0
ports: ["3000:3000"]
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
restart: unless-stopped

🐳 Docker Architecture
Dank uses a layered Docker approach:
- Base Image (deltadarkly/dank-agent-base): Common runtime with Node.js and LLM clients
- Agent Images: Extend the base image with agent-specific code
- Containers: Running instances with resource limits and networking
Build File Management (.dankignore)
Dank automatically copies files from your project directory into Docker containers during the build process. Use .dankignore to control which files are included.
Default Behavior:
- If .dankignore doesn't exist: All files are copied to the container
- If .dankignore exists: Only files not matching the patterns are copied
Creating .dankignore:
# Created automatically during dank init
dank init my-project
# Or create manually
touch .dankignore

Example .dankignore:
# Security - Environment variables (IMPORTANT: Never commit .env files!)
.env
.env.*
*.key
*.pem
secrets/
# Dependencies (installed fresh in container)
node_modules/
# Version control
.git/
# Build artifacts
dist/
build/
coverage/
# OS files
.DS_Store
Thumbs.db
# IDE files
.vscode/
.idea/
# Logs
*.log
logs/

Pattern Matching:
- node_modules: Exact match
- *.log: Wildcard (matches any .log file)
- dist/: Directory pattern (matches the dist directory and its contents)
- .env.*: Pattern matching (matches .env.local, .env.production, etc.)
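These pattern styles can be approximated with a short matcher (a simplified sketch; the real implementation may differ and this covers only the styles listed above):

```javascript
// Simplified sketch of .dankignore matching.
function ignores(pattern, filePath) {
  const clean = pattern.replace(/\/$/, ''); // "dist/" -> "dist"
  const escaped = clean
    .split('*')
    .map((s) => s.replace(/[.+?^${}()|[\]\\]/g, '\\$&'))
    .join('[^/]*'); // "*" matches within a single path segment
  const regex = new RegExp(`^${escaped}(/.*)?$`);
  // A pattern matches the path itself or any trailing run of segments,
  // so "*.log" also catches "logs/app.log".
  const segments = filePath.split('/');
  return segments.some((_, i) => regex.test(segments.slice(i).join('/')));
}

console.log(ignores('*.log', 'logs/app.log'));        // true
console.log(ignores('dist/', 'dist/bundle.js'));      // true
console.log(ignores('node_modules', 'src/index.js')); // false
```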
Best Practices:
- ✅ Always exclude .env files (security)
- ✅ Exclude node_modules/ (dependencies are installed in the container)
- ✅ Exclude build artifacts (dist/, build/)
- ✅ Include source files, assets, and configuration needed at runtime
Container Features
- Isolated Environments: Each agent runs in its own container
- Resource Limits: Memory and CPU constraints per agent
- Health Monitoring: Built-in health checks and status reporting
- Automatic Restarts: Container restart policies for reliability
- Logging: Centralized log collection and viewing
Dank automatically handles Docker installation and startup:
Auto-Detection & Installation:
- Checks if Docker is installed
- Installs Docker if missing (macOS: Homebrew, Linux: apt, Windows: Chocolatey)
- Starts Docker if stopped
- Waits for availability
Platform-Specific:
- macOS: brew install --cask docker && open -a Docker
- Linux: sudo apt-get install docker-ce && sudo systemctl start docker
- Windows: choco install docker-desktop
If automatic installation fails, Dank provides clear manual instructions.
💼 Usage Examples
Customer Support Automation
createAgent('support-bot')
.setLLM('openai', { apiKey: process.env.OPENAI_API_KEY, model: 'gpt-3.5-turbo' })
.setPrompt('You are a customer support specialist. Be polite, helpful, and escalate when needed.')
.addHandler('output', (response) => sendToCustomer(response))
.addHandler('error', (error) => escalateToHuman(error));

Content Generation Pipeline
const agents = [
createAgent('researcher')
.setLLM('openai', { model: 'gpt-4' })
.setPrompt('Research and gather information on given topics')
.addHandler('output', (research) => triggerContentCreation(research)),
createAgent('writer')
.setLLM('anthropic', { model: 'claude-3-sonnet' })
.setPrompt('Write engaging blog posts based on research data')
.addHandler('output', (article) => saveDraft(article)),
createAgent('seo-optimizer')
.setLLM('openai', { model: 'gpt-3.5-turbo' })
.setPrompt('Optimize content for SEO and readability')
.addHandler('output', (content) => publishContent(content))
];

Data Analysis Workflow
createAgent('data-processor')
.setLLM('openai', { model: 'gpt-4', temperature: 0.1 })
.setPrompt('Analyze data and provide insights as JSON: trends, metrics, recommendations')
.setInstanceType('large')
.addHandler('output', (analysis) => {
const results = JSON.parse(analysis);
saveAnalysisResults(results);
generateReport(results);
checkAlerts(results);
});

Custom Agent Code
// agents/custom-agent.js
module.exports = {
async main(llmClient, handlers) {
setInterval(async () => {
const response = await llmClient.chat.completions.create({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Generate a daily report' }
]
});
handlers.get('output')?.forEach(h => h(response.choices[0].message.content));
}, 60000);
}
};

Environment-Specific Configuration
const env = process.env.NODE_ENV || 'development';
const config = {
development: { model: 'gpt-3.5-turbo', instanceType: 'small' },
production: { model: 'gpt-4', instanceType: 'medium' }
};
createAgent('main-agent')
.setLLM('openai', { model: config[env].model })
.setInstanceType(config[env].instanceType);

Resource Management
createAgent('light-agent').setInstanceType('small'); // Light tasks
createAgent('heavy-agent').setInstanceType('large'); // Heavy processing

Error Handling
createAgent('robust-agent')
.addHandler('error', (error) => {
console.error('Agent error:', error.message);
logError(error);
if (error.type === 'CRITICAL') sendAlert(error);
scheduleRetry(error.context);
})
.addHandler('output', (data) => {
try { processOutput(data); }
catch (error) { console.error('Processing failed:', error); }
});

Monitoring and Logging
createAgent('monitored-agent')
.addHandler('output', (data) => {
logger.info('Agent output', { agent: 'monitored-agent', data: String(data).substring(0, 100) });
})
.addHandler('error', (error) => {
logger.error('Agent error', { agent: 'monitored-agent', error: error.message });
});

Security
createAgent('secure-agent')
.setLLM('openai', { apiKey: process.env.OPENAI_API_KEY }) // Never hardcode
.setPrompt('Never reveal API keys or execute system commands')
.addHandler('output', (data) => console.log(sanitizeOutput(data)));

📘 TypeScript Support

Dank works with TypeScript and any build tool (Webpack, esbuild, etc.) that outputs CommonJS JavaScript.
Setup
- Write your config in TypeScript:
// src/dank.config.ts
import axios from 'axios';
import { processData } from './utils';
import { createAgent } from 'dank-ai';
import { v4 as uuidv4 } from 'uuid';
export = {
name: 'my-ts-project',
agents: [
createAgent('assistant')
.setId(uuidv4())
.setLLM('openai', { apiKey: process.env.OPENAI_API_KEY })
.addHandler('request_output', async (data) => {
await axios.post('/api/log', data);
processData(data);
})
]
};

- Configure TypeScript for CommonJS output:
// tsconfig.json
{
"compilerOptions": {
"module": "commonjs",
"target": "ES2020",
"outDir": "./dist",
"esModuleInterop": true
}
}

- Compile and run:
# Compile TypeScript
tsc
# Run with --config pointing to compiled output
dank run --config ./dist/dank.config.js
# Production build
dank build:prod --config ./dist/dank.config.js --push

💡 Tip: The --config flag tells Dank where to find your compiled config. Your package.json is still read from the project root for dependency installation.
Local Development
NODE_ENV=development dank run
# Make changes to dank.config.js
dank stop --all && dank run

Testing
dank run --detached
dank logs test-agent --follow
curl http://localhost:3001/health
docker stats dank-test-agent

Production Deployment
export NODE_ENV=production
dank build --force
dank run --detached
dank status --watch

🚨 Troubleshooting
1. Docker Connection Issues
# Dank handles this automatically, but if manual steps needed:
docker --version && docker ps
# macOS/Windows: Start Docker Desktop
# Linux: sudo systemctl start docker

2. API Key Issues
export OPENAI_API_KEY="sk-your-key-here"
# Or create .env file: echo "OPENAI_API_KEY=sk-..." > .env

3. Base Image Not Found
dank build --base
# Or manually: docker pull deltadarkly/dank-agent-base:nodejs-20

4. Container Resource Issues
// Increase memory allocation (cloud only)
createAgent('my-agent').setInstanceType('medium');

5. Agent Not Starting
dank logs agent-name
docker ps -f name=dank-
docker logs container-id

Production Build Issues:
- Authentication: docker login ghcr.io
- Push Permissions: Check namespace permissions
- Image Exists: Use a different tag or --force
- Build Context: Add a .dankignore file to control which files are copied
📦 Package Exports
const {
createAgent, // Convenience function to create agents
DankAgent, // Main agent class
DankProject, // Project management class
SUPPORTED_LLMS, // List of supported LLM providers
DEFAULT_CONFIG // Default configuration values
} = require("dank-ai");

📋 Example Files
The examples/ directory contains:
- dank.config.js: Local development example
- dank.config.template.js: Production template
# Local development
dank run --config example/dank.config.js
# Production
cp example/dank.config.template.js ./dank.config.js
npm install dank-ai
dank run

📦 Installation
Global Installation
npm install -g dank-ai

Local Development
git clone https://github.com/your-org/dank
cd dank
npm install
npm link # Creates global symlink

🤝 Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Commit changes: git commit -m 'Add amazing feature'
- Push to branch: git push origin feature/amazing-feature
- Open a Pull Request
📄 License
ISC License - see LICENSE file for details.
🆘 Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
