ALD-01 v1.0.0
ALD-01 — Advanced Local Desktop Intelligence. Your personal AI agent system with 10+ free providers, 5 specialized agents, and a professional web dashboard.
ALD-01 is a fully open-source, privacy-first AI agent system that runs locally on your desktop. It combines 10+ free AI providers, 5 specialized agents, advanced reasoning strategies, a professional web dashboard, and full device access — all in a single install.
Think of it as your own local, open-source AI assistant — with the power of commercial tools, but free, private, and fully under your control.
Prerequisites
- Python 3.10+ — python.org/downloads
- Node.js 16+ (optional, for npm install) — nodejs.org
Install via npm (recommended)
```
npm install -g ald-01
```

Installs the `ald-01` global command. On first run, it auto-detects Python and installs all Python dependencies for you.
Install via pip
```
pip install ald-01
```

Install from Source

```
git clone https://github.com/aditya4232/ALD-01.git
cd ALD-01

# Editable install (dev)
pip install -e .

# With voice support
pip install -e ".[voice]"

# With dev tools (pytest, ruff, black)
pip install -e ".[dev]"
```

Verify

```
ald-01 --help
```

First Run

```
ald-01 setup       # Interactive setup wizard
ald-01 chat        # Start chatting
ald-01 dashboard   # Launch web UI
```

Set Up a Free Provider
```
# Groq — fastest, generous free tier (console.groq.com)
export GROQ_API_KEY=gsk_your_key_here          # Linux / Mac
set GROQ_API_KEY=gsk_your_key_here             # Windows CMD
$env:GROQ_API_KEY="gsk_your_key_here"          # PowerShell

# Cerebras (cloud.cerebras.ai)
export CEREBRAS_API_KEY=your_key_here

# Fully local — no key needed (ollama.ai)
ollama pull llama3.2

# Check what's available
ald-01 provider list
```

| Agent | Specialty | Example |
|:------|:----------|:--------|
| Code Gen | Code generation and scaffolding | "Write a REST API in FastAPI" |
| Debug | Debugging and error resolution | "Fix this TypeError in my code" |
| Review | Code review and best practices | "Review this function for issues" |
| Security | Security analysis and hardening | "Check this endpoint for vulns" |
| General | General Q&A and reasoning | "Explain decorators in Python" |
- Automatic agent routing — queries go to the best agent
- 10 brain power levels — from basic Q&A to full autonomous reasoning
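Automatic routing can be pictured as keyword scoring over the query. Below is a minimal, hypothetical sketch in Python: the keyword lists and the `route` function are invented for illustration, not ALD-01's actual router.

```python
# Illustrative keyword-based agent router. This is NOT ALD-01's real routing
# logic; the keyword lists are invented for demonstration.
AGENT_KEYWORDS = {
    "code_gen": ["write", "create", "generate", "scaffold"],
    "debug": ["fix", "error", "traceback", "typeerror"],
    "review": ["review", "refactor", "best practice"],
    "security": ["vulnerability", "vuln", "secure", "exploit"],
}

def route(query: str) -> str:
    """Pick the agent whose keywords best match the query; default to general."""
    q = query.lower()
    scores = {
        agent: sum(kw in q for kw in kws)
        for agent, kws in AGENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(route("Fix this TypeError in my code"))   # debug
print(route("Explain decorators in Python"))    # general
```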
All providers below offer free tiers — no credit card required.
| Provider | Model | Notes |
|:---------|:------|:------|
| Groq | Llama 3.3 70B | Ultra-fast inference, generous free tier |
| Cerebras | Llama 3.3 70B | High throughput |
| OpenRouter | Various | Aggregator, many free models |
| Together AI | Mixtral | Free tier available |
| GitHub Copilot | GPT-4.1 | Free for Pro users |
| Google Gemini | Gemini 2.0 | Google's latest |
| SambaNova | Llama 3.1 | Free tier |
| Novita AI | Llama 3 | Free tier |
| Hyperbolic | Deepseek R1 | Free tier |
| Ollama | Any local model | 100% offline, no API key |
Built-in automatic failover — if one provider drops, the next one picks up.
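The failover pattern itself is easy to sketch. A minimal Python illustration follows; the `ask_with_failover` helper and the two fake providers are invented for the demo and are not part of ALD-01's API.

```python
# Priority-ordered failover: try each provider in turn until one succeeds.
# A sketch of the pattern, not ALD-01's providers/manager.py.
def ask_with_failover(providers, prompt):
    """Try providers in priority order; return the first successful answer."""
    errors = []
    for name, call in providers:          # lowest priority number first
        try:
            return name, call(prompt)
        except Exception as exc:          # provider down, rate-limited, etc.
            errors.append((name, exc))
    raise RuntimeError(f"All providers failed: {errors}")

# Demo with fake providers: the first one "drops", the second answers.
def groq(prompt):
    raise ConnectionError("rate limited")

def ollama(prompt):
    return f"echo: {prompt}"

name, answer = ask_with_failover([("groq", groq), ("ollama", ollama)], "hi")
print(name, answer)   # ollama echo: hi
```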
- Chain-of-Thought — step-by-step logical reasoning
- Tree-of-Thought — multi-branch problem exploration
- Reflexion — self-correcting iterative refinement
- Problem Decomposition — complex task breakdown into subtasks
- Depth scales automatically with brain power level (1–10)
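One way to picture depth scaling is a mapping from brain power to the set of enabled strategies. The thresholds below are purely illustrative assumptions, not ALD-01's internal configuration.

```python
# Hypothetical mapping of brain power (1-10) to reasoning strategies.
# The cutoffs are invented for illustration only.
def pick_strategies(brain_power: int) -> list[str]:
    strategies = ["chain_of_thought"]          # always on
    if brain_power >= 4:
        strategies.append("problem_decomposition")
    if brain_power >= 6:
        strategies.append("tree_of_thought")
    if brain_power >= 8:
        strategies.append("reflexion")
    return strategies

print(pick_strategies(3))    # ['chain_of_thought']
print(pick_strategies(9))    # all four strategies
```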
- Glassmorphism dark UI with modern aesthetics
- Real-time activity visualizer via WebSocket
- Chat interface with streaming responses
- Sandbox code editor with Python execution and export
- File browser for full filesystem navigation
- Terminal for direct command execution
- System monitor with live process listing
- Doctor diagnostics with 12+ health checks
- Provider management with one-click testing
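Two of those doctor-style health checks can be sketched with the standard library alone. This mirrors the idea behind `ald-01 doctor`, not its actual implementation.

```python
# Sketch of two health checks: Python version and dashboard-port availability.
# The real `ald-01 doctor` runs 12+ checks; these helpers are illustrative.
import socket
import sys

def check_python(minimum=(3, 10)) -> bool:
    """ALD-01 requires Python 3.10+."""
    return sys.version_info[:2] >= minimum

def check_port_free(port: int, host: str = "127.0.0.1") -> bool:
    """A port is 'free' if nothing is accepting connections on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) != 0

print("python ok:", check_python())
print("port 7860 free:", check_port_free(7860))
```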
| Category | Capabilities |
|:---------|:-------------|
| Filesystem | Read, write, search, delete, move files |
| Terminal | Execute shell commands |
| Code Sandbox | Run Python in isolated subprocess |
| System Info | CPU, RAM, disk, GPU detection |
| Process Mgmt | List and manage running processes |
| Clipboard | Read and write clipboard |
| HTTP | Make web requests |
| File Watcher | Monitor files for real-time changes |
| Backup | Create and restore backups |
| Analytics | Usage analytics and insights |
| Scheduler | Schedule recurring tasks |
| Export | Export data (JSON, CSV, etc.) |
| Webhooks | Event-driven webhook system |
| Code Analyzer | Static code analysis |
| API Gateway | Built-in gateway |
| Sessions | Multi-session management |
| Templates | Jinja2-powered templating |
| Plugins | Extensible plugin architecture |
| Themes | Customizable UI themes |
| i18n | Multi-language support |
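The Code Sandbox row deserves a sketch: running untrusted Python in a fresh, isolated interpreter process with a timeout is the standard pattern, though ALD-01's own sandbox may differ in detail.

```python
# Running untrusted Python in an isolated subprocess with a timeout.
# Illustrates the general sandboxing pattern, not ALD-01's exact code.
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Execute code in a fresh interpreter process; return captured stdout."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],   # -I: isolated mode, no site dirs
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout

print(run_sandboxed("print(2 + 2)"))   # prints 4
```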
- Edge TTS — free Microsoft Neural voices (50+ voices, high quality)
- pyttsx3 — offline TTS fallback
- System TTS — OS-native speech (Windows, macOS, Linux)
- Telegram Bot — control ALD-01 from your phone
- Ask questions, check status, change settings remotely
- SQLite-backed conversation and knowledge storage
- Semantic memory — facts, preferences, patterns
- Decision logs — track AI reasoning over time
- User profile — personalized experience
- Context manager — intelligent conversation context
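The SQLite-backed storage pattern can be sketched in a few lines. The schema and helper functions below are illustrative; ALD-01's real schema lives in `core/memory.py` and is not reproduced here.

```python
# Minimal SQLite conversation store, sketching the persistence pattern.
# ALD-01 uses a database file under ~/.ald01/; :memory: keeps the demo clean.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
)

def remember(role: str, content: str) -> None:
    conn.execute(
        "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
    )
    conn.commit()

def recall(limit: int = 10):
    rows = conn.execute(
        "SELECT role, content FROM messages ORDER BY id DESC LIMIT ?", (limit,)
    ).fetchall()
    return list(reversed(rows))       # oldest first

remember("user", "Explain decorators in Python")
remember("assistant", "A decorator wraps a function...")
print(recall())
```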
CLI Commands
```
# Chat
ald-01 chat                      # Interactive chat
ald-01 chat --agent security     # Specific agent
ald-01 chat --voice              # With voice output
ald-01 chat --stream             # Stream responses

# Quick question
ald-01 ask "How do I reverse a linked list in Python?"

# Dashboard
ald-01 dashboard                 # Default: localhost:7860
ald-01 dashboard --port 8080     # Custom port

# System
ald-01 status                    # System status
ald-01 doctor                    # Full health check
ald-01 setup                     # Setup wizard

# Providers
ald-01 provider list             # All providers
ald-01 provider free             # Free options
ald-01 provider add groq         # Add interactively

# Config
ald-01 config show               # Current config
ald-01 config set brain_power 7  # Set brain power
ald-01 config reset              # Reset defaults

# Voice
ald-01 voice test                # Test TTS
ald-01 voice voices              # List voices
```

In-Chat Commands
/exit — Exit chat
/clear — Clear conversation history
/agent — Switch agent (code_gen, debug, review, security, general)
/voice — Toggle voice on/off
/status — System status

Python API
```python
import asyncio
from ald01.core.orchestrator import get_orchestrator


async def main():
    orch = get_orchestrator()
    await orch.initialize()

    # Simple query
    response = await orch.process_query("Explain decorators in Python")
    print(response.content)

    # Stream response
    async for chunk in orch.stream_query("Write a sorting algorithm"):
        print(chunk, end="")

    # Use specific agent
    response = await orch.process_query(
        "Review this code for security issues",
        agent_name="security",
    )

    await orch.shutdown()


asyncio.run(main())
```

ALD-01/
├── bin/cli.js # npm global CLI wrapper
├── package.json # npm package config
│
├── src/ald01/
│ ├── __init__.py # Package init & directory setup
│ ├── __main__.py # python -m ald01 entry point
│ ├── cli.py # Click CLI (all commands)
│ ├── config.py # YAML config with brain power presets
│ │
│ ├── core/ # Core Systems (40+ modules)
│ │ ├── orchestrator.py # Central coordinator
│ │ ├── brain.py # AI brain & decision engine
│ │ ├── chat_engine.py # Chat processing engine
│ │ ├── reasoning.py # Multi-strategy reasoning
│ │ ├── memory.py # SQLite persistent memory
│ │ ├── tools.py # Tool executor (fs, terminal, etc.)
│ │ ├── events.py # Async pub-sub event bus
│ │ ├── context_manager.py # Conversation context
│ │ ├── pipeline.py # Processing pipeline
│ │ ├── plugins.py # Plugin system
│ │ ├── scheduler.py # Task scheduler
│ │ ├── analytics.py # Usage analytics
│ │ ├── backup_manager.py # Backup & restore
│ │ ├── code_analyzer.py # Static analysis
│ │ ├── export_system.py # Data export
│ │ ├── file_watcher.py # File monitoring
│ │ ├── gateway.py # API gateway
│ │ ├── webhooks.py # Webhook engine
│ │ ├── session_manager.py # Session management
│ │ ├── template_engine.py # Jinja2 templating
│ │ ├── themes.py # Theme engine
│ │ ├── localization.py # i18n
│ │ ├── self_heal.py # Self-healing & recovery
│ │ └── ... # 20+ more modules
│ │
│ ├── agents/ # Specialized AI Agents
│ │ ├── base.py # Base agent class
│ │ ├── codegen.py # Code generation
│ │ ├── debug.py # Debugging
│ │ ├── review.py # Code review
│ │ ├── security.py # Security analysis
│ │ └── general.py # General purpose
│ │
│ ├── providers/ # AI Model Providers
│ │ ├── base.py # Abstract provider
│ │ ├── openai_compat.py # OpenAI-compatible
│ │ ├── ollama.py # Local Ollama
│ │ ├── manager.py # Routing & failover
│ │ └── benchmark.py # Benchmarking
│ │
│ ├── dashboard/ # Web Dashboard
│ │ ├── server.py # FastAPI + WebSocket
│ │ ├── api_routes.py # REST API v1
│ │ ├── api_v2.py # REST API v2
│ │ ├── api_ext.py # Extended endpoints
│ │ └── static/ # Frontend (HTML/JS/CSS)
│ │
│ ├── services/voice.py # TTS engine
│ ├── doctor/diagnostics.py # Health checks
│ ├── telegram/bot.py # Telegram bot
│ ├── onboarding/wizard.py # Setup wizard
│ └── utils/hardware.py # Hardware detection
│
├── pyproject.toml # Python package config
├── requirements.txt # pip dependencies
├── LICENSE # MIT
└── README.md

Stored in ~/.ald01/config.yaml:

```yaml
brain_power: 5            # 1–10, controls reasoning depth

providers:
  groq:
    enabled: true
    priority: 1           # Lower = tried first
  ollama:
    enabled: true
    host: http://localhost:11434

dashboard:
  host: 127.0.0.1
  port: 7860
  auto_open: true

voice:
  enabled: false

tools:
  terminal:
    enabled: false        # Shell command execution
  code_execute:
    enabled: false        # Python sandbox

telegram:
  token: ""
  allowed_users: []
```

| Level | Name | Depth | Autonomous | Best For |
|:-----:|:-----|:-----:|:----------:|:---------|
| 1 | Basic | 1 | No | Simple Q&A |
| 2 | Simple | 2 | No | Quick answers |
| 3 | Moderate | 3 | No | Step-by-step explanations |
| 4 | Standard | 4 | No | Multi-step problem solving |
| 5 | Advanced | 5 | Limited | Complex analysis |
| 6 | Deep | 6 | Limited | Multi-perspective evaluation |
| 7 | Expert | 7 | Yes | Expert-level reasoning |
| 8 | Master | 8 | Yes | Deep research & synthesis |
| 9 | Genius | 9 | Yes | Multi-strategy reasoning |
| 10 | AGI | 10 | Yes | Full autonomous reasoning |
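Read as code, the table maps each level to a reasoning depth equal to the level plus an autonomy tier. A small convenience lookup, sketched here for illustration (not an ALD-01 API):

```python
# Lookup derived from the brain-power table above; a convenience sketch only.
def brain_profile(level: int) -> dict:
    """Return the depth and autonomy tier for a brain power level (1-10)."""
    if not 1 <= level <= 10:
        raise ValueError("brain_power must be 1-10")
    if level <= 4:
        autonomous = "No"
    elif level <= 6:
        autonomous = "Limited"
    else:
        autonomous = "Yes"
    return {"depth": level, "autonomous": autonomous}

print(brain_profile(7))   # {'depth': 7, 'autonomous': 'Yes'}
```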
```
ald-01 config set brain_power 7
```

Run `ald-01 doctor` to check:

| Check | Details |
|:------|:--------|
| Python version | 3.10+ compatibility |
| Dependencies | Required and optional packages |
| Config file | YAML validity |
| Data directory | Permissions |
| Memory database | SQLite health |
| Dashboard port | Availability |
| System resources | RAM, disk space |
| Connectivity | Internet access |
| Ollama | Local model availability |
| Providers | API connections |
| API keys | Free tier configuration |
| Voice/TTS | Engine availability |
| Principle | Details |
|:----------|:--------|
| Fully local | Runs 100% offline with Ollama |
| No telemetry | Zero data sent without consent |
| API keys | Stored as env vars, never in code |
| Tool access | Configurable — enable only what you need |
| Sandbox | Code execution in isolated subprocess |
| Open source | Full code transparency |
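Checking which keys are configured, without ever printing their values, takes a few lines. A sketch: `GROQ_API_KEY` and `CEREBRAS_API_KEY` come from the setup section above, while `GEMINI_API_KEY` is an assumed variable name for illustration.

```python
# Report which provider keys are set, without exposing their values.
# GEMINI_API_KEY is an assumed name; the other two appear in the setup docs.
import os

PROVIDER_ENV = ["GROQ_API_KEY", "CEREBRAS_API_KEY", "GEMINI_API_KEY"]

def configured_providers() -> list[str]:
    """Names of provider env vars that are set and non-empty."""
    return [name for name in PROVIDER_ENV if os.environ.get(name)]

print(configured_providers())
```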
Core (auto-installed)
| Package | Purpose |
|:--------|:--------|
| click | CLI framework |
| rich | Terminal UI |
| httpx | Async HTTP client |
| fastapi | Web dashboard & API |
| uvicorn | ASGI server |
| websockets | Real-time communication |
| pyyaml | Config parsing |
| psutil | System monitoring |
| python-dotenv | Environment variables |
| prompt_toolkit | Interactive input |
| jinja2 | Template engine |
| aiosqlite | Async SQLite |
Optional
```
pip install "ald-01[voice]"   # Edge TTS + pyttsx3
pip install "ald-01[dev]"     # pytest, black, ruff
```

Contributing

```
# Fork & clone
git clone https://github.com/YOUR_USERNAME/ALD-01.git
cd ALD-01

# Install dev mode
pip install -e ".[dev]"

# Feature branch
git checkout -b feature/awesome-feature

# Test
pytest

# Open a PR
```

Guidelines: PEP 8 style (enforced by ruff) · docstrings on new functions · tests for new features · focused PRs
MIT License — see LICENSE for details.
