@aman_asmuei/aman-agent
v0.43.1
Your AI companion, running locally — powered by the aman ecosystem
- What's New
- The Problem
- The Solution
- Architecture at a Glance
- Quick Start
- Standalone CLI install
- Usage Guide
- Features
- How It Works
- Commands
- What It Loads
- Supported LLMs
- Configuration
- The Ecosystem
- What Makes This Different
- FAQ
- Contributing
What's New in v0.42.0
Your memory, as plain Markdown. Every memory is now mirrored to
`~/.aman-agent/memories/` as a `.md` file you can read, edit, git-commit, or sync via Dropbox/iCloud.
# snapshot your memory to any dir
aman-agent # then: /memory export --to ~/backups/amem
# see mirror status
/memory mirror status
# rebuild if you lose the DB but have the files
/memory mirror rebuild
# pull edits from a synced folder (multi-device)
/memory sync --from ~/Dropbox/aman-memories
What you get:
- Round-trip contract — files round-trip cleanly through amem-core's existing sync parser.
- Multi-device sync — on every `aman-agent` startup, edits in the mirror dir are imported back into the DB (disable via `config.mirror.autoSyncOnStartup = false`).
- No lock-in — plain YAML-frontmatter Markdown. If you ever stop using aman-agent, the memories stay useful anywhere.
- Opt-out with one flag: `config.mirror.enabled = false` disables all mirror I/O.
Requires the matching @aman_asmuei/amem-core release (shipped simultaneously).
What's New in v0.41.0
From companion to orchestrator — fully wired. One command decomposes, delegates, reviews, and tracks cost.
/orchestrate Build user auth with JWT, password hashing, rate limiting, and tests
Project type: api-backend (auto-detected)
Template: full-feature
## User Authentication
**Tasks:** 5 | **Gates:** 1
- **Design auth architecture** → architect [advanced] (root)
- **Implement JWT middleware** → coder [standard] (after: design)
- **Add rate limiting** → coder [standard] (after: design) ← parallel
- **Write test suite** → tester [standard] (after: jwt, rate-limit)
- **Security review** → security [standard] (after: tests)
- 🔒 **Human approval** [approval gate]
Status: completed (34.2s)
Cost: ~$0.12 (3 standard + 1 advanced)
Policy: passed (0 errors, 2 warnings)
What happens in that one command:
- Auto-detects project type from your cwd (web-frontend, api-backend, mobile, etc.)
- Selects the best orchestration template (or decomposes via LLM)
- Runs policy check (7 built-in rules — requires review, testing, approval gates)
- Executes DAG with parallel agent scheduling and circuit breakers
- Tracks LLM cost per tier with budget enforcement
- Self-review loop: reviewer + tester evaluate output before completion
Flags: --template bug-fix, --no-review, --no-policy
GitHub-native: /github plan 42 does the same thing, starting from a GitHub issue.
| Module | What it does |
|:---|:---|
| Orchestrator Engine | DAG scheduler, multi-tier LLM routing (fast/standard/advanced), approval gates, structured audit trails, immutable state machine |
| GitHub-Native | Safe gh CLI wrapper, issue-to-DAG planner, PR manager, CI gate polling |
| Agent Factory | 4 profiles (architect, security, tester, reviewer), 3 workflow templates (full-feature, bug-fix, security-audit), self-review loop |
| Project Manager | Project type classifier, module boundary mapper for parallel agents, orchestration monitoring |
| Enterprise | Circuit breakers (closed/open/half-open), checkpoint/resume, cost tracking + budget enforcement, 7-rule policy engine |
| Integration | Unified runner (runOrchestrationFull), smart orchestrate (smartOrchestrate), profile auto-install |
28 new source files, 32 new test files. Full release notes →
- `aman-agent dev --copilot` targets GitHub Copilot (writes `.github/copilot-instructions.md`)
- `aman-agent dev --cursor` targets Cursor (writes `.cursorrules`)
- Multi-project simultaneous sessions sharing the same memory database
One command. Full context. Zero setup.
aman-agent dev — Your New Way to Start Coding
cd ~/projects/amantrade && aman-agent dev
Open any project, and aman-agent automatically detects your stack, recalls your past decisions from memory, and generates a project-specific CLAUDE.md — then launches Claude Code with everything loaded. No more re-explaining yourself.
$ aman-agent dev ~/projects/amantrade
Detected: Go (Fiber) + Postgresql + Docker + Github-actions
Recalled: 8 memories (4 decisions, 3 corrections, 1 convention)
✓ CLAUDE.md written (template mode)
Launching Claude Code...
| Flag | What it does |
|:---|:---|
| --smart | Use your configured LLM to synthesize a smarter context file |
| --yolo | Launch with skip-permissions (Claude Code only) |
| --copilot | Target GitHub Copilot — writes .github/copilot-instructions.md, opens VS Code |
| --cursor | Target Cursor — writes .cursorrules, opens Cursor |
| --no-launch | Generate context file only, don't launch editor |
| --diff | Preview what would change without writing |
| --force | Regenerate even if context file is fresh |
Works with multiple projects simultaneously — each terminal gets its own aman-agent dev, all sharing the same memory database. Decisions from one project flow into the next.
Install on any machine — no Node.js required
curl -fsSL https://raw.githubusercontent.com/amanasmuei/aman-agent/main/install.sh | bash
Works on Linux (x64, arm64, armv7l), macOS (x64, Apple Silicon), Raspberry Pi, VPS, and servers. Vendors Node.js 22 LTS invisibly. No sudo needed.
| Feature | Details |
|:---|:---|
| Consolidated config | All state now lives under ~/.aman-agent/ — one directory to backup, sync, or scp to a new machine. Existing users are auto-migrated on first run. |
| Docker support | docker run -it -e ANTHROPIC_API_KEY=sk-... ghcr.io/amanasmuei/aman-agent — multi-arch image (amd64 + arm64). |
| aman-agent setup | Full configuration wizard — provider, identity, and presets. |
| aman-agent update | Self-update, works with both vendored and npm installs. |
| aman-agent uninstall | Clean removal of all data and config. |
| Headless mode | Auto-detects LLM provider from env vars. Clean error when no TTY (systemd, Docker, CI). |
v0.31 — Multi-agent (A2A) via MCP server mode
- `aman-agent serve` runs any profile as a local MCP server
- `/delegate @coder <task>` for cross-agent delegation
- `/agents list|info|ping` for discovery and health checks
v0.30 — Agent hardening
- Delegation confirmation prompts (no more silent sub-agents)
- Persistent background task state surviving crashes
- Rich `/eval report` with trust, sentiment, energy, burnout risk
v0.29 — Ecosystem parity
- Auto-relate memories after extraction (knowledge graph edges)
- Stale reference cleanup
v0.28 — Learning loop completion
- Rejection feedback, cross-session reinforcement, skill merging + versioning
- Adaptive nudge learning, semantic trigger matching, feed-forward v2
- LLM-based sentiment, burnout predictor, `/skills list --auto` enhancements
v0.27 — Dynamic user model
- Cross-session profile: trust (EMA), sentiment baseline, energy distribution
- Feed-forward personalized energy/mode overrides
- Frustration correlations (Pearson r), nudge tracking
- `/identity dynamics` view + `--json` + `--reset`
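An EMA-based trust score like the one described above can be sketched in a few lines (hypothetical; the smoothing factor α = 0.2 and the 0–1 signal scale are assumptions):

```typescript
// Illustrative EMA update: new trust = α·signal + (1 − α)·previous trust.
// The smoothing factor and signal range are assumptions for this sketch.
function updateTrust(prev: number, signal: number, alpha = 0.2): number {
  return alpha * signal + (1 - alpha) * prev;
}
```

Recent sessions dominate while older ones decay geometrically, so a single bad session nudges the profile rather than resetting it.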
v0.26 — Skill crystallization
- Post-mortems identify reusable procedures → opt-in prompt → saved as auto-triggering skills
- Runtime trigger matching, `/skills crystallize`, `/skills list --auto`
- Post-mortem JSON sidecar for lossless re-parsing
v0.24 — Observation & post-mortem
- Passive session telemetry (tool calls, errors, file changes, sentiment, blockers)
- LLM-generated post-mortem reports with smart auto-trigger
- Pattern memory loop — the agent learns from its own session history
- `/observe` dashboard, `/postmortem` commands
v0.18 — User onboarding
- Interactive first-run setup capturing name, role, expertise, communication style
- 13 showcase templates (fitness, freelancer, Muslim, finance, etc.)
- `/profile me` and `/profile edit` for user identity management
v0.16 — Session resilience
- Streaming cancellation (Ctrl+C aborts response, not session)
- Session checkpointing every 10 turns — crash-safe
- MCP auto-reconnect on connection failure
- Token-safe tool loop with conversation trimming inside
- Non-blocking memory extraction
v0.14 — Sub-agent infrastructure
- Sub-agent guardrails enforce same safety rules as main agent
- Sub-agent memory recall for better delegation context
- 16K system prompt token ceiling
Full release history →
The Problem
AI coding assistants forget everything between sessions. You re-explain your stack, preferences, and boundaries every time. There's no single place where your AI loads its full context and just works.
Other "memory" solutions are just markdown files the AI reads on startup — they don't learn from conversation, they don't recall per-message, and they silently lose context when the window fills up.
The Solution
aman-agent is the first open-source AI companion that genuinely learns from conversation and orchestrates multi-agent workflows. It recalls memories per-message, extracts new knowledge automatically, decomposes complex requirements into parallel task graphs, and delegates to specialized agents — all running locally with any LLM.
npx @aman_asmuei/aman-agent
Your AI knows who it is, what it remembers, what tools it has, what rules to follow, what time it is, and what reminders are due — before you say a word.
New to the ecosystem? See docs/ecosystem.md for "which package do I install, and when" + the full package relationship diagram.
Architecture at a Glance
aman-agent is the runtime at the center of the aman ecosystem — 78 TypeScript modules that stitch together memory, identity, orchestration, and any LLM into one coherent system.
flowchart TB
User([You]) <--> CLI[aman-agent CLI]
subgraph core [" Agent Core "]
Agent[agent.ts<br/>message loop]
Hooks[hooks.ts<br/>lifecycle]
Skills[skill-engine<br/>+ crystallization]
Obs[observation<br/>+ postmortem]
Personality[user-model<br/>+ personality]
end
subgraph orchestrator [" Orchestrator Engine "]
DAG[DAG scheduler<br/>parallel execution]
Decompose[LLM decompose<br/>requirement → DAG]
Templates[workflow templates<br/>feature · bugfix · audit]
ReviewLoop[self-review loop<br/>reviewer + tester]
Gate[approval gates<br/>+ CI gates]
CB[circuit breaker<br/>+ checkpoint]
Policy[policy engine<br/>+ cost tracker]
end
subgraph github [" GitHub-Native "]
GH[gh CLI wrapper]
Issues[issue → DAG<br/>planner]
PRs[PR manager<br/>+ CI polling]
end
subgraph project [" Project Manager "]
Detect[project classifier<br/>stack → type]
ModMap[module boundary<br/>mapper]
Monitor[orchestration<br/>monitoring]
end
CLI --> Agent
Agent --> Hooks
Agent --> orchestrator
Agent -->|recall & extract| Memory[(amem-core<br/>SQLite + vectors)]
Agent -->|identity| Identity[(acore-core)]
Agent -->|guardrails| Rules[(arules-core)]
orchestrator --> Delegate[delegate.ts<br/>+ teams.ts]
orchestrator --> Profiles[agent profiles<br/>architect · security<br/>tester · reviewer]
orchestrator --> github
orchestrator --> project
Agent --> LLM{LLM Router}
LLM --> Claude[Anthropic]
LLM --> GPT[OpenAI]
LLM --> Copilot[GH Copilot]
LLM --> Ollama[Ollama]
Agent <-->|MCP| MCP[aman-mcp · amem]
classDef core fill:#58a6ff15,stroke:#58a6ff,color:#e6edf3,stroke-width:2px;
classDef orch fill:#a371f715,stroke:#a371f7,color:#e6edf3,stroke-width:2px;
classDef gh fill:#3fb95015,stroke:#3fb950,color:#e6edf3,stroke-width:2px;
classDef proj fill:#d29f2215,stroke:#d29f22,color:#e6edf3,stroke-width:2px;
classDef store fill:#3fb95022,stroke:#3fb950,color:#e6edf3,stroke-width:1px;
classDef llm fill:#d29f2222,stroke:#d29f22,color:#e6edf3,stroke-width:1px;
class Agent,Hooks,Skills,Obs,Personality core
class DAG,Decompose,Templates,ReviewLoop,Gate,CB,Policy orch
class GH,Issues,PRs gh
class Detect,ModMap,Monitor proj
class Memory,Identity,Rules store
class Claude,GPT,Copilot,Ollama,LLM llm
class Delegate,Profiles,MCP core
| Piece | What it does | Where it lives |
|:---|:---|:---|
| agent.ts | The main event loop — reads your message, recalls memories, streams the LLM response, executes tools, extracts new memories | src/agent.ts (40 KB) |
| commands.ts | 60+ slash commands (/memory, /skills, /plan, /delegate, /orchestrate, /eval, /observe, /postmortem, …) | src/commands.ts (100 KB) |
| orchestrator/ | DAG-based task decomposition, parallel scheduling, multi-tier model routing, approval gates, audit trails | src/orchestrator/ (8 files) |
| github/ | GitHub-native automation — issue planning, PR management, CI gates, safe gh CLI wrapper | src/github/ (6 files) |
| profiles/ | Specialized agent profiles for orchestrator delegation (architect, security, tester, reviewer) | src/profiles/ |
| orchestrator/templates/ | Pre-built DAG templates for common workflows (full-feature, bug-fix, security-audit) | src/orchestrator/templates/ |
| hooks.ts | 5 lifecycle hooks that fire at startup, before/after tools, on workflow match, on session end | src/hooks.ts (26 KB) |
| memory.ts + memory-extractor.ts | Per-message recall and silent, non-blocking extraction of preferences, decisions, patterns, corrections | delegates to @aman_asmuei/amem-core |
| skill-engine.ts + crystallization.ts | Auto-triggers domain skills from context; promotes post-mortem lessons into reusable, versioned skills | src/skill-engine.ts, src/crystallization.ts |
| user-model.ts + personality.ts | Cross-session trust (EMA), sentiment baseline, burnout risk, time-of-day tone shifts, wellbeing nudges | src/user-model.ts, src/personality.ts |
| observation.ts + postmortem.ts | Passive session telemetry + LLM-generated structured post-mortems on session end | src/observation.ts, src/postmortem.ts |
| dev/ | Project stack detection, context assembly, CLAUDE.md generation — powers aman-agent dev | src/dev/ |
| llm/ | 6 pluggable providers — Anthropic, OpenAI, Ollama, GitHub Copilot, OpenAI-compatible, Claude Code CLI | src/llm/ |
| mcp/ | MCP v1.27 client with stdio transport and auto-reconnect | src/mcp/ |
Stateless by default. All state lives in ~/.aman-agent/ — identity, rules, workflows, skills, eval, and memory in one portable directory. Nothing leaves your machine except what you send to your chosen LLM.
Install tiers — pick what you need
The aman ecosystem is ~10 packages. You don't need them all. Pick a tier:
Minimal (2 packages, 2 minutes)
Just aman-agent + persistent memory. Standalone CLI, works with any LLM provider. Best for "does this feel useful?" evaluation.
npm install -g @aman_asmuei/aman-agent @aman_asmuei/amem-core
# or, once you have Node 18+:
curl -fsSL https://raw.githubusercontent.com/amanasmuei/aman-agent/main/bin/aman-setup.sh | bash
What you get: aman-agent CLI, per-message memory recall, memory extraction, /memory commands.
Productive (5 packages, 10 minutes)
Minimal + identity + guardrails + Claude Code integration. Adds your AI's personality, rules the LLM must respect, and first-class Claude Code plugin support.
npm install -g @aman_asmuei/aman-agent @aman_asmuei/amem-core \
@aman_asmuei/acore-core @aman_asmuei/arules-core @aman_asmuei/aman-mcp
# + install the Claude Code plugin:
# see aman-claude-code repo README
What you get (on top of Minimal): named companion ("Aman" by default), rule enforcement via /rules, auto-loaded ecosystem context at session start, MCP tools for memory/identity/rules from inside Claude Code.
Complete (all packages, 30 minutes)
Everything: orchestration, multi-agent delegation, showcase personalities, tool installer, skill manager, VS Code Copilot integration.
# After Productive, add:
npm install -g @aman_asmuei/aman-copilot @aman_asmuei/akit @aman_asmuei/askill @aman_asmuei/aman-showcase
What you get (on top of Productive): /orchestrate DAG task decomposition, /delegate + /team multi-agent workflows, /showcase 13 personality templates, /akit CLI tool manager, /askill skill library. VS Code Copilot users also get aman-copilot for inline Copilot Chat memory.
Not sure which tier? Start with Minimal. You can layer Productive or Complete on top whenever you want — nothing gets overwritten, just extended.
Quick Start
Claude Code plugin (recommended)
If you already use Claude Code, this is the fastest path — no new CLI to install, no Node.js, no API key to paste. Install the plugin once and your full aman ecosystem (identity, rules, memory, skills, live tools) auto-loads every session.
claude plugin marketplace add amanasmuei/aman-claude-code
claude plugin install aman-claude-code@aman
Then reload Claude Code (/reload-plugins or restart). That's it — type anything and aman-agent is already remembering you.
Also on VS Code Copilot Chat or the Copilot CLI? The `aman-copilot` sibling gives you the same identity, rules, and memory on those surfaces.
Run
Inside Claude Code with the plugin installed, you don't run anything — just start chatting. Skip to First Launch for the one-time setup prompt.
If you went with the standalone CLI instead:
# Start a conversation
aman-agent
# Or jump straight into a project with full context
aman-agent dev ~/projects/my-app
Zero config if you already have an API key in your environment:
# aman-agent auto-detects these (in priority order):
export ANTHROPIC_API_KEY="sk-ant-..." # → uses Claude Sonnet 4.6
export OPENAI_API_KEY="sk-..." # → uses GPT-4o
# Or if Ollama is running locally # → uses llama3.2
No env var? First run prompts for your LLM provider and model:
◇ LLM provider
│ ● Claude (Anthropic) — recommended, uses Claude Code CLI
│ ○ GitHub Copilot — uses GitHub Models API
│ ○ GPT (OpenAI)
│ ○ Ollama (local) — free, runs offline
Claude — authentication handled by Claude Code CLI (claude login). Supports subscription (Pro/Max/Team/Enterprise), API billing, Bedrock, and Vertex AI. No API key needed.
GitHub Copilot — authentication handled by GitHub CLI (gh auth login). Uses GitHub Models API with access to GPT-4o, Claude Sonnet, Llama, Mistral, and more.
OpenAI — enter your API key directly.
Ollama — local models, no account needed.
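The auto-detection priority above can be sketched as a simple cascade (hypothetical; the provider names and fallback value are illustrative, not the actual implementation):

```typescript
// Illustrative provider auto-detection, following the documented priority:
// Anthropic key → OpenAI key → otherwise fall back to the interactive prompt
// (or a locally running Ollama). Names are assumptions for this sketch.
function detectProvider(env: Record<string, string | undefined>): string {
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  return "prompt-user";
}
```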
First Launch — You'll Be Asked About You
On first run, a quick interactive setup captures who you are:
◆ What should I call you?
◆ What's your main thing? (developer, designer, student, manager, generalist)
◆ How deep in the game? (beginner → expert)
◆ How do you like answers? (concise, balanced, thorough, socratic)
◆ What are you working on? (optional)
◆ Want a companion specialty? (13 pre-built personalities from aman-showcase)
Takes ~30 seconds. Update anytime with /profile edit.
Talk
# Override model per session (standalone CLI)
aman-agent --model claude-opus-4-6
# Adjust system prompt token budget
aman-agent --budget 12000
Standalone CLI install
For automation, CI, VPS, Raspberry Pi, or anyone who prefers the CLI directly — install the aman-agent binary. Everything above in Quick Start still applies once it's installed.
# One-liner install (no Node.js required) — Linux, macOS, Raspberry Pi
curl -fsSL https://raw.githubusercontent.com/amanasmuei/aman-agent/main/install.sh | bash
# Or via npm (if you already have Node.js 18+)
npm install -g @aman_asmuei/aman-agent
# Or via Docker
docker run -it -e ANTHROPIC_API_KEY=sk-... ghcr.io/amanasmuei/aman-agent
Works on Linux (x64, arm64, armv7l), macOS (x64, Apple Silicon), Raspberry Pi, VPS, and servers. The installer vendors Node.js 22 LTS invisibly — no sudo, no prerequisites.
Usage Guide
A step-by-step walkthrough of how to use aman-agent day-to-day. Click any section below to expand.
Project Dev Mode
The fastest way to start working on any project. One command sets up everything:
aman-agent dev
What happens:
- Stack Detection — Scans your project directory for `package.json`, `go.mod`, `Cargo.toml`, `pyproject.toml`, `pubspec.yaml`, `docker-compose.yml`, `.github/workflows/`, and more
- Memory Recall — Queries your amem database for past decisions, corrections, and conventions related to this project and stack
- Context Assembly — Pulls your identity (acore), guardrails (arules), and developer preferences into a structured CLAUDE.md
- Auto-Launch — Launches Claude Code in the project directory with full context loaded
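The detection step boils down to mapping well-known manifest files to stacks. A hypothetical sketch (the mapping table is an assumption; the real detector covers more files than shown):

```typescript
// Illustrative manifest-to-stack mapping behind the "Detected: ..." output.
// The table is an assumption; the real detector covers more files.
const MANIFESTS: Record<string, string> = {
  "package.json": "Node.js",
  "go.mod": "Go",
  "Cargo.toml": "Rust",
  "pyproject.toml": "Python",
  "pubspec.yaml": "Dart/Flutter",
  "docker-compose.yml": "Docker",
};

function detectStack(filesInRoot: string[]): string[] {
  return filesInRoot.filter(f => f in MANIFESTS).map(f => MANIFESTS[f]);
}
```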
$ aman-agent dev ~/projects/amantrade
Detected: Go (Fiber) + Postgresql + Docker + Github-actions
Recalled: 8 memories (4 decisions, 3 corrections, 1 convention)
✓ CLAUDE.md written (template mode)
Launching Claude Code...
Smart mode — Use your LLM to synthesize a more tailored CLAUDE.md:
aman-agent dev --smart
The LLM merges related corrections into single convention statements and removes redundancy. Falls back to template mode automatically if the LLM call fails.
Multi-editor support — Same memory, any editor:
aman-agent dev # Claude Code (default) → CLAUDE.md
aman-agent dev --copilot # VS Code + Copilot → .github/copilot-instructions.md
aman-agent dev --cursor # Cursor → .cursorrules
All three use the same pipeline: stack detection → amem recall → context assembly. Only the output file and launcher differ.
Yolo mode — Full autonomous, no permission prompts (Claude Code only):
aman-agent dev --yolo # skip permissions
aman-agent dev --yolo --smart # skip permissions + LLM-generated context
Multi-project workflow — Each terminal is independent:
# Terminal 1
aman-agent dev ~/projects/amantrade
# Terminal 2
aman-agent dev ~/projects/aman-mcp
# Terminal 3
aman-agent dev ~/projects/new-api
All three share the same amem database. A decision you make in one project is available to the others on next run.
Staleness detection — If you've made new decisions since the last CLAUDE.md generation, aman-agent dev auto-updates it and shows you what changed:
✓ CLAUDE.md updated (3 changes)
+ Added: zerolog convention (from correction 2026-04-10)
+ Added: rate limiting at gateway level (from decision 2026-04-11)
- Removed: slog preference (superseded by zerolog correction)
If the CLAUDE.md is still fresh, it skips regeneration and launches Claude Code immediately.
Your First Conversation
On first run, you set up your profile, then the agent greets you personally:
$ aman-agent
aman agent — your AI companion
✓ Auto-detected Anthropic API key. Using claude-sonnet-4-6.
✓ Profile saved for Aman
✓ Loaded: identity, user, guardrails (2,847 tokens)
✓ Memory consolidated
✓ MCP connected
✓ Aman is ready for Aman. Model: claude-sonnet-4-6
You > Hey, I'm working on a Node.js API
Aman ──────────────────────────────────────────────
Nice to meet you! I'm Aman, your AI companion. I'll remember
what matters across our conversations — your preferences,
decisions, and patterns.
What kind of API are you building? I can help with architecture,
auth, database design, or whatever you need.
────────────────────────────────────── [1 memory stored]
That's it. No setup required. The agent remembers your stack from this point forward.
How Memory Works
Memory is automatic. You don't need to do anything — the agent silently extracts important information from every conversation:
- Preferences — "I prefer Vitest over Jest" → remembered
- Decisions — "Let's use PostgreSQL" → remembered
- Patterns — "User always writes tests first" → remembered
- Facts — "The auth service is in /services/auth" → remembered
Memory shows up naturally in responses:
You > Let's add a new endpoint
Aman ──────────────────────────────────────────────
Based on your previous decisions, I'll set it up with:
- PostgreSQL (your preference)
- JWT auth (decided last session)
- Vitest for tests
──────────────────────────────── memories: ~47 tokens
Useful memory commands:
/memory search auth Search your memories
/memory timeline See memory growth over time
/decisions View your decision log
Memory mirror (markdown)
Your memory DB is great for the agent, but sometimes you want memories as plain Markdown files you can grep, edit in any editor, commit to git, or sync with Dropbox/iCloud across devices. The mirror gives you exactly that — a human-readable copy of every memory that round-trips cleanly back into the DB.
/memory export --to <dir> One-shot snapshot of all memories as .md files
/memory mirror status Show live mirror path, file count, last sync
/memory mirror rebuild Re-materialise the mirror from the DB
/memory sync --from <dir> Reconstitute the DB from a directory of .md files
When you want this:
- Human-readable backup — `grep -r "postgres" ~/.aman-agent/memories/` to see every memory that mentions Postgres.
- Edit outside the agent — fix a typo or retire a stale decision in your favourite editor, then run `/memory sync --from ...` (or just restart — startup auto-syncs).
- Version-control your memory — `git init ~/.aman-agent/memories/` and you get a full history of every memory the agent has ever learned.
- Multi-device sync — point the mirror dir at a synced folder (Dropbox, iCloud, Syncthing) and your memories follow you.
- No lock-in — YAML-frontmatter Markdown is readable without aman-agent ever being installed again.
Opt-out: `config.mirror.enabled = false` disables all mirror I/O. To disable only the startup auto-sync, set `config.mirror.autoSyncOnStartup = false`.
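In `~/.aman-agent/config.json` those two switches might look like this (a sketch; only the two key paths named in the docs are real, the surrounding shape is an assumption):

```json
{
  "mirror": {
    "enabled": true,
    "autoSyncOnStartup": false
  }
}
```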
Working with Files & Images
Reference any file path in your message — it gets attached automatically:
You > Review this code ~/projects/api/src/auth.ts
[attached: auth.ts (3.2KB)]
Aman ──────────────────────────────────────────────
Looking at your auth middleware...
Images work the same way — the agent can see them:
You > What's wrong with this schema? ~/Desktop/schema.png
[attached image: schema.png (142.7KB)]
Aman ──────────────────────────────────────────────
I see a few issues with your schema...
Supported files:
- Code/text: `.ts`, `.js`, `.py`, `.go`, `.rs`, `.md`, `.json`, `.yaml`, and 30+ more
- Images: `.png`, `.jpg`, `.jpeg`, `.gif`, `.webp`, `.bmp` (also URLs)
- Documents: `.pdf`, `.docx`, `.xlsx`, `.pptx` (via Docling)
Multiple files in one message work too.
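Picking file references out of a message can be as simple as a path-shaped regex. A hypothetical sketch (the pattern is illustrative and looser than whatever the agent actually uses):

```typescript
// Illustrative extraction of file-path references (~/, /, ./ prefixes)
// from a chat message. The regex is an assumption for this sketch.
function extractPaths(message: string): string[] {
  const re = /(?:~|\.{0,2})\/[\w.\-\/]+\.\w+/g;
  return message.match(re) ?? [];
}
```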
Working with Plans
Plans help you track multi-step work that spans sessions.
Create a plan:
You > /plan create Auth API | Ship JWT auth | Design schema, Build endpoints, Write tests, Deploy
Plan created!
Plan: Auth API (active)
Goal: Ship JWT auth
Progress: [░░░░░░░░░░░░░░░░░░░░] 0/4 (0%)
1. [ ] Design schema
2. [ ] Build endpoints
3. [ ] Write tests
4. [ ] Deploy
Next: Step 1 — Design schema
Mark progress as you work:
You > /plan done
Step 1 done!
Plan: Auth API (active)
Progress: [█████░░░░░░░░░░░░░░░] 1/4 (25%)
1. [✓] Design schema
2. [ ] Build endpoints ← Next
3. [ ] Write tests
4. [ ] Deploy
The AI knows your plan. Every turn, the active plan is injected into context, so the AI knows which step you're on and reminds you to commit after completing steps.
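The progress bar in that output is just filled and empty cells over a fixed width. A hypothetical rendering sketch (the 20-cell width matches the output shown; the function name is illustrative):

```typescript
// Illustrative rendering of the /plan progress bar shown above.
function progressBar(done: number, total: number, width = 20): string {
  const filled = Math.round((done / total) * width);
  const pct = Math.round((done / total) * 100);
  return `[${"█".repeat(filled)}${"░".repeat(width - filled)}] ${done}/${total} (${pct}%)`;
}
```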
Resume across sessions. Close the terminal, come back tomorrow — your plan is still there:
$ aman-agent
Welcome back. You're on step 2 of Auth API — Build endpoints.
All plan commands:
/plan Show active plan
/plan done [step#] Mark step complete (next if no number)
/plan undo <step#> Unmark a step
/plan list Show all plans
/plan switch <name> Switch active plan
/plan show <name> View a specific plan
Plans are stored as markdown in .acore/plans/ — they're git-trackable.
Skills in Action
Skills activate automatically based on what you're talking about. No commands needed.
You > How should I handle SQL injection in this query?
[skill: security Lv.3 activated]
[skill: database Lv.2 activated]
Aman ──────────────────────────────────────────────
Use parameterized queries — never interpolate user input...
Skills level up as you use them:
| Level | Label | What changes |
|:---|:---|:---|
| Lv.1 | Learning | Detailed explanations, examples |
| Lv.2 | Familiar | Brief reasoning, show patterns |
| Lv.3 | Proficient | Task-focused, skip basics |
| Lv.4 | Advanced | Edge cases, proactive suggestions |
| Lv.5 | Expert | Just execute, no hand-holding |
Skills also self-improve — when the agent learns your patterns (e.g., "user prefers Prisma over raw SQL"), it enriches the skill with your preferences.
12 built-in skill domains: testing, api-design, security, performance, code-review, documentation, git-workflow, debugging, refactoring, database, typescript, accessibility
10 knowledge library items auto-suggested when relevant: security-headers, docker-node, github-actions, env-config, error-handling, rate-limiter, prisma-setup, zod-validation, testing-patterns, git-hooks
Project Workflow
aman-agent is project-aware. When you run it in a project directory, it loads project-specific context.
Set up a project:
cd ~/my-project
npx @aman_asmuei/acore # Creates .acore/context.md with detected stack
aman-agent # Loads project context automatically
What gets scoped to your project:
- Decisions and topology (stored in amem with `project:my-project` scope)
- Session state (saved to `.acore/context.md` on exit)
- Plans (stored in `.acore/plans/`)
Switch projects naturally:
cd ~/project-a && aman-agent # Loads project-a context + memories
cd ~/project-b && aman-agent # Loads project-b context + memories
Global preferences (coding style, tool choices) carry across all projects.
Personality & Wellbeing
The agent adapts its tone based on time of day and how you're doing:
- Morning: energetic, direct
- Afternoon: steady, focused
- Evening: warm, winding down
- Late night: gentle, reflective
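A time-of-day tone table like that reduces to a small lookup (hypothetical; the hour boundaries are assumptions, only the four tone labels come from the list above):

```typescript
// Illustrative time-of-day → tone mapping; the boundaries are assumptions.
function toneFor(hour: number): string {
  if (hour < 5) return "gentle, reflective";  // late night
  if (hour < 12) return "energetic, direct";  // morning
  if (hour < 18) return "steady, focused";    // afternoon
  if (hour < 22) return "warm, winding down"; // evening
  return "gentle, reflective";                // late night again
}
```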
It also reads your messages for frustration, confusion, or fatigue — and adapts:
You > ugh nothing works, tried everything!!
[sentiment: frustrated]
Aman ──────────────────────────────────────────────
That sounds frustrating. Let's step back and try a different
angle — what changed since it last worked?
Wellbeing nudges appear when needed:
- Long late-night session → gentle suggestion to rest
- 90+ minutes of frustration → suggest a break
- User seems tired → concise responses, support wrapping up
These are one-time nudges — the agent won't nag.
Customization
Persona presets for different work styles:
aman-agent init
# Choose: Coding Partner, Creative Collaborator,
# Personal Assistant, Learning Buddy, or Minimal
Guardrails control what the AI should and shouldn't do:
/rules add Coding Always write tests before merging
/rules add Never Delete production data without confirmation
Workflows teach the AI multi-step processes:
/workflows add code-review
Hook toggles in ~/.aman-agent/config.json:
{
"hooks": {
"memoryRecall": true,
"personalityAdapt": true,
"extractMemories": true,
"featureHints": true
}
}
Set any to false to disable.
Showcase Templates
Give your companion a pre-built specialty from aman-showcase:
| Template | What it does |
|:---|:---|
| Muslim | Islamic daily companion — prayer times, hadith, du'a |
| Quran | Quranic Arabic vocabulary with transliteration |
| Fitness | Personal trainer — workout tracking, nutrition |
| Freelancer | Client & invoice tracking for independents |
| Kedai | Small business assistant (BM/EN) |
| Money | Personal finance & budget tracker |
| Monitor | Price/website/keyword watchdog |
| Bahasa | Malay/English language tutor |
| Team | Standups, tasks, team memory |
| Rutin | Medication reminders for family |
| Support | Customer support with escalation |
| IoT | Sensor monitoring for smart homes |
| Feed | News aggregation & filtering |
Install during onboarding or anytime:
npx @aman_asmuei/aman-showcase install muslim
Each template includes identity, workflows, rules, and domain skills — all installed into your ecosystem.
Your Profile vs Agent Profiles
Your profile is who YOU are — name, role, expertise, communication style. Set during onboarding, injected into every conversation:
/profile me View your profile
/profile edit Edit a field
/profile setup Re-run full setup
Agent profiles are different AI personalities for different tasks:
aman-agent --profile coder # direct, code-first
aman-agent --profile writer # creative, story-driven
aman-agent --profile researcher # analytical, citation-focused
Each agent profile has its own identity, rules, and skills — but shares the same memory. Create profiles:
/profile create coder Install built-in template
/profile create mybot Create custom profile
/profile list Show all profiles
Task Orchestration
Describe what you want to build — aman-agent decomposes it into a DAG of tasks, assigns each to the right specialist agent, and executes them in parallel:
You > /orchestrate Add user authentication with JWT, password hashing, and rate limiting
Decomposing requirement into task DAG...
## User Authentication
**Goal:** JWT auth with security hardening
**Tasks:** 5 | **Gates:** 1
- **Design auth architecture** → architect [advanced] (root)
- **Implement JWT middleware** → coder [standard] (after: design)
- **Add rate limiting** → coder [standard] (after: design)
- **Write test suite** → tester [standard] (after: jwt, rate-limit)
- **Security review** → security [standard] (after: tests)
- 🔒 **Human approval** [approval]
How it works:
- Your LLM decomposes the requirement into a validated DAG (no cycles, valid refs)
- The scheduler runs independent tasks in parallel (configurable concurrency)
- Each task is routed to the right LLM tier — Haiku for simple, Sonnet for coding, Opus for architecture
- Approval gates pause execution for human review at critical points
- After completion, the self-review loop runs reviewer + tester agents on the output
- Circuit breakers prevent cascade failures; checkpoints enable crash recovery
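The DAG validation step above can be sketched as follows. This is a minimal illustration, not aman-agent's actual implementation: the `Task` shape and `validateDag` name are assumptions, and cycle detection uses Kahn's algorithm (if topological ordering cannot consume every task, a cycle exists).

```typescript
// Illustrative DAG validation sketch: reject unknown refs, then detect
// cycles with Kahn's algorithm. Task shape is an assumption.
type Task = { id: string; after: string[] };

function validateDag(tasks: Task[]): { ok: boolean; error?: string } {
  const ids = new Set(tasks.map((t) => t.id));
  for (const t of tasks)
    for (const dep of t.after)
      if (!ids.has(dep)) return { ok: false, error: `unknown ref: ${dep}` };

  // Kahn's algorithm: repeatedly consume tasks with no remaining deps.
  const indegree = new Map(tasks.map((t) => [t.id, t.after.length]));
  const dependents = new Map<string, string[]>();
  for (const t of tasks)
    for (const dep of t.after) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(t.id);
    }
  const queue = tasks.filter((t) => t.after.length === 0).map((t) => t.id);
  let seen = 0;
  while (queue.length) {
    const id = queue.shift()!;
    seen++;
    for (const next of dependents.get(id) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  // If some task was never consumable, its deps form a cycle.
  return seen === tasks.length ? { ok: true } : { ok: false, error: "cycle detected" };
}
```

Run against the five-task auth plan above, this accepts the DAG; a plan where two tasks wait on each other is rejected before any agent runs.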
Pre-built templates:
/orchestrate --template full-feature # architect → coders → review + test
/orchestrate --template bug-fix # reproduce → fix → test → review
/orchestrate --template security-audit # scan → triage → fix → rescan
Cost awareness: The cost tracker monitors token usage per tier. Set a budget limit in config to prevent runaway costs.
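A per-tier tracker with a budget cap might look like the sketch below. The tier names and per-1K-token prices are purely illustrative — they are not aman-agent's real config keys or any provider's actual pricing.

```typescript
// Hypothetical cost tracker: accumulate spend per recorded call and
// fail fast once the configured budget is exceeded. Prices are made up.
const PRICE_PER_1K_TOKENS: Record<string, number> = {
  simple: 0.001,
  standard: 0.003,
  advanced: 0.015,
};

class CostTracker {
  private spent = 0;
  constructor(private budgetUsd: number) {}

  record(tier: string, tokens: number): void {
    this.spent += (tokens / 1000) * (PRICE_PER_1K_TOKENS[tier] ?? 0);
    if (this.spent > this.budgetUsd)
      throw new Error(`budget exceeded: $${this.spent.toFixed(4)} > $${this.budgetUsd}`);
  }

  total(): number {
    return this.spent;
  }
}
```

Stopping at a hard budget (rather than merely warning) is what makes "runaway costs" impossible: the orchestrator aborts remaining tasks the moment a record pushes spend over the cap.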
GitHub Automation
aman-agent speaks GitHub natively via the gh CLI:
You > /github
Repo: amanasmuei/aman-agent (authenticated)
Branch: main
You > /github issues --limit 5
#42 Add user auth feature, security alice
#41 Fix login redirect bug bob
#40 Update dependencies chore unassigned
...
You > /github plan 42
Fetching issue #42: "Add user auth"...
Decomposing into task DAG...
## Add user auth
**Tasks:** 4 | **Gates:** 1
- **Design auth flow** → architect [advanced] (root)
- **Implement JWT** → coder [standard] (after: design)
- **Write tests** → tester [standard] (after: implement)
- **Security review** → security [standard] (after: tests)
You > /github ci main
CI Status: ✓ passing (workflow: ci.yml, 2m ago)
Available commands:
| Command | Description |
|:---|:---|
| /github | Show current repo info and auth status |
| /github issues | List open issues |
| /github prs | List open pull requests |
| /github plan <number> | Decompose issue into orchestrator task DAG |
| /github ci <branch> | Check CI status for a branch |
Agent Delegation
Delegate tasks to sub-agents with specialist profiles:
/delegate writer Write a blog post about AI companions
[delegating to writer...]
[writer] ✓ (2 tool turns)
# Building AI Companions That Actually Remember You
...
Pipeline delegation — chain agents sequentially:
/delegate pipeline writer,researcher Write and fact-check an article
[writer] ✓ — drafted article
[researcher] ✓ — verified claims, added citations
The AI also auto-suggests delegation when it recognizes a task matches a specialist profile — always asks for your permission first.
Agent Teams
Named teams of agents that collaborate on complex tasks:
/team create content-team Install built-in team
/team run content-team Write a blog post about AI
Team: content-team (pipeline)
Members: writer → researcher
[writer: drafting...] ✓
[researcher: fact-checking...] ✓
Final output with verified claims.
3 execution modes:
| Mode | How it works |
|:---|:---|
| pipeline | Sequential: agent1 → agent2 → agent3 |
| parallel | All agents work concurrently, coordinator merges |
| coordinator | AI plans how to split the task, assigns to members |
Built-in teams:
| Team | Mode | Members |
|:---|:---|:---|
| content-team | pipeline | writer → researcher |
| dev-team | pipeline | coder → researcher |
| research-team | pipeline | researcher → writer |
Create custom teams:
/team create review-squad pipeline coder:implement,researcher:review
/team run review-squad Build a rate limiter in TypeScript
The AI auto-suggests teams when appropriate — always asks permission first.
Multi-agent (A2A)
Run aman-agent as a local MCP server so other aman-agent instances — on the same machine — can delegate tasks to it via the @name syntax. No new protocol, no new daemon, no broker — just MCP over localhost using the existing @modelcontextprotocol/sdk bits.
Start a specialist agent in server mode:
aman-agent serve --name coder --profile coder
✓ Ecosystem loaded
✓ registered as @coder
✓ port 52341 (127.0.0.1) — token is in ~/.aman-agent/registry.json
Leave it running. The process binds an ephemeral localhost port and writes its {name, pid, port, token} into ~/.aman-agent/registry.json (mode 0600). On SIGINT/SIGTERM it cleans up the registry entry.
From another aman-agent instance, delegate to it:
You > /agents list
Running agents:
@coder coder pid=4512 port=52341 up 34s
You > /delegate @coder Refactor src/auth.ts to use async/await
[delegating to @coder...]
✓ @coder completed in 12.4s
...
Commands:
| Command | What it does |
|:---|:---|
| aman-agent serve --name X --profile Y | Run as a local A2A server |
| /agents list | Show running agents on this machine |
| /agents info <name> | Call agent.info MCP tool on the named agent |
| /agents ping <name> | Bearer-authed /health latency check |
| /delegate @<name> <task> | Delegate via MCP to another running agent |
How it works: Each serve process binds an ephemeral localhost port, mounts an MCP StreamableHTTPServerTransport, and writes its {name, profile, pid, port, token} into ~/.aman-agent/registry.json (mode 0600). The calling agent looks up the target in the registry, dials via StreamableHTTPClientTransport with the bearer token, and invokes the agent.delegate MCP tool. Three tools are exposed on every serve instance: agent.info, agent.delegate, agent.send.
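The registry-lookup step can be sketched as below. This is a hedged illustration: the entry shape mirrors the fields the doc describes ({name, profile, pid, port, token}), but the function name, the registry being a JSON array, and the `/mcp` path are assumptions, not aman-agent's confirmed internals.

```typescript
// Sketch of resolving "@name" to a localhost MCP endpoint + bearer token
// from registry.json content. Shapes and paths are illustrative.
type RegistryEntry = {
  name: string;
  profile: string;
  pid: number;
  port: number;
  token: string;
};

function resolveAgent(registryJson: string, ref: string): { url: string; token: string } {
  const name = ref.startsWith("@") ? ref.slice(1) : ref;
  const entries: RegistryEntry[] = JSON.parse(registryJson);
  const entry = entries.find((e) => e.name === name);
  if (!entry) throw new Error(`no running agent named @${name}`);
  // Loopback only — matches the same-user, same-machine trust model.
  return { url: `http://127.0.0.1:${entry.port}/mcp`, token: entry.token };
}
```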
Trust model: Same-user, same-machine. OS file permissions on ~/.aman-agent/registry.json are the trust boundary — if you can read the registry, you know the tokens. Cross-machine A2A is a future addition; it will need explicit auth and is out of scope for v0.31.
Known limitations (v0.31):
- Event-loop leak on delegateRemote — the MCP SDK's StreamableHTTPClientTransport keeps an internal resource alive after close()/terminateSession(), so a Node script that calls delegateTask("@name", ...) and expects a clean exit must call process.exit(0) explicitly. The interactive REPL is unaffected (it exits via /quit). Follow-up investigation planned.
- No cross-machine A2A — the registry is a local JSON file; LAN/remote agents are not supported yet.
- agent.send is in-memory only — messages are lost if the target agent crashes before draining them.
- No proactive push to humans — delivering a message to Telegram/Discord/etc. is achannel's job, not aman-agent's.
Session Telemetry & Post-Mortems
aman-agent now passively observes what happens during a session and can produce a structured post-mortem on demand or automatically.
Live observation dashboard:
You > /observe
Session: 47 min | Tools: 23 calls (2 errors) | Files: 5 changed
Blockers: 1 | Milestones: 3 | Topic shifts: 2
Pause / resume capture when you don't want noisy commands recorded:
You > /observe pause
Observation paused. Use /observe resume to continue.
Generate a post-mortem on demand:
You > /postmortem
# Post-Mortem: 2026-04-11
**Session:** session-2026-04-11-2143 | **Duration:** 47 min | **Turns:** 23
## Summary
Refactored the auth middleware and shipped JWT validation...
## Completed
- [x] Extracted token handler
- [x] Wired rate limiter
## Blockers
- Rate limit hit on token endpoint mid-session
## Patterns
- Detect rate limits earlier
...
Saved → ~/.acore/postmortems/2026-04-11-sess.md
Automatic on session end when any of these triggers fire:
- ≥3 tool errors
- ≥2 user blockers
- >60 minute session
- Plan steps abandoned
- Sustained frustration (5+ frustration signals)
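The trigger check reduces to one boolean over session stats. A minimal sketch, with field names assumed (the thresholds mirror the list above):

```typescript
// Sketch of the post-mortem trigger check. SessionStats field names are
// assumptions; thresholds match the documented trigger list.
type SessionStats = {
  toolErrors: number;
  userBlockers: number;
  durationMin: number;
  abandonedPlanSteps: number;
  frustrationSignals: number;
};

function shouldRunPostmortem(s: SessionStats): boolean {
  return (
    s.toolErrors >= 3 ||
    s.userBlockers >= 2 ||
    s.durationMin > 60 ||
    s.abandonedPlanSteps > 0 ||
    s.frustrationSignals >= 5
  );
}
```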
Recurring patterns from the report are stored as pattern memories so the next session benefits.
Cross-session trends:
/postmortem --since 7d Analyze the last 7 days of post-mortems
/postmortem last Show the most recent report
/postmortem list List all saved post-mortems
Storage: ~/.acore/observations/*.jsonl (raw events) and ~/.acore/postmortems/*.md (reports). Old observations auto-prune after 30 days. Disable with recordObservations: false or autoPostmortem: false in config.
Daily Workflow Summary
Here's what a typical day looks like with aman-agent:
Morning:
$ aman-agent dev ~/projects/amantrade
→ Detects stack: Go (Fiber) + PostgreSQL + Docker
→ Recalls 12 memories (decisions, conventions, corrections)
→ Generates project-specific CLAUDE.md
→ Launches Claude Code — full context loaded, zero re-explaining
→ "Welcome back. You're on step 3 of Auth API."
# Working on a second project in parallel?
$ aman-agent dev ~/projects/aman-mcp # new terminal
→ Same memory database, different project context
→ Decisions from amantrade are available here too
Afternoon:
→ Work on your plan, skills auto-activate as needed
→ /plan done after each step, commit your work
→ Personality shifts to steady pace
Evening:
→ /quit or Ctrl+C
→ Session auto-saved to memory
→ Plan progress persisted
→ Optional quick session rating
Next morning:
$ aman-agent dev # in any project
→ CLAUDE.md auto-updates if new memories exist
→ Everything picks up where you left off
Features
Intelligent Companion Features
Per-Message Memory Recall with Progressive Disclosure
Every message you send triggers a semantic search against your memory database. Results use progressive disclosure — a compact index (~50-100 tokens) is injected instead of full content (~500-1000 tokens), giving ~10x token savings. The agent shows the cost:
You > Let's set up the auth service
[memories: ~47 tokens]
Agent recalls:
a1b2c3d4 [decision] Auth service uses JWT tokens... (92%)
e5f6g7h8 [preference] User prefers PostgreSQL... (88%)
i9j0k1l2 [fact] Auth middleware rewrite driven by compliance... (75%)
Aman > Based on our previous decisions, I'll set up JWT-based auth
with PostgreSQL, keeping the compliance requirements in mind...
Silent Memory Extraction
After every response, the agent analyzes the conversation and extracts memories worth keeping — preferences, facts, patterns, decisions, corrections, and topology are all stored automatically. No confirmation prompts interrupting your flow.
You > I think we should go with microservices for the payment system
Aman > That makes sense given the compliance isolation requirements...
[1 memory stored]
Don't want something remembered? Use /memory search to find it and /memory clear to remove it.
Rich Terminal Output
Responses are rendered with full markdown formatting — bold, italic, code, code blocks, tables, lists, and headings all display beautifully in your terminal. Responses are framed with visual dividers:
Aman ──────────────────────────────────────────────
Here's how to set up Docker for this project...
──────────────────────────────── memories: ~45 tokens
First-Run & Returning Greeting
First session: Your companion introduces itself and asks your name — the relationship starts naturally.
Returning sessions: A warm one-liner greets you with context from your last conversation:
Welcome back. Last time we talked about your Duit Raya tracker.
Reminder: Submit PR for auth refactor (due today)
Progressive Feature Discovery
aman-agent surfaces tips about features you haven't tried yet, at the right moment:
Tip: Teach me multi-step processes with /workflows add
One hint per session, never repeated. Disable with hooks.featureHints: false.
Human-Readable Errors
No more cryptic API errors. Every known error maps to an actionable message:
API key invalid. Run /reconfig to fix.
Rate limited. I'll retry automatically.
Network error. Check your internet connection.
Failed messages are preserved — just press Enter to retry naturally.
LLM-Powered Context Summarization
When the conversation gets long, the agent uses your LLM to generate real summaries — preserving decisions, preferences, and action items. No more losing critical context to 150-character truncation.
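The trim-with-summary idea can be sketched like this. The ~4-chars-per-token estimate, the `Turn` shape, and the injected `summarize` callback (standing in for the real LLM call) are all assumptions for illustration:

```typescript
// Sketch of context trimming: when the estimated token count crosses the
// limit, replace older turns with one LLM-written summary turn and keep
// the most recent turns verbatim. summarize() stands in for the LLM call.
type Turn = { role: "user" | "assistant"; text: string };

const estimateTokens = (turns: Turn[]): number =>
  Math.ceil(turns.reduce((n, t) => n + t.text.length, 0) / 4); // rough heuristic

function trimWithSummary(
  turns: Turn[],
  limit: number,
  summarize: (old: Turn[]) => string,
  keepRecent = 4,
): Turn[] {
  if (estimateTokens(turns) <= limit || turns.length <= keepRecent) return turns;
  const old = turns.slice(0, turns.length - keepRecent);
  const summary: Turn = { role: "assistant", text: `[Summary] ${summarize(old)}` };
  return [summary, ...turns.slice(-keepRecent)];
}
```

The key design point from the doc: the summary is *generated*, so decisions, preferences, and action items survive, instead of being lost to blind truncation.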
Parallel Tool Execution
When the AI needs multiple tools, they run in parallel via Promise.all instead of sequentially. Faster responses, same guardrail checks.
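A minimal sketch of that pattern, with an illustrative per-call guardrail check (the `ToolCall` shape and `allowed` predicate are assumptions, not aman-agent's API):

```typescript
// Sketch: run every tool call concurrently via Promise.all, applying the
// guardrail check to each call individually before it executes.
type ToolCall = { name: string; run: () => Promise<string> };

async function runTools(
  calls: ToolCall[],
  allowed: (name: string) => boolean,
): Promise<string[]> {
  return Promise.all(
    calls.map(async (c) => {
      if (!allowed(c.name)) return `blocked: ${c.name}`; // guardrail still applies per call
      return c.run();
    }),
  );
}
```

Because `Promise.all` preserves input order, results map back to their calls even though execution overlaps.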
Retry with Backoff
LLM calls and MCP tool calls automatically retry on transient errors (rate limits, timeouts) with exponential backoff and jitter. Auth errors fail immediately.
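A compact sketch of that retry loop. The `isRetryable` classifier is an assumption standing in for whatever aman-agent uses to separate transient errors (rate limits, timeouts) from permanent ones (bad auth):

```typescript
// Sketch of retry with exponential backoff + jitter. Non-retryable errors
// (e.g. auth failures) are rethrown immediately.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (e: unknown) => boolean,
  maxAttempts = 3,
  baseMs = 250,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      if (!isRetryable(e) || attempt >= maxAttempts) throw e; // fail fast
      // Exponential backoff with jitter: spread retries out so parallel
      // callers don't hammer the API in lockstep.
      const delay = baseMs * 2 ** (attempt - 1) * (0.5 + Math.random() / 2);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}
```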
Passive Tool Observation Capture
Every tool the AI executes is automatically logged to amem's conversation log — tool name, input, and result. This happens passively (fire-and-forget) without slowing down the agent. Your AI builds a complete history of what it did, not just what it said.
Token Cost Visibility
Every memory recall shows how many tokens it costs, so you always know the overhead:
[memories: ~47 tokens]
Personality Engine
The agent adapts its personality in real-time based on signals:
- Time of day: morning (high-drive) → afternoon (steady) → night (reflective)
- Session duration: gradually shifts from energetic to mellow
- User sentiment: detects frustration, excitement, confusion, fatigue from keywords
- Wellbeing nudges: suggests breaks when you've been at it too long, gently mentions sleep during late-night sessions
All state syncs to acore's Dynamics section — works across aman-agent, achannel, and aman-plugin.
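The time-of-day signal alone is a simple mapping. The labels mirror the list above; the hour boundaries here are assumptions for illustration:

```typescript
// Sketch of the time-of-day personality signal. Boundaries are assumed.
function timeOfDayMode(hour: number): "high-drive" | "steady" | "reflective" {
  if (hour >= 5 && hour < 12) return "high-drive"; // morning
  if (hour >= 12 && hour < 18) return "steady";    // afternoon
  return "reflective";                             // evening and night
}
```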
Auto-Triggered Skills
When you talk about security, the security skill activates. Debugging? The debugging skill loads. No commands needed.
- 12 skill domains with keyword matching
- Skill leveling (Lv.1→Lv.5): adapts explanation depth to your demonstrated mastery
- Self-improving: memory extraction enriches skills with your specific patterns over time
- Knowledge library: 10 curated reference items auto-suggested when relevant
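Keyword-triggered activation can be sketched in a few lines. The skill-to-keyword map below is illustrative, not aman-agent's real 12-domain configuration:

```typescript
// Sketch of keyword matching for auto-triggered skills. The map contents
// are made up; the real agent ships 12 skill domains.
const SKILL_KEYWORDS: Record<string, string[]> = {
  security: ["auth", "token", "vulnerability", "xss"],
  debugging: ["stack trace", "segfault", "breakpoint", "bug"],
};

function triggeredSkills(message: string): string[] {
  const text = message.toLowerCase();
  return Object.entries(SKILL_KEYWORDS)
    .filter(([, words]) => words.some((w) => text.includes(w)))
    .map(([skill]) => skill);
}
```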
Persistent Plans
Create multi-step plans that survive session resets:
/plan create Auth | Add JWT auth | Design schema, Implement middleware, Add tests, Deploy
Plan: Auth (active)
Goal: Add JWT auth
Progress: [████████░░░░░░░░░░░░] 2/5 (40%)
1. [✓] Design schema
2. [✓] Implement middleware
3. [ ] Add tests ← Next
4. [ ] Deploy
Plans stored as markdown in .acore/plans/ — git-trackable, project-local.
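The 20-character progress bar in that output renders from two numbers. A sketch (function name assumed):

```typescript
// Sketch of the plan progress bar: proportionally filled blocks over a
// fixed-width track, plus the done/total and percentage readout.
function progressBar(done: number, total: number, width = 20): string {
  const filled = Math.round((done / total) * width);
  const pct = Math.round((done / total) * 100);
  return `[${"█".repeat(filled)}${"░".repeat(width - filled)}] ${done}/${total} (${pct}%)`;
}
```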
Background Task Execution
Long-running tools (tests, builds, Docker) run in the background while the conversation continues. Results appear when ready.
Project-Aware Sessions
The agent detects your project from the current directory. On exit, it auto-updates .acore/context.md with session state. Next time you open the same project, the AI picks up where you left off.
Reminders
You > Remind me to review PR #42 by Thursday
Aman > I'll set that reminder for you.
[Reminder set: "Review PR #42" — due 2026-03-27]
Next session:
[OVERDUE] Review PR #42 (was due 2026-03-27)
Reminders persist in SQLite across sessions. Set them, forget them, get nudged.
Memory Consolidation
On every startup, the agent automatically merges duplicate memories, prunes stale low-confidence ones, and promotes frequently-accessed entries.
Memory health: 94% (merged 2 duplicates, pruned 1 stale)
Structured Debug Logging
Every operation that can fail logs to ~/.aman-agent/debug.log with structured JSON. No more silent failures — use /debug to see what's happening under the hood.
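One line of that log might be built like this. The field names (`ts`, `level`, `op`) are assumptions, not aman-agent's actual schema:

```typescript
// Sketch of emitting a structured JSON log line for debug.log.
function debugLine(
  op: string,
  level: "info" | "error",
  detail: Record<string, unknown>,
): string {
  return JSON.stringify({ ts: new Date().toISOString(), level, op, ...detail });
}
```

Because every line is a complete JSON object, `/debug` (or plain `jq`) can filter by operation or level instead of grepping free text.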
Passive Session Observation
Every tool call, error, file change, sentiment shift, blocker, milestone, and topic shift is captured as a typed event and written to ~/.acore/observations/*.jsonl. Capture is non-blocking (events buffer in memory and flush in batches). Stats are visible live via /observe.
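The buffer-and-flush mechanics can be sketched as below; the class name, batch size, and injected writer are assumptions (the real writer appends to the `.jsonl` files under ~/.acore/observations/):

```typescript
// Sketch of non-blocking batched capture: events accumulate in memory and
// are serialized to JSONL a batch at a time.
type ObsEvent = { type: string; at: number; data?: unknown };

class ObservationBuffer {
  private events: ObsEvent[] = [];
  constructor(
    private batchSize = 50,
    private write: (jsonl: string) => void = () => {},
  ) {}

  capture(e: ObsEvent): void {
    this.events.push(e); // synchronous push — never blocks the agent loop
    if (this.events.length >= this.batchSize) this.flush();
  }

  flush(): void {
    if (!this.events.length) return;
    // One JSON object per line — the JSONL format the doc describes.
    this.write(this.events.map((e) => JSON.stringify(e)).join("\n") + "\n");
    this.events = [];
  }
}
```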
LLM-Powered Post-Mortems
On session end, if any smart trigger fires (≥3 tool errors, ≥2 blockers, >60 min, abandoned plan steps, or sustained frustration), the agent uses your LLM to generate a structured post-mortem report — summary, goals, completed work, blockers, decisions, sentiment arc, recurring patterns, and actionable recommendations. Patterns are stored back as pattern memories. Reports persist as markdown in ~/.acore/postmortems/.
How It Works
Per-message flow
sequenceDiagram
autonumber
participant U as You
participant A as aman-agent
participant M as amem (memory)
participant L as LLM
participant T as MCP tools
U->>A: your message
A->>M: semantic recall (top 5)
M-->>A: compact index (~50–100 tok)
A->>A: build prompt: identity + rules + skills + memories
A->>L: stream request
L-->>A: response + tool calls
par parallel execution
A->>T: tool call 1 (guardrail-checked)
and
A->>T: tool call 2 (guardrail-checked)
end
T-->>A: results
A-->>U: streamed response
A->>M: extract memories (non-blocking)
A->>A: update user model + sentiment + skills
┌───────────────────────────────────────────────────────────┐
│ Your Terminal │
│ │
│ You > tell me about our auth decisions │
│ │
│ [recalling memories...] │
│ Agent > Based on your previous decisions: │
│ - OAuth2 with PKCE (decided 2 weeks ago) │
│ - JWT for API tokens... │
│ │
│ [1 memory stored] │
└──────────────────────┬────────────────────────────────────┘
│
┌──────────────────────▼────────────────────────────────────┐
│ aman-agent runtime │
│ │
│ On Startup │
│ ┌────────────────────────────────────────────────┐ │
│ │ 1. Load ecosystem (identity, tools, rules...) │ │
│ │ 2. Connect MCP servers (aman-mcp + amem) │ │
│ │ 3. Consolidate memory (merge/prune/promote) │ │
│ │ 4. Check reminders (overdue/today/upcoming) │ │
│ │ 5. Inject time context (morning/evening/...) │ │
│ │ 6. Recall session context from memory │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ Per Message │
│ ┌────────────────────────────────────────────────┐ │
│ │ 1. Semantic memory recall (top 5 relevant) │ │
│ │ 2. Augment system prompt with memories │ │
│ │ 3. Stream LLM response (with retry) │ │
│ │ 4. Execute tools in parallel (with guardrails) │ │
│ │ 5. Extract memories from response │ │
│ │ - Auto-store: preferences, facts, patterns │ │
│ │ - All types auto-stored silently │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ Context Management │
│ ┌────────────────────────────────────────────────┐ │
│ │ Auto-trim at 80K tokens │ │
│ │ LLM-powered summarization (not truncation) │ │
│ │ Fallback to text preview if LLM call fails │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ MCP Integration │
│ ┌────────────────────────────────────────────────┐ │
│ │ aman-mcp → identity, tools, workflows, eval │ │
│ │ amem → memory, knowledge graph, reminders│ │
│ └────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────┘
Session Lifecycle
| Phase | What happens |
|:---|:---|
| Start | Load ecosystem, connect MCP, consolidate memory, check reminders, compute personality state, load active plan |
| Each turn | Recall memories, auto-trigger skills, inject active plan, detect sentiment, stream response, execute tools (parallel + background), extract memories, enrich skills |
| Every 5 turns | Refresh personality state, check wellbeing, sync to acore |
| Auto-trim | LLM-powered summarization when approaching 80K tokens |
| Exit | Save conversation to amem, update session resume, persist personality state, update project context.md, optional session rating |
Commands
CLI Commands
| Command | Description |
|:---|:---|
| aman-agent | Start interactive chat session |
| aman-agent dev [path] | Scan project, generate context, launch editor [--smart\|--yolo\|--copilot\|--cursor\|--no-launch\|--force\|--diff] |
| aman-agent init | Set up your AI companion with a guided wizard |
| aman-agent serve | Run as a local MCP server for agent delegation [--name\|--profile] |
| aman-agent setup | Full reconfiguration wizard |
| aman-agent update | Self-update to latest version |
| aman-agent uninstall | Clean removal of all data and config |
Slash Commands (inside a session)
| Command | Description |
|:---|:---|
| /help | Show available commands |
| /plan | Show active plan [create\|done\|undo\|list\|switch\|show] |
| /profile | Your profile + agent profiles [me\|edit\|setup\|create\|list\|show\|delete] |
| /orchestrate | Decompose requirement into task DAG and execute with parallel agents [<requirement>] |
| /github | GitHub operations [issues\|prs\|plan <number>\|ci <branch>] |
| /delegate | Delegate task to a profile [<profile> <task>\|pipeline] |
| /agents | Multi-agent A2A [list\|info <name>\|ping <name>] |
| /team | Manage agent teams [create\|run\|list\|show\|delete] |
| /identity | View identity [update <section>] [dynamics [--json\|--reset]] |
| /rules | View guardrails [add\|remove\|toggle ...] |
| /workflows | View workflows [add\|remove ...] |
| /tools | View tools [add\|remove ...] |
| /skills | View skills [install\|uninstall\|crystallize\|list --auto] |
| /eval | View evaluation [milestone ...] |
| /memory | View memories [search\|clear\|timeline] |
| /observe | Live session telemetry dashboard [pause\|resume] |
| /postmortem | Generate session post-mortem [last\|list\|--since 7d] |
| /decisions | View decision log [<project>] |
| /export | Export conversation to markdown |
| /debug | Show debug log (last 20 entries) |
| /status | Ecosystem dashboard |
| /doctor | Health check all layers |
| /save | Save conversation to memory |
| /model | Show current LLM model |
| /update | Check for updates |
| /reconfig | Reset LLM configuration |
| /clear | Clear conversation history |
| /quit | Exit |
What It Loads
On every session start, aman-agent assembles your full AI context:
| Layer | Source | What it provides |
|:---|:---|:---|
| Identity | ~/.acore/core.md | AI personality, your preferences, relationship state |
| User | ~/.acore/user.md | Your name, role, expertise level, c