DDAP - Desire-Driven Adaptive Planning
A comprehensive TypeScript library for Goal-Oriented Action Planning (GOAP), implementing the A* search algorithm with advanced optimizations, desire hierarchies, skill systems, and batch processing capabilities.
Features
- 🎯 Goal-Oriented Planning: Define goals and let the planner find the optimal action sequence
- 🔍 Optimized A* Search: Efficient pathfinding with plan caching, early termination, and priority queues
- 🧠 Desire Hierarchies: Multi-tier priority system for complex agent behaviors
- 📊 Skill Progression: Experience-based skill system with insights and teaching
- 🚀 Batch Processing: Efficient agent management with async execution support
- 🐛 Debug Tools: Plan inspection, state debugging, and action logging
- 📦 Type-Safe: Full TypeScript support with generic types for state management
- ⚡ High Performance: Optimized for <1ms per agent tick for simple agents, <60ms for complex agents
- ✅ Well Tested: Comprehensive unit tests and benchmarks
Installation
npm install ddap
Quick Start
Basic Planning
import { GOAPPlanner, DefaultWorldState } from 'ddap';
import type { Action } from 'ddap';
// Define your state type
type MyState = 'hasItem' | 'atLocation';
// Create initial world state
const worldState = new DefaultWorldState<MyState>();
worldState.set('hasItem', false);
worldState.set('atLocation', false);
// Define actions
const actions: Action<MyState>[] = [
  {
    name: 'GoToLocation',
    cost: 1,
    preconditions: {},
    effects: { atLocation: true },
    canExecute: () => true,
    execute: (state) => state.set('atLocation', true),
  },
  {
    name: 'PickUpItem',
    cost: 1,
    preconditions: { atLocation: true },
    effects: { hasItem: true },
    canExecute: (state) => state.get('atLocation') === true,
    execute: (state) => state.set('hasItem', true),
  },
];
// Define goal
const goal = { hasItem: true };
// Plan
const planner = new GOAPPlanner<MyState>({
  maxIterations: 1000,
  maxDepth: 50,
  enableCache: true,
});
const plan = planner.plan(worldState, goal, actions);
if (plan) {
  console.log('Plan found!');
  plan.actions.forEach((action) => {
    console.log(`- ${action.name}`);
  });
}
Creating an Agent
import { Agent, DesireHierarchy, EvaluatorBuilder, DefaultWorldState } from 'ddap';
import type { Action } from 'ddap';
type AgentState = 'hunger' | 'atKitchen' | 'hasFood';
const worldState = new DefaultWorldState<AgentState>();
worldState.set('hunger', 80);
worldState.set('atKitchen', false);
worldState.set('hasFood', false);
// Create desire hierarchy
const hierarchy = new DesireHierarchy<AgentState>();
// Add a tier for hunger
const hungerEvaluator = new EvaluatorBuilder<AgentState>()
  .when('hunger')
  .above(50)
  .then({ hunger: 0 })
  .build();
hierarchy.addTier({
  priority: 10,
  name: 'Basic Needs',
  evaluator: hungerEvaluator,
  enabled: true,
});
// Create agent
const agent = new Agent(worldState, hierarchy, {
  id: 'my-agent',
  autoReplan: true,
  maxIterations: 1000,
});
// Define actions
const actions: Action<AgentState>[] = [
  {
    name: 'GoToKitchen',
    cost: 1,
    preconditions: {},
    effects: { atKitchen: true },
    canExecute: () => true,
    execute: (state) => state.set('atKitchen', true),
  },
  {
    name: 'Eat',
    cost: 1,
    preconditions: { atKitchen: true },
    effects: { hunger: 0 },
    canExecute: (state) => state.get('atKitchen') === true,
    execute: (state) => state.set('hunger', 0),
  },
];
// Tick the agent
agent.tick(actions);
Batch Processing with AgentManager
import { AgentManager } from 'ddap';
const manager = new AgentManager<AgentState>();
// Add multiple agents
for (let i = 0; i < 100; i++) {
  const agent = createAgent(`agent-${i}`); // createAgent: your own factory that builds an Agent as shown above
  manager.addAgent(agent);
}
// Process all agents in batch
// getAvailableActions is your own function returning the actions relevant to a given agent
const stats = await manager.tickBatch((agent) => getAvailableActions(agent), {
  async: true,
  maxConcurrency: 10,
});
console.log(`Processed ${stats.totalAgents} agents in ${stats.executionTimeMs}ms`);
console.log(`Average: ${stats.averageTimePerAgentMs}ms per agent`);
Architecture
DDAP is built on several key components:
Core Components
- GOAPPlanner: Optimized A* search with caching, early termination, and depth limits
- Agent: Integrates planning, desire evaluation, and action execution
- DesireHierarchy: Multi-tier priority system for goal selection (a two-tier sketch follows this list)
- StateEvaluator: Evaluates world state and produces goals
- ActionRegistry: Manages available actions with skill-based unlocks
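For example, a two-tier hierarchy for goal selection might look like the sketch below. It reuses the builder API from Creating an Agent; the assumption that a numerically higher priority value outranks a lower one is ours, so check the generated docs for the exact ordering.
import { DesireHierarchy, EvaluatorBuilder } from 'ddap';
type AgentState = 'hunger' | 'threat';
const hierarchy = new DesireHierarchy<AgentState>();
// Survival tier: produce a "reduce threat to zero" goal whenever any threat is present.
hierarchy.addTier({
  priority: 20, // assumed to outrank the tier below
  name: 'Survival',
  evaluator: new EvaluatorBuilder<AgentState>().when('threat').above(0).then({ threat: 0 }).build(),
  enabled: true,
});
// Basic needs tier: produce an "eat" goal once hunger passes a threshold.
hierarchy.addTier({
  priority: 10,
  name: 'Basic Needs',
  evaluator: new EvaluatorBuilder<AgentState>().when('hunger').above(50).then({ hunger: 0 }).build(),
  enabled: true,
});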
Advanced Features
- Plan Caching: Plans are cached based on world state and goal state hashes (illustrated after this list)
- Early Termination: Search stops when cost exceeds best known solution
- Priority Queue: Binary heap for efficient open set management
- Evaluator Caching: Results cached per tick with dirty flag invalidation
- Skill Progression: Experience-based system with insights and teaching
- Batch Processing: Efficient multi-agent processing with async support
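To make the plan-caching idea concrete, here is a purely conceptual sketch of hash-based lookup. It is not DDAP's actual implementation; the real planner derives its keys internally, and you only control caching through enableCache and maxCacheSize.
// Conceptual sketch only: a cache keyed by (world state hash, goal hash).
type CachedPlan = { actions: { name: string }[] };
const planCache = new Map<string, CachedPlan>();
// Sort keys so logically equal states hash identically regardless of insertion order.
function hashState(state: Record<string, unknown>): string {
  return JSON.stringify(Object.keys(state).sort().map((key) => [key, state[key]]));
}
function cacheKey(world: Record<string, unknown>, goal: Record<string, unknown>): string {
  return `${hashState(world)}|${hashState(goal)}`;
}
function lookupPlan(world: Record<string, unknown>, goal: Record<string, unknown>): CachedPlan | undefined {
  return planCache.get(cacheKey(world, goal));
}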
API Reference
GOAPPlanner
new GOAPPlanner<T>(options?: PlannerOptions)
Options:
- maxIterations?: number - Maximum search iterations (default: 1000)
- maxDepth?: number - Maximum plan depth (default: 50)
- enableCache?: boolean - Enable plan caching (default: true)
- maxCacheSize?: number - Maximum cache entries (default: 1000)
Methods:
- plan(worldState, goalState, actions): Plan<T> | null - Find optimal plan
- clearCache(): void - Clear plan cache
- getCacheSize(): number - Get current cache size
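For example, continuing with the planner, world state, goal, and actions from the Quick Start:
const firstPlan = planner.plan(worldState, goal, actions);
const repeatPlan = planner.plan(worldState, goal, actions); // an identical request can be served from the cache
console.log(`Cached plans: ${planner.getCacheSize()}`);
planner.clearCache(); // e.g. after the set of available actions changes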
Agent
new Agent<T>(worldState, desireHierarchy, options?: AgentOptions)
Options:
- autoReplan?: boolean - Auto-replan when plan invalid (default: true)
- maxIterations?: number - Maximum planning iterations
- id?: string - Unique agent identifier
- skillProgression?: SkillProgression - Skill progression system
- insightSystem?: InsightSystem - Insight system
Methods:
- tick(availableActions): void - Main update loop
- formPlan(availableActions): Plan<T> | null - Form new plan
- getCurrentPlan(): Plan<T> | null - Get current plan
- getCurrentGoal(): GoalState<T> | null - Get current goal
- clearPlan(): void - Clear current plan
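For example, with the agent and actions from Creating an Agent:
agent.tick(actions);
const currentGoal = agent.getCurrentGoal();
const currentPlan = agent.getCurrentPlan();
if (currentGoal && currentPlan) {
  console.log('Goal:', currentGoal);
  console.log('Plan:', currentPlan.actions.map((a) => a.name).join(' -> '));
}
agent.clearPlan(); // forces a fresh plan on the next tick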
AgentManager
new AgentManager<T>();
Methods:
- addAgent(agent): void - Add agent to manager
- removeAgent(agent): void - Remove agent from manager
- tickBatch(getAvailableActions, options): Promise<BatchStatistics> - Process all agents
- tickBatchSync(getAvailableActions, options): BatchStatistics - Synchronous batch processing
- getAgents(): readonly Agent<T>[] - Get all agents
- clear(): void - Clear all agents
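For example, a synchronous batch tick mirrors the async example above and reuses the same manager and getAvailableActions helper; passing an empty options object here is an assumption, so supply whichever batch options you actually need.
const syncStats = manager.tickBatchSync((agent) => getAvailableActions(agent), {});
console.log(`Processed ${syncStats.totalAgents} agents in ${syncStats.executionTimeMs}ms`);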
Debug Tools
PlanInspector
const inspector = new PlanInspector();
const breakdown = inspector.inspectPlan(plan);
const visualization = inspector.visualizePlan(plan);
const json = inspector.exportPlan(plan);
StateDebugger
const stateDebugger = new StateDebugger();
const info = stateDebugger.getAgentDebugInfo(agent);
const formatted = stateDebugger.formatDebugInfo(info);
ActionLogger
const logger = new ActionLogger();
logger.log(agentId, action, worldStateBefore, worldStateAfter);
const logs = logger.getLogs({ agentId: 'agent-1', limit: 100 });
const csv = logger.exportCSV();
const stats = logger.getStatistics();
Examples
See the examples/ directory for complete examples:
- simple-agent/ - Basic agent finding and eating food
- survival-work-agent/ - Complex agent with job system and skills
- goblin-tribe/ - Multi-agent settlement simulation with 10 goblins
Performance
DDAP is optimized for high performance:
- Simple Agents: <1ms per agent tick
- Complex Agents: <60ms per agent tick (accounts for CI environments; typically ~30ms locally)
- Batch Processing: Efficient parallel execution with configurable concurrency
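As a rough local sanity check (not a substitute for the benchmark suite), you can time a single tick of the agent from Creating an Agent with performance.now():
const start = performance.now();
agent.tick(actions);
const elapsedMs = performance.now() - start;
console.log(`Tick took ${elapsedMs.toFixed(3)}ms`); // a simple agent should come in well under 1ms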
Run benchmarks:
npm test -- tests/benchmarks
Development
# Install dependencies
npm install
# Build
npm run build
# Test
npm test
# Run benchmarks
npm test -- tests/benchmarks
# Generate documentation
npm run docs
# Lint
npm run lint
# Format
npm run format
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Ensure all tests pass
- Submit a pull request
Code Style
- Use TypeScript strict mode
- Follow ESLint and Prettier configurations
- Write tests for new features
- Update documentation as needed
License
MIT
Documentation
Full API documentation is available at: https://tandemwolf.github.io/ddap/
For more examples and advanced usage, see the examples/ directory.
