n8n-nodes-token-aware-memory
v0.1.30
This is an n8n community node that provides token-aware memory management for AI workflows with Redis persistence and automatic compression.
The Token-Aware Memory node stores conversation history with intelligent token management, hierarchical memory organization, and automatic summarization when token limits are approached.
n8n is a fair-code licensed workflow automation platform.
Contents: Installation | Operations | Configuration | Compatibility | Usage | Resources | Version history
Installation
Follow the installation guide in the n8n community nodes documentation.
Operations
Memory Management
- Store Messages: Automatically stores every user and AI message sent through connected nodes
- Token Monitoring: Tracks total token usage in real-time
- Hierarchical Storage: Organizes memory into short-term, mid-term, and long-term levels
- Automatic Compression: Compresses older messages when reaching 80% of maxTokens limit
- Full History Retrieval: Returns complete conversation history when requested
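The token-monitoring and compression-trigger behavior above can be sketched as follows. This is an illustrative sketch, not the node's actual code: the function names and the rough 4-characters-per-token estimate are assumptions for demonstration.

```typescript
// Hypothetical sketch of token monitoring and the 80%-of-maxTokens trigger.
interface StoredMessage {
  role: string; // "user" or "ai"
  text: string;
  tokens: number;
}

// Rough token estimate (~4 characters per token for English text); a real
// implementation would use a proper tokenizer for the target model.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function totalTokens(history: StoredMessage[]): number {
  return history.reduce((sum, m) => sum + m.tokens, 0);
}

// Compression fires once total usage reaches 80% of the configured maxTokens.
function shouldCompress(history: StoredMessage[], maxTokens: number): boolean {
  return totalTokens(history) >= 0.8 * maxTokens;
}
```

With `maxTokens` at its default of 8000, compression would trigger once the stored history reaches roughly 6400 tokens.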
Memory Levels
- Short-Term: Recent messages stored verbatim
- Mid-Term: Partially summarized older messages
- Long-Term: Fully compressed historical summaries
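One way to picture the three tiers is as a store where compression demotes the oldest verbatim messages into summaries. The shape and demotion logic below are hypothetical, shown only to illustrate the hierarchy; the node's actual schema may differ.

```typescript
// Illustrative shape of the three memory tiers (not the node's actual schema).
interface MemoryStore {
  shortTerm: string[]; // recent messages, stored verbatim
  midTerm: string[];   // partial summaries of older exchanges
  longTerm: string[];  // fully compressed historical summaries
}

// On compression, all but the most recent `keepRecent` short-term messages
// are collapsed into a single mid-term summary via the provided summarizer.
function demoteOldest(
  store: MemoryStore,
  keepRecent: number,
  summarize: (msgs: string[]) => string
): MemoryStore {
  const demoted = store.shortTerm.slice(0, -keepRecent);
  if (demoted.length === 0) return store; // nothing old enough to demote
  return {
    shortTerm: store.shortTerm.slice(-keepRecent),
    midTerm: [...store.midTerm, summarize(demoted)],
    longTerm: store.longTerm,
  };
}
```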
Configuration
Required Parameters
- Max Tokens: Maximum total tokens allowed before triggering compression (default: 8000)
- Redis URL: Redis connection URL in the format redis://[:password@]host:port[/database] (default: redis://localhost:6379)
- Summarization Prompt: Custom prompt template for LLM-based compression
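A quick way to sanity-check a Redis URL before wiring it into the node is Node's built-in URL class. This is purely illustrative (the node handles connection parsing internally), and the helper name is hypothetical.

```typescript
// Parse a redis:// URL into its parts using the WHATWG URL class (built into Node).
function parseRedisUrl(url: string) {
  const u = new URL(url);
  if (u.protocol !== "redis:") {
    throw new Error(`expected a redis:// URL, got ${u.protocol}//`);
  }
  return {
    host: u.hostname,
    port: u.port ? Number(u.port) : 6379, // standard Redis port as fallback
    password: u.password || undefined,
    database: u.pathname.length > 1 ? Number(u.pathname.slice(1)) : 0,
  };
}
```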
Optional Parameters
- Session ID: Unique session identifier to separate memory between different conversations/executions (leave empty for auto-generated)
Connections
- AI Language Model Input: Connect an LLM node for intelligent message summarization during compression
Compatibility
- Minimum n8n version: 1.0.0
- Requires Redis server for memory persistence
- Tested with Redis 6.0+
- Note: Redis dependency may not be compatible with n8n Cloud deployments. For cloud usage, consider alternative memory solutions or contact n8n support.
Usage
Basic Setup
- Add the Token-Aware Memory node to your workflow
- Configure Redis connection parameters
- Connect AI memory output to nodes that need conversation history
- Optionally connect an LLM node for summarization
Memory Persistence
Memory is automatically persisted to Redis and survives workflow restarts. Each workflow and node instance maintains separate memory spaces.
Token Management
- Messages are automatically compressed when total tokens reach 80% of maxTokens
- Compression uses connected LLM for intelligent summarization
- Fallback to simple truncation if no LLM is connected
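The truncation fallback can be sketched as dropping the oldest messages until the history fits the budget. A minimal sketch, assuming per-message token counts are already tracked; the function name is hypothetical.

```typescript
// Hypothetical fallback when no LLM is connected: discard oldest messages
// until the remaining history fits under the token budget (no summarization).
function truncateToFit(
  messages: { text: string; tokens: number }[],
  maxTokens: number
): { text: string; tokens: number }[] {
  const kept = [...messages];
  let total = kept.reduce((sum, m) => sum + m.tokens, 0);
  while (kept.length > 1 && total > maxTokens) {
    total -= kept.shift()!.tokens; // drop the oldest message first
  }
  return kept;
}
```

Unlike LLM summarization, truncation loses the dropped content entirely, which is why connecting an LLM node is preferable for long conversations.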
Example Workflow
[AI Chat Node] → [Token-Aware Memory] → [AI Response Node]
                         ↓
           [LLM Node for Summarization]
Redis URL Examples
- redis://localhost:6379 - Local Redis without password
- redis://:password@localhost:6379 - Local Redis with password
- redis://:password@remote-host:6379/1 - Remote Redis with password and database 1
Session Isolation
Each Token-Aware Memory node instance uses isolated Redis keys based on:
- Workflow ID
- Node ID
- Session ID (user-provided or auto-generated)
This ensures that multiple conversations or workflow executions don't interfere with each other's memory. Use the Session ID parameter to manually control session grouping.
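The isolation scheme amounts to a composite Redis key over the three IDs. The exact key format below is an assumption for illustration; only the principle (one key per workflow/node/session triple) comes from the description above.

```typescript
// Illustrative composite key; the node's actual key format may differ.
function memoryKey(workflowId: string, nodeId: string, sessionId: string): string {
  return `token-aware-memory:${workflowId}:${nodeId}:${sessionId}`;
}
```

Because every component participates in the key, two executions sharing a workflow and node but using different Session IDs read and write disjoint Redis entries.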
Resources
- n8n community nodes documentation
Version history
0.1.3
- Initial release with hierarchical memory management
- Redis persistence support
- Token-aware automatic compression
- LLM integration for summarization
