
n8n-nodes-token-aware-memory

This is an n8n community node that provides token-aware memory management for AI workflows with Redis persistence and automatic compression.

The Token-Aware Memory node stores conversation history with intelligent token management, hierarchical memory organization, and automatic summarization when token limits are approached.

n8n is a fair-code licensed workflow automation platform.

  • Installation
  • Operations
  • Configuration
  • Compatibility
  • Usage
  • Resources
  • Version history

Installation

Follow the installation guide in the n8n community nodes documentation.

Operations

Memory Management

  • Store Messages: Automatically stores every user and AI message sent through connected nodes
  • Token Monitoring: Tracks total token usage in real time (a counting sketch follows this list)
  • Hierarchical Storage: Organizes memory into short-term, mid-term, and long-term levels
  • Automatic Compression: Compresses older messages when total usage reaches 80% of the maxTokens limit
  • Full History Retrieval: Returns complete conversation history when requested
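How tokens are counted is not specified in this README, so the sketch below only illustrates real-time usage tracking using a rough characters-per-token heuristic; the heuristic and function names are assumptions, not the node's actual counting method.

  // Rough estimate: ~4 characters per token for English text (heuristic only).
  function estimateTokens(text: string): number {
    return Math.ceil(text.length / 4);
  }

  // Running total across the stored conversation history.
  function totalTokens(messages: Array<{ text: string }>): number {
    return messages.reduce((sum, m) => sum + estimateTokens(m.text), 0);
  }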

Memory Levels

  • Short-Term: Recent messages stored verbatim
  • Mid-Term: Partially summarized older messages
  • Long-Term: Fully compressed historical summaries (a possible data shape is sketched after this list)
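As an illustration only, the three levels could be modeled roughly as below; the interface and field names are assumptions, not the node's published types.

  // Hypothetical shapes for the three memory levels.
  interface MemoryEntry {
    role: 'user' | 'ai';
    text: string;
    tokens: number;
  }

  interface HierarchicalMemory {
    shortTerm: MemoryEntry[]; // recent messages, stored verbatim
    midTerm: MemoryEntry[];   // partially summarized older messages
    longTerm: MemoryEntry[];  // fully compressed historical summaries
  }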

Configuration

Required Parameters

  • Max Tokens: Maximum total tokens allowed before triggering compression (default: 8000)
  • Redis URL: Redis connection URL in the format redis://[:password@]host:port[/database] (default: redis://localhost:6379)
  • Summarization Prompt: Custom prompt template for LLM-based compression

Optional Parameters

  • Session ID: Unique session identifier to separate memory between different conversations/executions (leave empty to auto-generate one); a declaration sketch for these parameters follows this list
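For orientation, these parameters might be declared in an n8n node description roughly as follows. The internal property names (maxTokens, redisUrl, summarizationPrompt, sessionId) are assumptions for illustration, not taken from the node's source.

  import type { INodeProperties } from 'n8n-workflow';

  // Hypothetical declaration of the node's parameters.
  const properties: INodeProperties[] = [
    {
      displayName: 'Max Tokens',
      name: 'maxTokens',
      type: 'number',
      default: 8000,
      description: 'Maximum total tokens allowed before compression is triggered',
    },
    {
      displayName: 'Redis URL',
      name: 'redisUrl',
      type: 'string',
      default: 'redis://localhost:6379',
      description: 'Redis connection URL, e.g. redis://[:password@]host:port[/database]',
    },
    {
      displayName: 'Summarization Prompt',
      name: 'summarizationPrompt',
      type: 'string',
      default: '',
      description: 'Custom prompt template for LLM-based compression',
    },
    {
      displayName: 'Session ID',
      name: 'sessionId',
      type: 'string',
      default: '',
      description: 'Optional session identifier; leave empty to auto-generate one',
    },
  ];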

Connections

  • AI Language Model Input: Connect an LLM node for intelligent message summarization during compression

Compatibility

  • Minimum n8n version: 1.0.0
  • Requires Redis server for memory persistence
  • Tested with Redis 6.0+
  • Note: The Redis dependency may not be compatible with n8n Cloud deployments. For cloud usage, consider alternative memory solutions or contact n8n support.

Usage

Basic Setup

  1. Add the Token-Aware Memory node to your workflow
  2. Configure Redis connection parameters
  3. Connect AI memory output to nodes that need conversation history
  4. Optionally connect an LLM node for summarization

Memory Persistence

Memory is automatically persisted to Redis and survives workflow restarts. Each workflow and node instance maintains its own memory space.
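A minimal sketch of what this persistence could look like with the node-redis client; the key name, message shape, and helper function are assumptions for illustration, not the node's actual Redis schema.

  import { createClient } from 'redis';

  // Append a message and read back the full history for one memory space.
  async function appendAndLoad(redisUrl: string, key: string, message: object) {
    const client = createClient({ url: redisUrl });
    await client.connect();
    await client.rPush(key, JSON.stringify(message)); // history survives workflow restarts
    const history = (await client.lRange(key, 0, -1)).map((raw) => JSON.parse(raw));
    await client.quit();
    return history;
  }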

Token Management

  • Messages are automatically compressed when total tokens reach 80% of maxTokens
  • Compression uses the connected LLM for intelligent summarization
  • Falls back to simple truncation if no LLM is connected (see the sketch after this list)
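A simplified sketch of this behavior: the 80% threshold and the LLM-or-truncation fallback come from this README, while the message shape, the split between older and recent messages, and the summarize callback are illustrative assumptions.

  interface StoredMessage { role: 'user' | 'ai'; text: string; tokens: number; }

  const COMPRESSION_THRESHOLD = 0.8; // compress once usage reaches 80% of maxTokens

  async function maybeCompress(
    messages: StoredMessage[],
    maxTokens: number,
    summarize?: (text: string) => Promise<string>, // connected LLM, if any
  ): Promise<StoredMessage[]> {
    const total = messages.reduce((sum, m) => sum + m.tokens, 0);
    if (total < maxTokens * COMPRESSION_THRESHOLD) return messages;

    const keepCount = Math.ceil(messages.length / 2); // keep the most recent half verbatim
    const older = messages.slice(0, messages.length - keepCount);
    const recent = messages.slice(messages.length - keepCount);

    if (summarize && older.length > 0) {
      // Intelligent summarization of older messages via the connected LLM
      const summary = await summarize(older.map((m) => `${m.role}: ${m.text}`).join('\n'));
      return [{ role: 'ai', text: summary, tokens: Math.ceil(summary.length / 4) }, ...recent];
    }
    // Fallback: simple truncation when no LLM is connected
    return recent;
  }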

Example Workflow

[AI Chat Node] → [Token-Aware Memory] → [AI Response Node]
                     ↓
              [LLM Node for Summarization]

Redis URL Examples

  • redis://localhost:6379 - Local Redis without password
  • redis://:password@localhost:6379 - Local Redis with password
  • redis://:password@remote-host:6379/1 - Remote Redis with password and database 1

Session Isolation

Each Token-Aware Memory node instance uses isolated Redis keys based on:

  • Workflow ID
  • Node ID
  • Session ID (user-provided or auto-generated)

This ensures that multiple conversations or workflow executions don't interfere with each other's memory. Use the Session ID parameter to manually control session grouping.
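A sketch of how such a key could be composed from those three identifiers; the prefix, separator, and function name are hypothetical, since the actual key format is not documented here.

  import { randomUUID } from 'node:crypto';

  // Combine workflow, node, and session identifiers into one isolated Redis key.
  function memoryKey(workflowId: string, nodeId: string, sessionId?: string): string {
    const session = sessionId?.trim() || randomUUID(); // auto-generate when not provided
    return `token-aware-memory:${workflowId}:${nodeId}:${session}`;
  }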

Resources

Version history

0.1.3

  • Initial release with hierarchical memory management
  • Redis persistence support
  • Token-aware automatic compression
  • LLM integration for summarization