
n8n-nodes-inner-batched-chain-summarization

v0.1.2

n8n community node with intelligent batched chain summarization for processing large documents efficiently

Downloads: 106

n8n-nodes-inner-batched-chain-summarization

This is an n8n community node that provides intelligent batched chain summarization for processing large documents efficiently with built-in rate limiting and pause functionality.

The Batched Chain Summarization node transforms text into concise summaries using multiple strategies (map-reduce, refine, stuff) with intelligent batching to handle large documents while respecting API rate limits through configurable delays between batches.

n8n is a fair-code licensed workflow automation platform.

  • Installation
  • Operations
  • Configuration
  • Usage
  • Compatibility
  • Resources
  • Version History

Installation

Follow the installation guide in the n8n community nodes documentation.

npm install n8n-nodes-inner-batched-chain-summarization

Operations

The node supports three powerful summarization strategies:

🗺️ Map-Reduce (Recommended)

Best for: Large documents with many chunks

  • Process: Summarizes each document/chunk individually in parallel batches, then combines all summaries
  • Batching: Full batching support with configurable delays between batches
  • Scalability: High - handles large document sets efficiently
  • API Calls: Most calls (one per document + one combine)
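The map-reduce flow above can be sketched as follows. This is a minimal illustration, not the node's actual implementation; `summarize` is a hypothetical stand-in for an LLM call, and the function names are illustrative.

```typescript
// Hypothetical sketch of map-reduce summarization with batching.
// `summarize` stands in for a single LLM call.
type Summarize = (text: string) => Promise<string>;

async function mapReduceSummarize(
  chunks: string[],
  summarize: Summarize,
  batchSize = 5,
  delayMs = 0,
): Promise<string> {
  const partials: string[] = [];
  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    // Map step: summarize each chunk in the batch in parallel.
    partials.push(...(await Promise.all(batch.map((c) => summarize(c)))));
    // Pause between batches to respect API rate limits.
    if (delayMs > 0 && i + batchSize < chunks.length) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  // Reduce step: one final call combines all partial summaries.
  return summarize(partials.join('\n'));
}
```

This is where the "one call per document + one combine" cost comes from: `chunks.length` map calls plus a single reduce call.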

🔄 Refine

Best for: Documents where order and context matter

  • Process: Iteratively refines summary by processing each subsequent document against the existing summary
  • Batching: Partial batching support with delays between refinement batches
  • Scalability: Medium - good for contextual content
  • API Calls: Moderate (one per document)
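The refine strategy is inherently sequential, which is why its batching support is only partial. A minimal sketch, with `refineStep` as a hypothetical stand-in for an LLM call that folds a new document into the running summary:

```typescript
// Hypothetical sketch of the refine strategy: each document is merged
// into the existing summary in order, one LLM call per document.
type RefineStep = (summary: string, doc: string) => Promise<string>;

async function refineSummarize(
  docs: string[],
  refineStep: RefineStep,
  delayMs = 0,
): Promise<string> {
  let summary = '';
  for (let i = 0; i < docs.length; i++) {
    // Sequential by design: each step depends on the previous summary.
    summary = await refineStep(summary, docs[i]);
    if (delayMs > 0 && i < docs.length - 1) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return summary;
}
```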

📦 Stuff

Best for: Small documents that fit within model context limits

  • Process: Combines all documents into a single prompt for one LLM call
  • Batching: No batching (single call)
  • Scalability: Low - limited by context window
  • API Calls: Minimal (only one)
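Stuff is the simplest of the three: concatenate everything and make one call. A sketch under the same assumptions as above (`summarize` is a hypothetical LLM call):

```typescript
// Hypothetical sketch of the stuff strategy: one prompt, one LLM call.
// Fails in practice if the combined text exceeds the model's context window.
type Summarize = (text: string) => Promise<string>;

async function stuffSummarize(
  docs: string[],
  summarize: Summarize,
): Promise<string> {
  return summarize(docs.join('\n\n'));
}
```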

Configuration

Data Input Modes

  • Use Node Input (JSON): Process JSON data from the previous node
  • Use Node Input (Binary): Process binary files from the previous node
  • Use Document Loader: Use a dedicated document loader sub-node with advanced options

Chunking Strategies

  • Simple: Built-in recursive character text splitter with configurable size and overlap
  • Advanced: Use an external text splitter sub-node for complex requirements
  • None: Process documents without chunking (document loader mode only)

Batching & Rate Limiting

  • Batch Size: Number of documents to process simultaneously (default: 5, range: 1-1000)
  • Delay Between Batches: Milliseconds to wait between batches (default: 0, max: 10 minutes)
  • Input Validation: Automatic bounds checking prevents infinite loops and invalid configurations
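The documented bounds (batch size 1-1000, delay up to 10 minutes) can be enforced with simple clamping. This is an illustrative sketch of what such validation might look like, not the node's actual code:

```typescript
// Hypothetical bounds checking mirroring the documented ranges.
const MAX_BATCH_SIZE = 1000;
const MAX_DELAY_MS = 10 * 60 * 1000; // 10 minutes

function clampBatchOptions(batchSize: number, delayMs: number) {
  return {
    // Force an integer batch size of at least 1 so loops always advance.
    batchSize: Math.min(Math.max(Math.trunc(batchSize) || 1, 1), MAX_BATCH_SIZE),
    // Negative delays make no sense; cap at the documented maximum.
    delayMs: Math.min(Math.max(delayMs, 0), MAX_DELAY_MS),
  };
}
```

Clamping the batch size to at least 1 is what rules out the infinite-loop case: a batch size of 0 would never advance the loop index.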

Custom Prompts

Full customization support for all summarization methods:

  • Map-Reduce: Individual summary prompt + combine prompt
  • Refine: Initial prompt + refinement prompt
  • Stuff: Single summarization prompt

Usage

Basic Workflow

  1. Connect your data source (previous node, binary files, or document loader)
  2. Choose summarization method based on your document size and requirements
  3. Configure batching to respect your API provider's rate limits
  4. Set chunking strategy if processing large documents
  5. Customize prompts if needed for specific summarization requirements

Rate Limiting Best Practices

Start with conservative settings and adjust based on your API provider:

Batch Size: 2-3 documents
Delay: 1000-2000ms between batches

The pause functionality helps prevent rate limit violations during processing.

Example Configurations

For Large Document Sets:

  • Method: Map-Reduce
  • Batch Size: 5
  • Delay: 1000ms
  • Chunking: Simple (1000 chars, 200 overlap)

For Narrative Content:

  • Method: Refine
  • Batch Size: 3
  • Delay: 500ms
  • Chunking: Advanced (with custom splitter)

For Quick Processing:

  • Method: Stuff
  • No batching required
  • Ensure documents fit in context window

Error Handling

Enable "Continue on Fail" in node settings to handle:

  • API rate limit errors gracefully
  • Individual document processing failures
  • Network timeout issues
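In the "Continue on Fail" style, a failed item carries its error forward instead of aborting the whole run. A minimal sketch of that pattern (names and shapes are illustrative, not the node's actual output format):

```typescript
// Hypothetical per-item error handling in the "Continue on Fail" style.
type ItemResult = { summary?: string; error?: string };

async function summarizeEach(
  docs: string[],
  summarize: (text: string) => Promise<string>,
  continueOnFail: boolean,
): Promise<ItemResult[]> {
  const results: ItemResult[] = [];
  for (const doc of docs) {
    try {
      results.push({ summary: await summarize(doc) });
    } catch (err) {
      // Without "Continue on Fail", the first error stops the run.
      if (!continueOnFail) throw err;
      results.push({ error: (err as Error).message });
    }
  }
  return results;
}
```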

Compatibility

  • Minimum n8n version: 1.0.0
  • Node.js version: ≥20.15.0
  • Tested with: n8n 1.82.0+

Dependencies

  • LangChain: ^0.3.34 (document processing and LLM integration)
  • LangChain Core: ^0.3.76 (base functionality)
  • LangChain Text Splitters: ^0.1.0 (chunking support)

Resources

Version History

0.1.0 (Current)

  • Initial Release: Complete batched chain summarization implementation
  • Features: Three summarization methods (map-reduce, refine, stuff)
  • Batching: Intelligent batching with configurable delays and rate limiting
  • Testing: Comprehensive test suite with 111+ tests covering all functionality
  • Performance: Optimized for large document processing with pause functionality
  • Validation: Input validation prevents infinite loops and invalid configurations
  • Architecture: Shared constants system prevents circular dependencies

Upcoming Features

  • Enhanced document format support
  • Advanced prompt template management
  • Integration with more LangChain document loaders
  • Performance monitoring and metrics

Author: Morgan C. Nicholson ([email protected])

License: MIT

Repository: GitHub