
@cloudwarriors-ai/rlm

v0.3.0

Recursive Language Model - Process massive contexts through recursive LLM decomposition

@cloudwarriors-ai/rlm

Process massive amounts of text with AI - way more than fits in a normal context window.

What Problem Does This Solve?

LLMs have context limits. Claude can handle ~200K tokens, GPT-4 around 128K. But what if you need to analyze:

  • An entire codebase (500+ files)
  • Years of log files
  • A collection of documents

You can't just paste it all in. RLM solves this.

How It Works

Instead of trying to cram everything into one prompt, RLM:

  1. Gives your data to an LLM as a Python variable
  2. Asks the LLM to write code to analyze it
  3. Runs that code in a safe sandbox
  4. If needed, recursively processes chunks

The LLM becomes a programmer that writes its own analysis tools.

Your huge context (10MB of code)
              │
              ▼
     LLM writes Python to
     analyze and chunk it
              │
              ▼
      Sandbox runs code
              │
              ▼
         Answer

Installation

npm install @cloudwarriors-ai/rlm

Requirement: You need an OpenRouter API key.

Basic Usage

import { createRLM } from '@cloudwarriors-ai/rlm';
import fs from 'node:fs';

// Create an RLM instance
const rlm = createRLM({
  apiKey: process.env.OPENROUTER_API_KEY,
});

// Query with your massive context
const result = await rlm.query(
  fs.readFileSync('./huge-codebase.txt', 'utf-8'),
  'Find all the security vulnerabilities'
);

// Get the answer
if (result.success) {
  console.log(result.answer);
  console.log(`Cost: $${result.usage.costUsd.toFixed(4)}`);
} else {
  console.error('Failed:', result.error);
}

Configuration Options

const rlm = createRLM({
  // Required
  apiKey: process.env.OPENROUTER_API_KEY,

  // Optional - defaults shown
  model: 'anthropic/claude-sonnet-4',

  config: {
    maxRecursionDepth: 5,    // How many levels deep it can go
    maxCostUsd: 10.0,        // Stop if cost exceeds this
    maxTokens: 1000000,      // Total token budget
    timeoutMs: 300000,       // 5 minute timeout
  },
});

What You Get Back

const result = await rlm.query(context, question);

result.sessionId    // Unique ID for this query
result.success      // true if it worked
result.answer       // The LLM's answer
result.error        // Error message if failed
result.usage        // Token counts and cost
result.trace        // Step-by-step execution log
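
The field listing above suggests a result shape along these lines. This interface is inferred from the listing, not copied from the package's published type declarations, and the `usage` sub-fields beyond `costUsd` are assumptions:

```typescript
// Result shape inferred from the field listing above (a sketch, not the published types)
interface RLMResult {
  sessionId: string;
  success: boolean;
  answer?: string;
  error?: string;
  usage: { costUsd: number; inputTokens?: number; outputTokens?: number };
  trace: unknown[];
}

// Type guard: narrows a result to one with a definite answer
function isSuccess(r: RLMResult): r is RLMResult & { answer: string } {
  return r.success && typeof r.answer === 'string';
}
```

A guard like this lets downstream code use `result.answer` without repeated null checks.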

Environment Variables

You can also configure via environment:

OPENROUTER_API_KEY=sk-or-...     # Required
RLM_MODEL=anthropic/claude-sonnet-4
RLM_MAX_DEPTH=5
RLM_MAX_COST_USD=10
RLM_TIMEOUT_SECONDS=300
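
When both an explicit option and an environment variable are set, a reasonable assumption (worth verifying against the package's behavior) is that the explicit option wins. A minimal sketch of that precedence pattern — `resolveModel` is a hypothetical helper, not part of the package's API:

```typescript
// Hypothetical precedence helper (not the package's API):
// explicit option first, then environment variable, then the documented default.
function resolveModel(
  explicit?: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return explicit ?? env.RLM_MODEL ?? 'anthropic/claude-sonnet-4';
}

console.log(resolveModel('openai/gpt-4o')); // explicit option takes priority
console.log(resolveModel());                // falls back to RLM_MODEL, then the default
```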

Real-World Example

Analyzing an entire codebase:

import { createRLM } from '@cloudwarriors-ai/rlm';
import { readdir, readFile } from 'node:fs/promises';
import { join } from 'node:path';

async function analyzeCodebase(dir: string) {
  // Gather all source files
  const files = await readdir(dir, { recursive: true });
  const sourceFiles = files.filter(
    f => (f.endsWith('.ts') || f.endsWith('.js')) && !f.includes('node_modules') // skip dependencies
  );

  // Read contents
  let context = '';
  for (const file of sourceFiles) {
    const content = await readFile(join(dir, file), 'utf-8');
    context += `\n### ${file}\n\`\`\`\n${content}\n\`\`\`\n`;
  }

  // Analyze with RLM
  const rlm = createRLM({
    apiKey: process.env.OPENROUTER_API_KEY,
  });

  const result = await rlm.query(
    context,
    'Analyze this codebase. What are the main components? Any code smells or issues?'
  );

  return result.answer;
}

Error Handling

import { createRLM, LimitExceededError, LLMError } from '@cloudwarriors-ai/rlm';

const rlm = createRLM({ apiKey: process.env.OPENROUTER_API_KEY });

try {
  const result = await rlm.query(context, question);

  if (!result.success) {
    console.error('Query failed:', result.error);
  }
} catch (error) {
  if (error instanceof LimitExceededError) {
    console.error('Hit a limit:', error.message);
  } else if (error instanceof LLMError) {
    console.error('LLM error:', error.message);
  } else {
    throw error;
  }
}

License

MIT