@superadnim/rlm-pro-mcp

v1.0.2

RLM-PRO MCP Server - Recursive Language Model for corpus analysis

RLM-PRO MCP Server

Pure TypeScript implementation of Recursive Language Models for corpus analysis.

✨ What is RLM?

RLM (Recursive Language Model) is a powerful technique for analysing unstructured data corpora. Instead of simply feeding context to an LLM, RLM enables the model to write and execute code to explore data iteratively—much like a human researcher would.

How it Works

┌─────────────────────────────────────────────────────────────────┐
│  1. Build Context                                               │
│     Read files → Format → Create searchable index               │
├─────────────────────────────────────────────────────────────────┤
│  2. LLM Analysis                                                │
│     Send context + query → LLM reasons about what to explore    │
├─────────────────────────────────────────────────────────────────┤
│  3. Code Execution                                              │
│     LLM writes JavaScript → Execute in sandbox → Get results    │
├─────────────────────────────────────────────────────────────────┤
│  4. Iterate                                                     │
│     Results → LLM → More code → Repeat until final answer       │
└─────────────────────────────────────────────────────────────────┘
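
Conceptually, steps 2-4 form a loop. The sketch below is illustrative only: buildContext, callLLM, and runInSandbox are hypothetical stand-ins for the real implementations in src/rlm/.

// Hypothetical helper signatures; the real code lives in src/rlm/.
declare function buildContext(path: string): Promise<string>;
declare function callLLM(
  context: string,
  query: string,
  history: string[]
): Promise<{ finalAnswer?: string; code: string }>;
declare function runInSandbox(code: string): Promise<string>;

// The recursive loop: reason, execute, feed results back, repeat.
async function rlmLoop(path: string, query: string, maxIterations = 30): Promise<string> {
  const context = await buildContext(path);                // 1. Build context
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const reply = await callLLM(context, query, history);  // 2. LLM analysis
    if (reply.finalAnswer !== undefined) return reply.finalAnswer;
    const output = await runInSandbox(reply.code);         // 3. Code execution
    history.push(reply.code, output);                      // 4. Iterate
  }
  throw new Error("No final answer within maxIterations");
}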

🚀 Quick Start

Via npx (No Installation Required)

npx @superadnim/rlm-pro-mcp

Claude Desktop Configuration

Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "rlm-pro": {
      "command": "npx",
      "args": ["-y", "@CG-Labs/RLM-PRO"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| OPENAI_API_KEY | OpenAI API key | One of the two is required |
| ANTHROPIC_API_KEY | Anthropic API key | One of the two is required |
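
A plausible sketch of how a server might pick its provider from these variables (illustrative only, not the package's actual logic):

// Pick a provider based on which API key is present (illustrative).
const backend = process.env.OPENAI_API_KEY
  ? "openai"
  : process.env.ANTHROPIC_API_KEY
    ? "anthropic"
    : undefined;
if (backend === undefined) {
  throw new Error("Set OPENAI_API_KEY or ANTHROPIC_API_KEY");
}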

🛠️ Available Tools

rlm_analyze

The main analysis tool. Uses recursive exploration to answer questions about data.

{
  path: string,          // Path to analyse
  query: string,         // Question or task
  backend?: "openai" | "anthropic",  // LLM provider
  model?: string,        // Specific model
  maxIterations?: number,  // Max exploration cycles (default: 30)
  maxContextSize?: number, // Max context bytes (default: 500000)
  verbose?: boolean      // Enable logging
}

Example prompts:

  • "What is the main purpose of this codebase?"
  • "How does the authentication system work?"
  • "Find all API endpoints and document them"
  • "What are the dependencies and why are they used?"

rlm_context

Extract formatted context without LLM calls. Useful for previewing data.

{
  path: string,          // Path to extract from
  maxFileSize?: number,  // Max per-file size (default: 100000)
  maxTotalSize?: number, // Max total size (default: 500000)
  includeTree?: boolean, // Include directory tree
  pattern?: string       // Glob filter pattern
}
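
For example, to preview only the TypeScript sources of a project, with the directory tree included (values are illustrative):

{
  "path": "./my-project",
  "pattern": "**/*.ts",
  "includeTree": true
}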

rlm_list_dir

List directory contents (files and subdirectories).

{
  path: string  // Directory to list
}

rlm_search_files

Search for files matching a glob pattern.

{
  path: string,    // Base search path
  pattern: string  // Glob pattern (e.g., "**/*.ts")
}

🔒 Security

RLM-PRO uses isolated-vm for secure JavaScript execution:

  • Memory isolation: Each sandbox has its own memory space
  • CPU limits: Configurable timeouts prevent infinite loops
  • Path sandboxing: File access restricted to target directory
  • No network access: Sandbox cannot make arbitrary network requests
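
A minimal sketch of how such limits are enforced with isolated-vm (illustrative only; the package's actual sandbox lives in src/rlm/sandbox.ts):

import ivm from "isolated-vm";

async function runSandboxed(code: string): Promise<unknown> {
  const isolate = new ivm.Isolate({ memoryLimit: 128 });  // memory cap in MB
  const context = await isolate.createContext();          // empty global scope: no fs, no fetch
  const script = await isolate.compileScript(code);
  return script.run(context, { timeout: 5_000 });         // CPU time limit in ms
}

Because the isolate starts with an empty global scope, file access and llmCall exist only as functions the host explicitly injects, which is what makes the path sandboxing enforceable.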

📁 Project Structure

rlm-pro-mcp/
├── src/
│   ├── index.ts              # MCP server entry point
│   ├── rlm/
│   │   ├── core.ts           # Main RLM loop
│   │   ├── context-builder.ts # File reading/formatting
│   │   ├── sandbox.ts        # JS code execution
│   │   ├── llm-client.ts     # OpenAI/Anthropic client
│   │   └── prompts.ts        # System prompts
│   └── tools/
│       ├── analyze.ts        # rlm_analyze tool
│       └── context.ts        # rlm_context tool
├── package.json
├── tsconfig.json
└── README.md

🔧 Development

Prerequisites

  • Node.js 18+
  • npm or yarn

Setup

git clone https://github.com/CG-Labs/RLM-PRO
cd RLM-PRO
npm install

Build

npm run build

Test Locally

# Set API key
export OPENAI_API_KEY="sk-..."

# Run server
node dist/index.js

Test with MCP Inspector

npx @modelcontextprotocol/inspector node dist/index.js

🆚 Comparison with Python RLM

| Aspect | Python RLM | TypeScript RLM |
|--------|-----------|----------------|
| Installation | Requires Python + uv | Just npx |
| Code execution | Python in subprocess | JavaScript in isolated-vm |
| Distribution | PyPI + npm wrapper | npm only |
| Startup time | ~2-3s (Python init) | ~100ms |
| Package size | ~50MB (with deps) | ~5MB |
| Security | Subprocess isolation | V8 isolate |

📖 API Reference

JavaScript Sandbox Functions

Code executed by RLM has access to these sandbox functions (the file and LLM helpers are async):

// File Operations
await readFile(path)        // Read file contents → string
await listDir(path)         // List directory → { files: [], directories: [] }
await searchFiles(pattern)  // Glob search → string[]
await fileExists(path)      // Check existence → boolean

// Analysis
await llmCall(prompt)       // Sub-LLM call → string

// Output
print(message)              // Add to output
setResult(data)             // Set structured result

Example LLM-Generated Code

// Explore the codebase structure
const dirs = await listDir('.');
print('Root: ' + dirs.directories.join(', '));

// Find TypeScript files
const tsFiles = await searchFiles('**/*.ts');
print('Found ' + tsFiles.length + ' TypeScript files');

// Read and analyse package.json
const pkg = await readFile('package.json');
const parsed = JSON.parse(pkg);
print('Package: ' + parsed.name);

// Use sub-LLM for complex analysis
const analysis = await llmCall('Summarise: ' + pkg);
print('Summary: ' + analysis);

📄 License

MIT © Anthropic

🤝 Contributing

Contributions welcome! Please read our contributing guidelines first.

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

Built with ❤️ by Anthropic