RLM-PRO MCP Server
Pure TypeScript implementation of Recursive Language Models for corpus analysis.
✨ What is RLM?
RLM (Recursive Language Model) is a powerful technique for analysing unstructured data corpora. Instead of simply feeding context to an LLM, RLM enables the model to write and execute code to explore data iteratively—much like a human researcher would.
How it Works
```
┌─────────────────────────────────────────────────────────────┐
│ 1. Build Context                                            │
│    Read files → Format → Create searchable index            │
├─────────────────────────────────────────────────────────────┤
│ 2. LLM Analysis                                             │
│    Send context + query → LLM reasons about what to explore │
├─────────────────────────────────────────────────────────────┤
│ 3. Code Execution                                           │
│    LLM writes JavaScript → Execute in sandbox → Get results │
├─────────────────────────────────────────────────────────────┤
│ 4. Iterate                                                  │
│    Results → LLM → More code → Repeat until final answer    │
└─────────────────────────────────────────────────────────────┘
```
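In code, the loop is roughly the following. Every name in this sketch is an illustrative stand-in (with stub implementations so it type-checks and runs), not the package's actual internals:

```ts
// Condensed sketch of the four-step loop in the diagram above.
type Step = { code?: string; finalAnswer?: string };

// Stand-ins for the real context builder, LLM client, and sandbox.
const buildContext = async (path: string): Promise<string> =>
  `<formatted corpus for ${path}>`;                 // 1. Build Context
const callLLM = async (ctx: string, query: string, obs: string): Promise<Step> =>
  obs ? { finalAnswer: `answer to "${query}"` } : { code: "print('exploring')" };
const runSandboxed = async (code: string): Promise<string> =>
  `<sandbox output of ${code}>`;                    // isolated-vm in the real server

async function rlmLoop(path: string, query: string, maxIterations = 30): Promise<string> {
  const context = await buildContext(path);
  let observations = "";
  for (let i = 0; i < maxIterations; i++) {
    const step = await callLLM(context, query, observations); // 2. LLM Analysis
    if (step.finalAnswer !== undefined) return step.finalAnswer;
    const output = await runSandboxed(step.code ?? "");       // 3. Code Execution
    observations += `\n[iteration ${i}] ${output}`;           // 4. Iterate
  }
  throw new Error("maxIterations reached without a final answer");
}

console.log(await rlmLoop(".", "What is this repo about?", 5));
```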
🚀 Quick Start
Via npx (No Installation Required)
```bash
npx @superadnim/rlm-pro-mcp
```

Claude Desktop Configuration
Add to your Claude Desktop config (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):
```json
{
  "mcpServers": {
    "rlm-pro": {
      "command": "npx",
      "args": ["-y", "@superadnim/rlm-pro-mcp"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```

Environment Variables
| Variable | Description |
|----------|-------------|
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |

At least one of the two keys is required.
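Since at least one key must be set, the server has to pick a backend at startup. The sketch below is an assumption about how such a check might look (preferring OpenAI when both keys are present), not the package's verified logic:

```ts
// Illustrative startup check: choose a default backend from the environment.
// The preference order here is an assumption, not the package's actual code.
const backend: "openai" | "anthropic" =
  process.env.OPENAI_API_KEY ? "openai"
  : process.env.ANTHROPIC_API_KEY ? "anthropic"
  : (() => {
      throw new Error("Set OPENAI_API_KEY or ANTHROPIC_API_KEY");
    })();

console.log(`Using ${backend} backend`);
```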
🛠️ Available Tools
rlm_analyze
The main analysis tool. Uses recursive exploration to answer questions about data.
```ts
{
  path: string,                     // Path to analyse
  query: string,                    // Question or task
  backend?: "openai" | "anthropic", // LLM provider
  model?: string,                   // Specific model
  maxIterations?: number,           // Max exploration cycles (default: 30)
  maxContextSize?: number,          // Max context bytes (default: 500000)
  verbose?: boolean                 // Enable logging
}
```

Example prompts:
- "What is the main purpose of this codebase?"
- "How does the authentication system work?"
- "Find all API endpoints and document them"
- "What are the dependencies and why are they used?"
rlm_context
Extract formatted context without LLM calls. Useful for previewing data.
```ts
{
  path: string,          // Path to extract from
  maxFileSize?: number,  // Max per-file size (default: 100000)
  maxTotalSize?: number, // Max total size (default: 500000)
  includeTree?: boolean, // Include directory tree
  pattern?: string       // Glob filter pattern
}
```

rlm_list_dir
List directory contents (files and subdirectories).
```ts
{
  path: string // Directory to list
}
```

rlm_search_files
Search for files matching a glob pattern.
```ts
{
  path: string,   // Base search path
  pattern: string // Glob pattern (e.g., "**/*.ts")
}
```

🔒 Security
RLM-PRO uses isolated-vm for secure JavaScript execution (see the sketch after this list):
- Memory isolation: Each sandbox has its own memory space
- CPU limits: Configurable timeouts prevent infinite loops
- Path sandboxing: File access restricted to target directory
- No network access: Sandbox cannot make arbitrary network requests
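A minimal sketch of how these guarantees map onto the isolated-vm API. It is illustrative rather than the package's actual source; SANDBOX_ROOT, resolveSafe, and runSandboxed are hypothetical names:

```ts
import ivm from "isolated-vm";
import * as path from "node:path";

const SANDBOX_ROOT = path.resolve("./target-corpus"); // hypothetical target dir

// Path sandboxing: resolve against the root and refuse anything outside it.
function resolveSafe(p: string): string {
  const resolved = path.resolve(SANDBOX_ROOT, p);
  if (resolved !== SANDBOX_ROOT && !resolved.startsWith(SANDBOX_ROOT + path.sep)) {
    throw new Error(`Path escapes sandbox: ${p}`);
  }
  return resolved;
}

async function runSandboxed(code: string): Promise<unknown> {
  // Memory isolation: each isolate owns a separate V8 heap, capped in MB.
  const isolate = new ivm.Isolate({ memoryLimit: 128 });
  const context = await isolate.createContext();
  // No Node APIs exist inside the isolate, so there is no network or file
  // access unless the host explicitly bridges a function in (via resolveSafe).
  const script = await isolate.compileScript(code);
  // CPU limit: run() aborts the script once the timeout (in ms) elapses.
  return script.run(context, { timeout: 5_000 });
}
```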
📁 Project Structure
```
rlm-pro-mcp/
├── src/
│ ├── index.ts # MCP server entry point
│ ├── rlm/
│ │ ├── core.ts # Main RLM loop
│ │ ├── context-builder.ts # File reading/formatting
│ │ ├── sandbox.ts # JS code execution
│ │ ├── llm-client.ts # OpenAI/Anthropic client
│ │ └── prompts.ts # System prompts
│ └── tools/
│ ├── analyze.ts # rlm_analyze tool
│ └── context.ts # rlm_context tool
├── package.json
├── tsconfig.json
└── README.md
```

🔧 Development
Prerequisites
- Node.js 18+
- npm or yarn
Setup
```bash
git clone https://github.com/CG-Labs/RLM-PRO
cd rlm-pro-mcp
npm install
```

Build
```bash
npm run build
```

Test Locally
```bash
# Set API key
export OPENAI_API_KEY="sk-..."
# Run server
node dist/index.js
```

Test with MCP Inspector
```bash
npx @modelcontextprotocol/inspector node dist/index.js
```

🆚 Comparison with Python RLM
| Aspect | Python RLM | TypeScript RLM |
|--------|------------|----------------|
| Installation | Requires Python + uv | Just npx |
| Code execution | Python in subprocess | JavaScript in isolated-vm |
| Distribution | PyPI + npm wrapper | npm only |
| Startup time | ~2-3s (Python init) | ~100ms |
| Package size | ~50MB (with deps) | ~5MB |
| Security | Subprocess isolation | V8 isolate |
📖 API Reference
JavaScript Sandbox Functions
Code executed by RLM has access to these async functions:
```js
// File Operations
await readFile(path)       // Read file contents → string
await listDir(path)        // List directory → { files: [], directories: [] }
await searchFiles(pattern) // Glob search → string[]
await fileExists(path)     // Check existence → boolean

// Analysis
await llmCall(prompt)      // Sub-LLM call → string

// Output
print(message)             // Add to output
setResult(data)            // Set structured result
```
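The same surface expressed as TypeScript declarations can help when reasoning about what generated code may rely on. These are inferred from the list above, not copied from the package:

```ts
// Hypothetical ambient declarations for the sandbox globals (inferred).
declare function readFile(path: string): Promise<string>;
declare function listDir(path: string): Promise<{ files: string[]; directories: string[] }>;
declare function searchFiles(pattern: string): Promise<string[]>;
declare function fileExists(path: string): Promise<boolean>;
declare function llmCall(prompt: string): Promise<string>;
declare function print(message: string): void;
declare function setResult(data: unknown): void;
```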
Example LLM-Generated Code
```js
// Explore the codebase structure
const dirs = await listDir('.');
print('Root: ' + dirs.directories.join(', '));
// Find TypeScript files
const tsFiles = await searchFiles('**/*.ts');
print('Found ' + tsFiles.length + ' TypeScript files');
// Read and analyse package.json
const pkg = await readFile('package.json');
const parsed = JSON.parse(pkg);
print('Package: ' + parsed.name);
// Use sub-LLM for complex analysis
const analysis = await llmCall('Summarise: ' + pkg);
print('Summary: ' + analysis);
```

📄 License
MIT © Anthropic
🤝 Contributing
Contributions welcome! Please read our contributing guidelines first.
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
Built with ❤️ by Anthropic
