@superadnim/rlm-pro
v1.0.3
RLM PRO - Enterprise-grade Recursive Language Models for infinite context code analysis. Analyze entire codebases with AI.
# RLM PRO

Analyze any corpus of unstructured data using Recursive Language Models (RLMs), which enable LLMs to handle near-infinite context through recursive decomposition. Works with codebases, document collections, research papers, logs, and any other text-based corpus.

Based on the RLM research from the MIT OASYS lab.
## Installation

```bash
# Using npx (recommended - auto-installs the Python package from GitHub)
npx @superadnim/rlm-pro ./my-project -q "Explain the architecture"

# Or install globally
npm install -g @superadnim/rlm-pro
```

## Prerequisites

- Node.js 18+
- uv (Python package manager) - you will be prompted to install it if it is missing
- An OpenAI API key (set as an environment variable)
```bash
# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the Python package (auto-installed on first run, or install manually)
uv pip install git+https://github.com/CG-Labs/RLM-PRO.git

# Set your API key
export OPENAI_API_KEY="your-key"
```

## Usage

### Command Line
```bash
# Basic usage
npx @superadnim/rlm-pro ./my-project -q "Explain the architecture"

# Get JSON output (for programmatic use)
npx @superadnim/rlm-pro ./my-project -q "List all API endpoints" --json

# Use a specific model
npx @superadnim/rlm-pro ./my-project -q "Find potential bugs" -m gpt-5.2

# Use the Anthropic backend
npx @superadnim/rlm-pro ./my-project -q "Review this code" -b anthropic

# Verbose output for debugging
npx @superadnim/rlm-pro ./my-project -q "How does authentication work?" -v

# Only build context (no LLM call)
npx @superadnim/rlm-pro ./my-project -q "" --context-only
```

### Programmatic Usage (Node.js)
```js
const { analyzeCodebase } = require('@superadnim/rlm-pro');

async function main() {
  const result = await analyzeCodebase('./my-project', {
    query: 'Summarize the codebase structure',
    backend: 'openai',
    model: 'gpt-5.2',
  });
  console.log(result.response);
  console.log('Execution time:', result.execution_time, 'seconds');
}

main();
```

## Options
| Option | Description | Default |
|--------|-------------|---------|
| `-q, --query <query>` | Question or task to perform (required) | - |
| `-b, --backend <backend>` | LLM backend (`openai`, `anthropic`, etc.) | `openai` |
| `-m, --model <model>` | Model name | `gpt-5.2` |
| `-e, --env <env>` | Execution environment (`local`, `docker`) | `local` |
| `--max-depth <n>` | Maximum recursion depth | 1 |
| `--max-iterations <n>` | Maximum iterations | 30 |
| `--max-file-size <bytes>` | Maximum size per file, in bytes | 100000 |
| `--max-total-size <bytes>` | Maximum total context size, in bytes | 500000 |
| `--no-tree` | Exclude the directory tree from the context | - |
| `--json` | Output as JSON | - |
| `-v, --verbose` | Enable verbose output | - |
| `--context-only` | Only output the built context (no LLM call) | - |
## Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `OPENAI_API_KEY` | OpenAI API key | Yes (for the OpenAI backend) |
| `ANTHROPIC_API_KEY` | Anthropic API key | For the Anthropic backend |
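A small pre-flight check mirroring the table above lets a script fail fast before any LLM call. The backend-to-variable mapping comes from this README; the helper names are hypothetical and not part of rlm-pro:

```js
// Map each backend to its required environment variable (from the table
// above). Helper names are hypothetical, not part of the rlm-pro API.
function requiredKeyFor(backend) {
  const keys = { openai: 'OPENAI_API_KEY', anthropic: 'ANTHROPIC_API_KEY' };
  const name = keys[backend];
  if (!name) throw new Error(`unknown backend: ${backend}`);
  return name;
}

function assertApiKey(backend, env = process.env) {
  const name = requiredKeyFor(backend);
  if (!env[name]) {
    throw new Error(`${name} is not set (required for the ${backend} backend)`);
  }
}

assertApiKey('openai', { OPENAI_API_KEY: 'sk-demo' }); // passes silently
```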
## How It Works

RLM (Recursive Language Models) enables LLMs to handle near-infinite context by:

- Context Building: intelligently reading and formatting your codebase
- Recursive Decomposition: breaking complex queries into manageable sub-tasks
- Code Execution: running Python code in a sandboxed environment to explore and analyze the context
- Iterative Refinement: continuing until a complete answer is found

This allows the tool to answer complex questions about large codebases that would exceed normal context limits.
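The decomposition idea can be illustrated with a toy sketch. This is not the package's actual implementation; a trivial filter stands in for the LLM call:

```js
// Toy illustration of recursive decomposition: partition the input,
// "answer" the query per partition, then recurse over the partial answers
// until everything fits in one pass. `model` stands in for an LLM call.
function splitIntoGroups(items, groupSize) {
  const groups = [];
  for (let i = 0; i < items.length; i += groupSize) {
    groups.push(items.slice(i, i + groupSize));
  }
  return groups;
}

// Assumes groupSize >= 2 so the input shrinks at every level.
function recursiveQuery(lines, query, model, groupSize) {
  if (lines.length <= groupSize) return model(lines, query); // fits in one "context window"
  const partials = splitIntoGroups(lines, groupSize).map((g) => model(g, query));
  return recursiveQuery(partials, query, model, groupSize);
}

// Trivial stand-in model: keep lines mentioning the query term.
const grepModel = (lines, term) => lines.filter((l) => l.includes(term)).join(' | ');

const corpus = ['auth: jwt', 'db: postgres', 'auth: oauth', 'cache: redis'];
console.log(recursiveQuery(corpus, 'auth', grepModel, 2)); // "auth: jwt | auth: oauth"
```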
## Examples

### Architecture Analysis

```bash
npx @superadnim/rlm-pro ./backend -q "Describe the system architecture and key design patterns"
```

### Bug Finding

```bash
npx @superadnim/rlm-pro ./src -q "Find potential security vulnerabilities" --json
```

### Documentation Generation

```bash
npx @superadnim/rlm-pro ./api -q "Generate API documentation for all endpoints"
```

### Code Review

```bash
npx @superadnim/rlm-pro ./feature-branch -q "Review this code for best practices"
```

## License

MIT
