loop-infer

RLM - Recursive Language Models


A Clojure implementation of Recursive Language Models (RLMs) - a task-agnostic inference paradigm that enables LLMs to handle arbitrarily long contexts by storing data in a sandboxed REPL environment.

Based on the paper "Recursive Language Models" by Zhang, Kraska, and Khattab.

How It Works

Instead of passing large contexts directly to the model, RLMs:

  1. Store context in a sandboxed Clojure REPL
  2. Let the LLM write code to examine, decompose, and query the context
  3. Execute code safely and return results to the LLM
  4. Repeat until the LLM produces a final answer

The LLM can also spawn recursive LLM calls for analysis of context chunks.
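
In Clojure terms, the loop can be pictured as the sketch below. The helpers call-llm, eval-in-sandbox, and final-answer? are hypothetical stand-ins for the internals in rlm.clj, not the library's public API:

(defn rlm-loop
  "Sketch of the RLM iteration; call-llm, eval-in-sandbox, and
   final-answer? are hypothetical helpers, not the real API."
  [context query max-iterations]
  ;; Store the context in the sandboxed REPL instead of the prompt.
  (eval-in-sandbox (list 'def 'context context))
  (loop [history [{:role :user :content query}]
         i 0]
    (let [reply (call-llm history)]
      (if (or (final-answer? reply) (>= i max-iterations))
        (:answer reply)
        ;; Execute the code the LLM wrote and feed the result back.
        (let [result (eval-in-sandbox (:code reply))]
          (recur (conj history
                       {:role :assistant :content (:code reply)}
                       {:role :user :content (pr-str result)})
                 (inc i)))))))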

Installation

Via npm (Recommended)

# Run directly without installation
npx @unravel-tech/rlm-cli -c document.txt -q "Summarize this"

# Or install globally
npm install -g @unravel-tech/rlm-cli
rlm-cli -c document.txt -q "Summarize this"

From Source

Prerequisites: a recent JDK and the Clojure CLI tools.

git clone https://github.com/unravel-team/loop-infer.git
cd loop-infer

Set your API key:

export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."

CLI Usage

# Query a file
rlm -c document.txt -q "Summarize the key points"

# Query from stdin
cat large-file.txt | rlm -c - -q "Find all dates mentioned"

# Inline context
rlm -C "The capital of France is Paris." -q "What is the capital of France?"

# Query a directory of data files
rlm -d ./data -e parquet,csv -q "What patterns do you see?"

# Use a different model
rlm -m anthropic/claude-sonnet-4-20250514 -c doc.txt -q "Explain the main argument"

# Generate an HTML execution report
rlm -c doc.txt -q "Summarize" -H report.html

# Verbose output with JSON format
rlm -c doc.txt -q "Summarize" -v -o json

Server Mode

RLM can run as an HTTP server with real-time streaming support:

# Start server on default port 8080
rlm --server

# Start on custom port
rlm --server --port 3000

HTTP API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| /health | GET | Health check, returns {"status": "ok", "active-tasks": N} |
| /api/query | POST | Synchronous query, returns JSON result |
| /api/stream | POST | Streaming query with SSE progress events |
| /api/task/:id | GET | Get task status |
| /api/task/:id/report | GET | Get HTML execution report |
| /api/task/:id/live | GET | Live-updating HTML report page |

Query Example

curl -X POST http://localhost:8080/api/query \
  -H "Content-Type: application/json" \
  -d '{
    "context": "The quick brown fox jumps over the lazy dog.",
    "query": "What animals are mentioned?",
    "model": "openai/gpt-4o",
    "trace": true
  }'

Response:

{
  "answer": "The animals mentioned are a fox and a dog.",
  "costs": {
    "total-cost": 0.00123,
    "root-llm-calls": 2,
    "recursive-llm-calls": 0
  },
  "trace": { ... }
}
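
The same request can be made from Clojure with the JDK's built-in HTTP client; a minimal sketch, with the JSON body hand-written to avoid extra dependencies:

(import '(java.net URI)
        '(java.net.http HttpClient HttpRequest
                        HttpRequest$BodyPublishers HttpResponse$BodyHandlers))

(let [payload (str "{\"context\": \"The quick brown fox jumps over the lazy dog.\","
                   " \"query\": \"What animals are mentioned?\"}")
      request (-> (HttpRequest/newBuilder
                    (URI/create "http://localhost:8080/api/query"))
                  (.header "Content-Type" "application/json")
                  (.POST (HttpRequest$BodyPublishers/ofString payload))
                  (.build))
      response (.send (HttpClient/newHttpClient)
                      request
                      (HttpResponse$BodyHandlers/ofString))]
  ;; Prints the JSON result shown above: {"answer": ..., "costs": ...}
  (println (.body response)))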

Streaming Example

curl -X POST http://localhost:8080/api/stream \
  -H "Content-Type: application/json" \
  -d '{"context": "Hello world", "query": "What is this?"}'

Returns Server-Sent Events (SSE) with real-time progress:

  • connected - Task created, returns task ID
  • start - Query execution started
  • iteration-start - New iteration beginning
  • llm-call - Root LLM request/response
  • repl-execution - Code execution result
  • recursive-llm - Recursive LLM call
  • result - Final answer with costs
  • done - Execution complete
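
From Clojure, the stream can be consumed with the same JDK client by switching to the line-streaming body handler; a sketch:

(import '(java.net URI)
        '(java.net.http HttpClient HttpRequest
                        HttpRequest$BodyPublishers HttpResponse$BodyHandlers))

(let [payload "{\"context\": \"Hello world\", \"query\": \"What is this?\"}"
      request (-> (HttpRequest/newBuilder
                    (URI/create "http://localhost:8080/api/stream"))
                  (.header "Content-Type" "application/json")
                  (.POST (HttpRequest$BodyPublishers/ofString payload))
                  (.build))
      ;; ofLines hands back the body as a lazy stream of lines,
      ;; so events print as they arrive rather than after completion.
      response (.send (HttpClient/newHttpClient)
                      request
                      (HttpResponse$BodyHandlers/ofLines))]
  (doseq [line (iterator-seq (.iterator (.body response)))]
    (println line)))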

Live Report

Open http://localhost:8080/api/task/{task-id}/live in a browser to see a real-time updating execution report.
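
Once a task has finished, the static report can also be fetched and saved programmatically; a one-liner sketch, with TASK-ID standing in for an ID obtained from the connected event:

(spit "report.html"
      (slurp "http://localhost:8080/api/task/TASK-ID/report"))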

Library Usage

(require '[loop-infer.core :as rlm])

;; Create an RLM instance
(def r (rlm/create-rlm {:model "openai/gpt-4o"
                        :trace true
                        :verbose true}))

;; Query with context
(rlm/query r {:context "Paris is the capital of France."
              :query "What is the capital of France?"})
;; => "Paris"

;; Check costs
(rlm/cost-summary r)
;; => {:total-cost 0.00123 :root-llm-cost 0.001 :recursive-llm-cost 0.00023 ...}

;; Generate HTML report (if trace was enabled)
(rlm/write-html-report r "report.html")

;; Reset for reuse
(rlm/reset-rlm! r)

CLI Options

| Flag | Default | Description |
|------|---------|-------------|
| -m, --model | openai/gpt-4o | LLM model (provider/model format) |
| -r, --recursive-model | (same as --model) | Override model for recursive calls |
| -c, --context | - | Context file path (or - for stdin) |
| -C, --context-string | - | Inline context string |
| -d, --directory | - | Load files from directory as context |
| -p, --pattern | - | Glob pattern for directory files |
| -e, --extensions | - | Comma-separated file extensions |
| -n, --max-files | - | Maximum files to load |
| -q, --query | - | Query to answer (required in CLI mode) |
| -i, --max-iterations | 20 | Max RLM iterations |
| -v, --verbose | false | Enable verbose output |
| -o, --output | text | Output format: text, json, edn |
| -H, --html-report | - | Generate HTML report to file |
| -S, --server | false | Run as HTTP server |
| -P, --port | 8080 | Server port |
| --host | 0.0.0.0 | Server host to bind |
| -k, --api-key | - | API key (or use env vars) |

Supported Models

Any model supported by litellm-clj, specified in provider/model-name format:

  • OpenAI: openai/gpt-4o, openai/gpt-4o-mini
  • Anthropic: anthropic/claude-sonnet-4-20250514, anthropic/claude-3-5-haiku-20241022

Supported File Types

  • Text: txt, md, rst, html, xml, yaml, json, py, js, clj, etc.
  • Data: parquet, csv, tsv, json, jsonl, arrow

Building from Source

# Build uberjar
./scripts/build.sh
# or
clojure -X:uberjar

# Run the jar
java -jar loop-infer.jar --help

Publishing to npm

# Build uberjar first
./scripts/build.sh

# Install jdeploy and publish
npm install -g jdeploy
jdeploy publish

Running Tests

clojure -M:test

Architecture

src/loop_infer/
├── core.clj        # Public API and CLI
├── rlm.clj         # Core RLM iteration algorithm
├── repl.clj        # Sandboxed REPL execution
├── prompts.clj     # System prompts for LLMs
├── server.clj      # HTTP server with SSE streaming
├── trace.clj       # Execution tracing and HTML reports
└── file-loader.clj # File and directory loading

See implementation.md for detailed architecture documentation.

License

MIT