pi-turtle-rlm
v0.1.5
Recursive language model runtime for Pi — a persistent JS workspace inside the agent, with structured child calls via llmQuery, prompt modes, and session stats.
Turtles all the way down — each turtle is an llmQuery call, each shell is globalThis, and the base case is maxDepth (or you just run out of tokens).
Install
Recommended — Pi package manager (see Pi packages):
pi install npm:pi-turtle-rlm

Pin a version:

pi install npm:pi-turtle-rlm@0.1.5

From git:

pi install git:github.com/jpstrikesback/pi-rlm

From a local clone (contributors or vendoring):
git clone https://github.com/jpstrikesback/pi-rlm.git
cd pi-rlm
npm install
npm run build
pi install ./
# or one-off during development: pi -e ./index.ts

Project-local Pi config (.pi/settings.json or pi install -l):
{
"packages": ["npm:pi-turtle-rlm"]
}

After install, start Pi as usual from your repo; the extension loads through Pi’s package resolution. Use /reload after upgrading the package.
Quick start
- Turn RLM on: /rlm
- The same command takes subcommands:
  - /rlm balanced | /rlm coordinator | /rlm aggressive — set the prompt mode
  - /rlm inspect — inspect runtime globals
  - /rlm reset — clear the runtime
When RLM is on you get a pink RLM MODE widget (with mode label) and footer stats: depth, rlm_exec count, child queries / turns, runtime variable count, and non-RLM tool calls (“leaf” count).
Why RLM?
Large refactors need more room for context than a single chat transcript. This extension gives the model a persistent workspace to keep intermediate state, recurse with llmQuery, and avoid re-deriving the same context over and over.
Tools
- rlm_exec — run JS in the persistent runtime
- rlm_inspect — inspect runtime globals
- rlm_reset — clear the runtime
Inside rlm_exec you can use:
- inspectGlobals()
- final(value)
- await llmQuery(request)
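The recursion shape these helpers enable can be sketched with stand-in stubs (illustrative only — the real llmQuery, final, and maxDepth handling live in the pi-turtle-rlm runtime, not here):

```javascript
// Stand-in stubs, NOT the package's API: they only mimic the shape of
// depth-limited recursion that the runtime's real llmQuery provides.
const maxDepth = 3; // hypothetical limit for this sketch

async function llmQuery(request, depth = 0) {
  if (depth >= maxDepth) {
    // Base case: stop recursing and return a terminal result.
    return { answer: `hit maxDepth for: ${request.prompt}`, depth };
  }
  // A real child call would run a model turn; the stub just recurses.
  return llmQuery(request, depth + 1);
}

const resultPromise = llmQuery({ prompt: "Analyze the auth module" });
resultPromise.then((r) => console.log(r.depth)); // logs 3
```

Each child call either bottoms out at the depth limit or returns a structured result the parent can keep in the persistent runtime.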
Use in an extension
import rlmExtension from "pi-turtle-rlm";
export default rlmExtension;

Or configure defaults:
import { createRlmExtension } from "pi-turtle-rlm";
export default createRlmExtension({
maxDepth: 3,
promptMode: "coordinator",
});

Safety
The worker uses node:vm with several globals stripped. It is not a security sandbox — treat it like running code in your user account.
Advanced runtime usage
Inside rlm_exec, the runtime also exposes llmQuery(...) for recursive child calls.
await llmQuery({
prompt: "Analyze the auth module",
state: { files: globalThis.authFiles },
tools: "read-only",
budget: "medium",
});

Inspiration
This project is inspired in part by AxLLM’s RLM ideas.
Development
git clone https://github.com/jpstrikesback/pi-rlm.git
cd pi-rlm
npm install
npm test
npm run build
npm run smoke

License
Apache-2.0 — see LICENSE.
