@open-matrix/omega-lisp
v0.1.0
Omega — Lisp dialect with CESK evaluator, LLM oracle protocol, and session replay
OmegaLLM
A REPL where LLM calls are first-class operations and state persists across sessions.
OmegaLLM is the persistent, auditable REPL for agentic LLM workflows that keeps state alive across commands, sessions, and recovery.
What This Solves
Most LLM agent frameworks have these problems:
- Context resets every call - agents lose memory between operations
- Python/Node REPLs don't persist - close the terminal, lose all state
- File I/O dominates - constant serialization instead of in-memory work
- Manual retry logic - if/else chains for every potential failure
- No debugger - can't step through LLM chains or see what went wrong
OmegaLLM addresses this with:
- Persistent sessions: Data structures stay in memory across tool calls and crashes
- Parallel operations: Write (map function list) instead of serial loops
- Auto-backtracking: (amb ...) + (require ...) automatically tries options until one works
- Full debugger: Step through execution, set breakpoints, time-travel to any step
- Deterministic replay: Same inputs produce same outputs, every LLM call has receipts
;; Traditional agent: 100 serial tool calls, loses context
for file in files:
content = read_file(file)
result = llm_analyze(content)
;; OmegaLLM: one parallel expression, state persists
(map (lambda (file)
       (let ((content (effect file.read.op file)))
         (effect infer.op (list "Analyze: " content))))
     files)

Example: Persistent State Across Tool Calls
Most frameworks lose state between calls. OmegaLLM keeps it:
# Session 1: Build up analysis, save it
$ npm run omega-fast
Omega> (define files (list "auth.ts" "db.ts" "api.ts"))
Omega> (define analysis (map analyze-file files))
Omega> :session save code-review
Omega> :quit
# Session 2: Hours later, pick up where you left off
$ npm run omega-fast
Omega> :session load code-review
Omega> :session goto 5 # Restore environment
Omega> analysis # Still there!
=> ((auth.ts issues: [...]) (db.ts issues: [...]) ...)

No serialization to files. No context reconstruction. Just persistent in-memory state.
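The session behavior above can be sketched as a toy in-memory snapshot store. This is a conceptual illustration in plain Python with hypothetical names (SessionStore is not OmegaLLM's API); the real REPL snapshots full evaluator state, not just a dictionary:

```python
import copy

class SessionStore:
    """Toy session registry: save and load named environment snapshots."""
    def __init__(self):
        self._sessions = {}

    def save(self, name, env):
        # Deep-copy so later mutation of the live env doesn't alter the snapshot.
        self._sessions[name] = copy.deepcopy(env)

    def load(self, name):
        return copy.deepcopy(self._sessions[name])

# "Session 1": build up state, save it under a name
store = SessionStore()
env = {"files": ["auth.ts", "db.ts", "api.ts"]}
env["analysis"] = [(f, "issues: [...]") for f in env["files"]]
store.save("code-review", env)

# "Session 2": restore and keep working where you left off
restored = store.load("code-review")
print(restored["analysis"][0][0])  # -> auth.ts
```

The deep copy is the important detail: a snapshot must be immune to later edits of the live environment, which is also what makes `:session goto` safe.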
Get Running in 60 Seconds
pnpm install && pnpm run build # Install & build
pnpm run demo-instant # Proves it works - NO API key needed!
# With LLM integration (Codex = zero cost via ChatGPT Pro):
pnpm run demo # See full LLM features

Invocation Tiers
| Tier | Command Path | Example |
|---|---|---|
| Tier 1 (preferred) | Installed omega bin | omega --cmd "(+ 1 2)" |
| Tier 2 | pnpm dlx fast path | pnpm dlx omega-llm omega --cmd "(+ 1 2)" |
| Tier 3 | npx compatibility fallback | npx omega-llm omega --cmd "(+ 1 2)" |
Reference docs:
- Install guide: docs/install.md
- Release policy: docs/release-policy.md
- Security policy: SECURITY.md
- Demo Index: All 39 demos organized by category. Start here!
- Demo Gallery: See 60 working demos with live LLM outputs
- Full Manual: 37 chapters, SICP for LLMs
What Is This?
OmegaLLM is a REPL where LLM calls are first-class operations and state persists across sessions.
Instead of writing glue code that loses context every call, you write programs using SICP patterns (map, filter, streams, backtracking) over LLM operations. The runtime handles parallelization, retry, budgeting, and debugging.
Repository: github.com/hypnotranz/Omega
Table of Contents
- TL;DR
- Show Me The Cool Stuff - Start here!
- Quick Start
- Common Gotchas
- Features at a Glance
- The Manual (37 chapters)
- Demo Gallery (60 demos)
- REPL Guide
- Sessions: Persistent State
- Core Primitives
- CLI Options
- Theoretical Foundations - The formal vocabulary
Features at a Glance
| Category | What You Get |
|----------|--------------|
| LLM Calls | (effect infer.op "prompt") - LLM inference as a first-class operation |
| Agentic Mode | :ask "question" - LLM with tool-use that can eval code iteratively |
| Agent Stack | LLMs see full call stack - who called them and why (multi-agent orchestration) |
| Higher-Order | map, filter, fold over LLM operations |
| Backtracking | amb operator - generate candidates, validate, auto-backtrack on failure |
| Lazy Streams | Infinite sequences, only force what you need |
| Debugger | :debug, :step, :break, :state - step through execution |
| Time Travel | :goto N, :back, :trace - jump to any point in execution |
| Sessions | :session save/load/goto - persistent state across restarts |
| Snapshots | :save, :restore - checkpoint and restore evaluator state |
| Receipts | Every LLM call produces auditable provenance |
| OPR Kernels | :opr-run - run structured inference programs |
| Budget/Policy | Enforce spending limits and capability restrictions |
| Language Building | make-evaluator, eval, register-macro - coin DSLs on the fly |
| Solve Language | solve, goal, artifact - recursive problem decomposition (HETA) |
| Co-Recursive Tower | Lisp -> LLM -> Lisp -> LLM - mutual recursion between symbolic and neural |
| Fixpoint Iteration | fixpoint, fixpoint-detect-cycle, rewrite-fixpoint - convergence |
| OPR Callbacks | callback.eval_lisp - kernels call back into Lisp runtime (THE KEYSTONE) |
| Build Effect | (effect build.run.op) - run builds with structured error capture for self-healing |
| Workflow DSL | (run-pipeline p) - declarative pipelines with topological sort, on-fail handlers, fixpoint recovery |
| Package System | (require 'plm/search) - modular libraries with idempotent loading |
| PLM Document Search | (search-document doc question) - amb-based RLM-style chunking with automatic backtracking |
| Problem Recognizer | (pr.recognize+solve story) - NL to schema to solve pipeline |
| Agent Spawning | (effect agent.spawn.op ...) - multi-provider agent dispatch |
| Self-Healing Builds | (self-healing-build n) - fixpoint build error correction |
| Agent Selection amb | (amb-agent-fix error) - nondeterministic agent fallback |
| Semantic Cache | (cached-infer prompt) - LLM response caching |
| Smart Inference | (infer-with-model prompt model) - dynamic model selection/routing |
Run :help in the REPL to see all commands.
For Claude agents: See CLAUDE.md for quick reference on Emacs integration and capabilities.
Show Me The Cool Stuff
1. Structured Data Extraction (with confidence + source citations!)
npm run omega-fast -- --cmd ':opr-run opr.extract.v1 {"text":"John Smith ([email protected]) called about order #12345 on Jan 15.","schema":{"name":"string","email":"string","order_id":"string"}}'

Output:
{
"data": { "name": "John Smith", "email": "[email protected]", "order_id": "12345" },
"confidence": { "name": 0.98, "email": 0.95, "order_id": 0.90 },
"sources": {
"name": "line 1: 'John Smith'",
"email": "line 1: '([email protected])'",
"order_id": "line 1: 'order #12345'"
}
}

2. Agentic Mode - LLM Writes & Runs Code
Omega> :ask "Define a fibonacci function and compute fib(10)"
; LLM writes: (define (fib n) ...)
; LLM evals: (fib 10)
; LLM sees result: 55
Answer: The result of fib(10) is 55.

3. Backtracking Search with LLM Validation
;; Try tones until LLM confirms it matches "apologetic"
(let ((tone (amb "formal" "friendly" "apologetic")))
(let ((reply (effect infer.op (list "Write a " tone " response..."))))
(require (matches-tone? reply "apologetic"))
reply))
;; Auto-backtracks through options until validation passes!

4. 10 Built-in OPR Kernels
Omega> :opr-list
opr.classify.v1 - Classify with confidence scores
opr.extract.v1 - Extract structured data with sources
opr.analyze.v1 - Analyze text/code
opr.transform.v1 - Transform content
opr.validate.v1 - Validate against criteria
opr.plan.v1 - Generate plans
... and more

5. Full Debugger with Time Travel
Omega> :debug (+ (* 2 3) (* 4 5))
Omega> :run
=== DONE at step 22 === Result: 26
Omega> :trace
[0] Expr: Begin(1 exprs)
[5] Expr: Var(*) | stack=3
[10] Value: 3 | stack=3
[19] Value: 20 | stack=2
...
Omega> :goto 10 ;; Jump back in time!
Control: Value: 3

6. Multi-Agent Stack Introspection - Agents Know Who Called Them
;; Define a multi-agent development pipeline
(define (orchestrator task)
(let* ((code (coding-agent task))
(tests (testing-agent code))
(review (review-agent code))
(deploy (deploy-agent code)))
(list :code code :tests tests :review review :deploy deploy)))
;; Each agent sees its full call context!

Output - each agent knows its role in the pipeline:
=== MULTI-AGENT DEVELOPMENT PIPELINE ===
Agent Call Context Log:
1. [coding-agent] called by: orchestrator
2. [testing-agent] called by: orchestrator
3. [review-agent] called by: orchestrator
4. [deploy-agent] called by: orchestrator

Deep nesting shows full chain:
[security-scanner] called by: tech-lead -> senior-dev -> code-quality-agent

Why this matters:
- Agents can adapt behavior based on who called them (urgent vs thorough)
- Enables reflexion-style self-improvement based on context
- Debug multi-agent workflows by seeing the full agent call graph
- LLMs understand their role in larger pipelines
npm test -- test/solver/agent-stack # Run the tests

7. Language Building - LLMs Coin DSLs On-The-Fly
;; Generate unique symbols for hygienic macros
(gensym "temp") ;; => temp-42
;; Register new syntax on-the-fly
(register-macro 'unless
(lambda (form)
(list 'if (list 'not (cadr form)) (caddr form))))
;; Create custom evaluators (DSLs!)
(define math-dsl (make-evaluator))
;; Reify evaluation as data - step through, fork, inspect
(define m (machine-new '(+ 1 2)))
(machine-step m) ;; Step one instruction
(machine-run m) ;; Run to completion
(machine-fork m) ;; Clone execution state

All 7 Sussman mechanisms for building languages are first-class primitives. LLMs can literally create new programming languages during a session. See Chapter 50.
8. PLM Document Search - RLM-Style with Backtracking
Recursive Language Model pattern: Documents stay external, amb enables automatic strategy backtracking:
(require 'plm/search)
;; Document stays EXTERNAL - not in LLM context window!
(define doc (make-document "...10,000 word report about funding..."))
;; Search with automatic strategy backtracking
;; Tries: paragraph -> sentence -> markdown -> line-by-line
(search-document doc "What was the Series A amount?")

The magic: If paragraph-level chunks don't find the answer, require #f triggers backtracking to try sentence-level, then markdown sections, etc. This is "poor man's logic programming" for document search.
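The strategy-backtracking behavior can be sketched without amb at all, as nested loops over choice points. This is a conceptual Python sketch with hypothetical names and a toy keyword validator standing in for the LLM check:

```python
def search_document(doc, question, strategies):
    """Try chunking strategies in order; fall through to the next strategy
    when no chunk satisfies the validator (the (require ...) analogue)."""
    for strategy in strategies:               # choice point 1: strategy
        for chunk in strategy(doc):           # choice point 2: chunk
            if question_answered(chunk, question):
                return strategy.__name__, chunk
    return None                               # all choices exhausted

def by_paragraph(doc):
    return doc.split("\n\n")

def by_sentence(doc):
    return [s for p in doc.split("\n\n") for s in p.split(". ")]

def question_answered(chunk, question):
    # Toy validator: keyword overlap instead of an LLM call.
    return "Series A" in chunk and "$" in chunk

doc = "Intro paragraph.\n\nFunding: the Series A raised $12M. Great quarter."
print(search_document(doc, "Series A amount?", [by_paragraph, by_sentence]))
# -> ('by_paragraph', 'Funding: the Series A raised $12M. Great quarter.')
```

In OmegaLLM the inner loop is implicit: failing the require backtracks to the next chunk, and exhausting chunks backtracks to the next strategy, with the document itself staying outside the context window.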
npm run omega-fast -- --file demo/lisp/plm-needle-haystack.lisp

9. Co-Recursive Tower (THE KEYSTONE)
Mutual recursion between symbolic (Lisp) and neural (LLM) computation:
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" - Lisp Runtime - " - - " """- (effect infer.op ...) ""- LLM Kernel - " - - - " - -"" callback.eval_lisp -"""""" (LLM calls back) - " - - - " """- (eval result) """"""""""""""-" - " - - - " - -"" final result -"""""""""""" - """""""""""""""""""""""""""""""""""""""""""""""""""""""""""""Proven working via Test R6 in test/oracle/reentry.spec.ts:
- LLM-A spawns LLM-B through ReqEval("(effect infer.op 'inner')")
- Inner LLM returns 100, outer adds 1 -> result is 101
- Full nested tower: Lisp -> LLM-A -> Lisp -> LLM-B -> Lisp -> LLM-A -> result
Reentry tools (LLM can call these during inference):
omega_eval(source) // Evaluate Lisp code
omega_apply(fn, args) // Apply function to args
omega_observe(query) // Query runtime state
omega_return(value) // Return final result

10. Fixpoint Iteration
Converge to stable states automatically:
;; Basic fixpoint - increment until >= 5
(fixpoint 1
(lambda (x) (if (< x 5) (+ x 1) x))
(lambda (a b) (equal? a b))
10)
;; => { kind: "success", solution: 5, cost: 5 }
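A minimal Python sketch of this fixpoint contract, mirroring the result shape shown above (the function name and result dict are illustrative, not OmegaLLM's internals):

```python
def fixpoint(initial, step, equal, max_steps):
    """Iterate step until equal(prev, next) holds or the step budget runs out."""
    state = initial
    for cost in range(1, max_steps + 1):
        nxt = step(state)
        if equal(state, nxt):
            # Converged: step produced a state equal to its input.
            return {"kind": "success", "solution": nxt, "cost": cost}
        state = nxt
    return {"kind": "failure", "reason": "max steps exceeded"}

result = fixpoint(1,
                  lambda x: x + 1 if x < 5 else x,
                  lambda a, b: a == b,
                  10)
print(result)  # -> {'kind': 'success', 'solution': 5, 'cost': 5}
```

The cost counts step applications: four increments (1 through 5) plus the final confirming step where the state stops changing.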
;; Cycle detection - catches infinite loops
(fixpoint-detect-cycle 'a
(lambda (x) (if (equal? x 'a) 'b (if (equal? x 'b) 'c 'a)))
(lambda (x) (symbol->string x))
10)
;; => { kind: "failure", reason: "Cycle detected" }
;; Term rewriting to fixpoint
(rewrite-fixpoint rules expr 'topdown 100)
;; Applies rewrite rules until no more changes

Self-healing build pattern:
(define (self-healing-build max-attempts)
(fixpoint
(build-and-get-errors)
(lambda (state)
(if (null? (state-errors state)) state
(begin (fix-errors-with-llm state) (build-and-get-errors))))
(lambda (old new) (equal? (state-errors old) (state-errors new)))
max-attempts))

11. Package System
OmegaLLM has a modular package system for reusable libraries:
;; Load packages by name (idempotent - won't reload)
(require 'plm/search) ;; Document search with backtracking
(require 'plm/chunking) ;; Chunking strategies (paragraph, sentence, markdown)
(require 'aliases/clojure) ;; Clojure-style syntax (defn, fn, ->, ->>)
(require 'workflow/pipeline) ;; Declarative pipelines with topological sort
(require 'workflow/stage) ;; Workflow stages
(require 'self-healing) ;; Fixpoint-based self-healing builds
(require 'monad) ;; Monadic composition
(require 'solve) ;; Solve language (constraint-based problem solving)
(require 'agents/selection) ;; Agent delegation with amb-based selection
(require 'cache/cached-infer) ;; Cached LLM inference
(require 'document-search) ;; Full document search pipeline
(require 'problem-recognizer) ;; Problem recognition + algorithmic solving
(require 'oracle/smart-infer) ;; Smart inference routing
;; Use :packages in REPL to see loaded packages

Available packages: plm/core, plm/chunking, plm/search, aliases/clojure, workflow/stage, workflow/pipeline, self-healing, monad, solve, agents/selection, cache/cached-infer, document-search, problem-recognizer, oracle/smart-infer, help
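Idempotent loading, as in the require forms above, can be sketched with a simple load cache. This Python sketch is hypothetical (OmegaLLM's require is a Lisp special form, not this function); it shows only the "run the loader at most once" contract:

```python
_loaded = {}

def require(name, loaders):
    """Idempotent load: a package's loader runs at most once per session."""
    if name not in _loaded:
        _loaded[name] = loaders[name]()   # run the side-effecting loader once
    return _loaded[name]

calls = []
loaders = {"plm/search": lambda: calls.append("loaded") or "search-api"}

require("plm/search", loaders)
require("plm/search", loaders)  # no-op: already cached
print(calls)  # -> ['loaded']
```

Caching the loader's result (rather than a boolean) also makes repeated requires cheap and lets a package expose its API object on every call.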
Package Resolution (4-tier search)
Packages are resolved through a 4-tier search system:
- Local: ./lib/ - project-local packages
- Extra: user-supplied paths via addSearchPath()
- npm: node_modules/@omega/*/lib/ for installed npm packages
- Global: ~/.omega/lib/ for system-wide packages
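The four-tier probe order can be sketched as a first-hit path search. A conceptual Python sketch (hypothetical function name; the npm tier is simplified to a single directory instead of the real `@omega/*/lib` glob):

```python
import os

def resolve_package(name, extra_paths=()):
    """Probe the four tiers in order; return the first existing candidate."""
    tiers = [
        "./lib",                             # 1. project-local
        *extra_paths,                        # 2. user-supplied (addSearchPath)
        "./node_modules/@omega",             # 3. npm-installed (simplified)
        os.path.expanduser("~/.omega/lib"),  # 4. global, system-wide
    ]
    for root in tiers:
        candidate = os.path.join(root, name + ".lisp")
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(f"package not found: {name}")
```

First-hit ordering is what lets a project-local `./lib` package shadow a globally installed one of the same name.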
# Show search paths in REPL
npm run omega-fast -- --cmd ":search-paths"

Package Discovery (REPL commands)
# List all registered packages
:registry
# Describe a package (exports, effects, dependencies)
:describe-package workflow/pipeline
# Find which package provides a symbol
:find-provider solve
# Search packages by name, export, or effect
:search-registry infer

Multi-Provider Oracle (Pi-AI)
The Pi-AI adapter supports multiple LLM providers with automatic failover:
import { PiAiMultiAdapter } from "./src/core/oracle/plugins/piai";
const adapter = new PiAiMultiAdapter([
  { provider: "anthropic", model: "claude-sonnet-4-20250514" },
{ provider: "openai", model: "gpt-4o" },
]);
// If Anthropic fails, automatically tries OpenAI

Monorepo Package Structure
packages/
core/ - @omega/core: AST, reader, evaluator, compiler
runtime/ - @omega/runtime: effects, governance, require, IO prims
llm/ - @omega/llm: oracle adapters (Anthropic, OpenAI, Pi-AI)
solve/ - @omega/solve: constraint solver, OPR bridge
chatgpt/ - @omega/chatgpt: ChatGPT transport for heavy inference

npm run omega-fast -- --file demo/lisp/package-require-demo.lisp

Based on Recursive Language Models (MIT, 2025).
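The failover behavior of the Pi-AI multi-adapter above can be sketched in a few lines. This is a conceptual Python sketch with hypothetical names (the real adapter is the TypeScript PiAiMultiAdapter): try providers in declared order and fall through on failure, keeping the errors for the final report.

```python
class MultiProviderOracle:
    """Try providers in order; fall through to the next on failure."""
    def __init__(self, providers):
        self.providers = providers  # list of (name, callable) pairs

    def infer(self, prompt):
        errors = []
        for name, call in self.providers:
            try:
                return name, call(prompt)
            except Exception as exc:      # a real adapter would narrow this
                errors.append((name, exc))
        raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):
    raise TimeoutError("anthropic unavailable")

def stable(prompt):
    return f"answer to: {prompt}"

oracle = MultiProviderOracle([("anthropic", flaky), ("openai", stable)])
print(oracle.infer("hello"))  # -> ('openai', 'answer to: hello')
```

Returning the winning provider's name alongside the answer matters for receipts: the audit trail should record which backend actually produced each response.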
Programmable Pipelines for Claude Agents
For Claude Code and other agents: OmegaLLM provides a programmable substrate for complex workflows.
Key Files for Agent Integration
| File | Purpose |
|------|---------|
| CLAUDE.md | Quick reference for Emacs integration, co-recursive tower, fixpoint |
| EMACS-EXPERIMENTAL-PIPELINE.md | Deep brainstorming on programmable pipelines |
| src/repl/commands/opr.ts | OPR command wiring (kernel invocation surface) |
| packages/solve/src/solver/fixpoint.ts | Fixpoint iteration primitives |
| test/oracle/reentry.spec.ts | Tests proving co-recursive tower works |
| codesmith/providers.py | Unified provider abstraction (Claude/Codex) |
| lib/workflow/ | Workflow DSL: stages, pipelines, executor, patterns |
| src/core/require/resolver.ts | 4-tier package resolution with caching |
| src/core/require/manifest.ts | Package registry + discovery |
| packages/oracle/piai/src/piai.ts | PiAI adapter integration |
| src/core/prims-io.ts | Extracted Node.js IO primitives |
| packages/ | Monorepo scaffold (@omega/core, runtime, llm, solve, chatgpt) |
Agent Delegation with amb/require
;; Try different agents until one succeeds
(define (try-agents task)
(let ((agent (amb 'claude-sonnet 'codex-o3 'claude-haiku)))
(let ((result (effect agent.run task)))
(require (result-valid? result))
result)))
;; Backtracking search automatically tries next agent on failure!

Heterogeneous Agent Swarm
# Via codesmith/providers.py
from providers import call_provider
# Same API, different backends
claude_result = call_provider("claude", "sonnet", prompt, context)
codex_result = call_provider("codex", "o3", prompt, context)

Pipeline as Workflow DSL (Job 030)
;; Load workflow libraries
;; (require 'workflow/stage)
;; (require 'workflow/pipeline)
;; (require 'workflow/executor)
;; Define pipeline as data with dependencies and on-fail handlers
(define my-pipeline
(make-pipeline
(list
(list 'stage 'build ':run "npm run build"
':on-fail (list 'fixpoint 'fix-build 3))
(list 'stage 'test ':depends-on (list 'build) ':run "npm test"
':on-fail (list 'agent 'claude 'sonnet))
(list 'stage 'lint ':depends-on (list 'build) ':run "npm run lint"
':on-fail 'auto-fix-lint))))
;; Validate: checks for missing dependencies
(pipeline-validate my-pipeline) ; => (ok) or (error "missing: x")
;; Get execution order via topological sort (Kahn's algorithm)
(pipeline-execution-order my-pipeline) ; => (build test lint)
;; Execute with REAL effects (shell.op, infer.op)
(run-pipeline my-pipeline)

On-fail handlers:
- 'fix-build - call function with error, retry if it returns #t
- (list 'fixpoint 'fixer 3) - iterate fix -> build until success or 3 attempts
- (list 'agent 'claude 'sonnet) - spawn LLM to analyze and fix
npm run omega-fast -- --file demo/lisp/ch59-workflow-dsl.lisp
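The topological ordering that pipeline-execution-order performs can be sketched with Kahn's algorithm, which the text names. A conceptual Python sketch (the stage representation as a dict of dependency lists is hypothetical):

```python
from collections import deque

def execution_order(stages):
    """Kahn's algorithm over stages, given as {name: [dependencies]}."""
    indegree = {s: len(deps) for s, deps in stages.items()}
    dependents = {s: [] for s in stages}
    for s, deps in stages.items():
        for d in deps:
            dependents[d].append(s)

    queue = deque(s for s, n in indegree.items() if n == 0)
    order = []
    while queue:
        s = queue.popleft()
        order.append(s)
        for t in dependents[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)

    if len(order) != len(stages):
        # Leftover stages mean a dependency cycle, the case validation reports.
        raise ValueError("cycle detected")
    return order

print(execution_order({"build": [], "test": ["build"], "lint": ["build"]}))
# -> ['build', 'test', 'lint']
```

The same traversal doubles as validation: any stage never reaching indegree zero is part of a cycle or depends on a missing stage.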
### Emacs Batch Operations (92% Context Savings)
Every `claude-*` function below is available after loading `claude-helpers.el`. Load once at session start:
```bash
"C:/Program Files/Emacs/emacs-30.2/bin/emacsclient.exe" --eval "(progn (load \"c:/Users/Richa/parmenides-dev/agent-harness/OmegaLLM/elisp-prototype/claude-helpers.el\") (claude-init))"If Emacs daemon isn't running: "C:/Program Files/Emacs/emacs-30.2/bin/runemacs.exe" --daemon
Built-in Help
Run (claude-help) to list every function. For topic-specific help:
(claude-help "search") ; Search functions
(claude-help "read") ; Read functions
(claude-help "edit") ; Edit functions
(claude-help "graph") ; Call graph functions
(claude-help "treesit") ; Tree-sitter functions
(claude-help "project") ; Project detection & safety
(claude-help "status") ; Status & utility
(claude-help "scripts") ; Script runner (shell escaping solution)
(claude-help "all") ; EverythingComplete Function Reference
| Function | What it does | Speed |
|----------|--------------|-------|
| Init | | |
| (claude-init) | Build index, load project (~3s) | ~3s |
| (claude-init-multi DIR1 DIR2...) | Index multiple project directories | ~3s/dir |
| (claude-build-index) | Rebuild index without reloading helpers | ~3s |
| Search | | |
| (claude-search "pattern") | Search index, fall back to grep | instant |
| (claude-search-index "regex") | Search function/type names only | instant |
| (claude-grep "regex" nil 30) | Content grep with context | fast |
| (claude-find-definitions "name") | All definitions of a symbol | instant |
| (claude-find-imports "symbol") | Every file that imports symbol | instant |
| (claude-find-exports "symbol") | Where symbol is exported from | instant |
| (claude-list-functions "path") | List functions in a file | instant |
| Read | | |
| (claude-read PATH START END) | Read file with line numbers | instant |
| (claude-read-file PATH) | Read entire file contents | instant |
| (claude-read-lines PATH S E) | Read specific line range | instant |
| (claude-read-between PATH RE1 RE2) | Read lines between two regex patterns | instant |
| (claude-read-region PATH RE CTX) | Read matches with surrounding context | instant |
| (claude-read-function FN FULL) | Read function source (FULL=t for entire body) | instant |
| (claude-read-callstack FN) | Trace callers + source + referenced types | ~1s |
| (claude-read-batch SPECS...) | Multiple reads in one call | instant |
| (claude-source "functionName") | Get source code (lazy loads file) | ~2ms |
| (claude-file-info "path") | File metadata (size, lines, language) | instant |
| Edit | | |
| (claude-edit-file "path" "old" "new") | Edit file (via MRU cache) | fast |
| (claude-write-file "path" "content") | Write entire file | fast |
| (claude-replace-all "old" "new") | Replace across ALL files | fast |
| (claude-quick-fix "path" LINE "old" "new") | Fix specific line | fast |
| (claude-add-import "path" "import ...") | Add import statement | fast |
| Call Graph | | |
| (claude-callers "functionName") | Who calls this function? | instant |
| (claude-calls "functionName") | What does this function call? | instant |
| (claude-trace "functionName" 5) | Recursive call tree, N levels deep | instant |
| (claude-trace-full "name") | Callers + call tree (bidirectional) | instant |
| (claude-class-members CLASS) | Extract class members (nil=all) | ~1s |
| (claude-dependency-tree "module") | Module dependency tree | instant |
| Git | | |
| (claude-git-status) | Git status | fast |
| (claude-git-diff) | Git diff | fast |
| (claude-git-log) | Recent git log | fast |
| Tree-sitter (per-file, structural) | | |
| (claude-ts-query PATH QUERY) | Raw S-expression tree-sitter query on any file | instant |
| (claude-ts-functions PATH) | List all functions structurally (catches nested fns regex misses) | instant |
| (claude-ts-types PATH) | List all types, interfaces, classes | instant |
| (claude-ts-imports PATH) | Structural import extraction | instant |
| (claude-ts-class-methods PATH CLASS) | Class methods grouped by class | instant |
| (claude-ts-call-sites PATH FN) | Find call sites (won't match in comments/strings) | instant |
| (claude-ts-ast-summary PATH DEPTH) | AST node type distribution | instant |
| (claude-compare-index PATH) | Side-by-side: regex index vs tree-sitter accuracy | instant |
| Project & Safety | | |
| (claude-detect-project DIR) | Auto-detect project root via project.el | instant |
| (claude-project-files PATTERN) | List project files via project.el | fast |
| (claude-headless BODY...) | Execute with headless safety (prevents daemon hangs) | -- |
| (claude-safe-eval FORM) | Eval with inhibit-interaction | -- |
| Utility | | |
| (claude-status) | Index + cache stats | instant |
| (claude-health-check) | Full health check (index, cache, Emacs) | instant |
| (claude-ping) | Quick liveness check | instant |
| (claude-buffer-count) | Count loaded buffers by type | instant |
| (claude-gc-buffers) | Garbage collect unused buffers | fast |
| (claude-reset-state) | Reset index + cache (full restart) | fast |
| (claude-omega-eval "(+ 1 2)") | Run OmegaLLM expressions | ~1s |
| (claude-help) | Built-in help (lists everything) | instant |
| (claude-help "TOPIC") | Detailed help for a topic | instant |
| (claude-run-script "path.el" "result.txt") | Run complex elisp from file (solves escaping) | varies |
Batch Operations (the whole point)
# BAD: 5 separate tool calls (1250 tokens, 5 round-trips)
# GOOD: 1 Emacs call (100 tokens, 1 round-trip)
emacsclient --eval '(progn
(claude-edit-file "a.ts" "old" "new")
(claude-edit-file "b.ts" "old" "new")
(claude-edit-file "c.ts" "old" "new")
"done")'Shell Escaping Solution
For simple calls, inline works:
emacsclient --eval '(claude-search "pattern")'
emacsclient --eval '(claude-ts-functions "file.ts")'

For complex elisp (regex, nested quotes, multi-line): write a .el file, then run it:
# Step 1: Write the .el script to scratchpad (Write tool is fine for temp scripts)
# Step 2: Run it
emacsclient --eval '(claude-run-script "scratchpad/my-script.el" "scratchpad/result.txt")'
# Step 3: Read result.txt

This solves shell escaping permanently. No more fighting with nested quotes in emacsclient --eval.
Tree-sitter Structural Analysis (instant, per-file)
The regex index handles project-wide lookups (~3s build for 775 files). Tree-sitter provides per-file structural precision with zero false positives:
# List functions structurally (catches nested fns regex misses)
emacsclient --eval '(claude-ts-functions "src/core/prims.ts")'
# Find all call sites of a function (won't match in comments/strings)
emacsclient --eval '(claude-ts-call-sites "src/core/prims.ts" "applyProcedure")'
# Raw S-expression tree-sitter query
emacsclient --eval '(claude-ts-query "src/core/prims.ts"
(quote ((throw_statement
(new_expression constructor: (identifier) @error-type)))))'
# Compare regex index vs tree-sitter accuracy
emacsclient --eval '(claude-compare-index "src/core/prims.ts")'

| Layer | Use For | Speed |
|-------|---------|-------|
| Regex index | Project-wide lookups (across 775 files) | ~3s build |
| Tree-sitter | Single-file structural precision (nested fns, call sites) | instant |
Proven Examples - Real Output From This Codebase
These are real commands run against OmegaLLM (460 files, 85K lines). Each replaces dozens or hundreds of tool calls with a single Emacs call.
How to run: Write an elisp file to the scratchpad, load it, call the function:
emacsclient --eval "(progn (load \"path/to/script.el\") (my-function))"For complex elisp with regex, always write to a file first to avoid shell escaping.
Example 1: Module Dependency Graph (0.79s - replaces 271+ Grep/Read calls)
Scans every TS file's imports, builds a full dependency graph, finds hub modules and circular dependencies.
(defun claude-module-graph ()
(let ((edges '())
(in-degree (make-hash-table :test 'equal))
(out-degree (make-hash-table :test 'equal)))
(dolist (buf (buffer-list))
(when (and (buffer-file-name buf)
(string-match "\\.ts$" (buffer-file-name buf)))
(with-current-buffer buf
(let ((from (file-name-sans-extension
(file-name-nondirectory (buffer-file-name buf)))))
(goto-char (point-min))
(while (re-search-forward "from [\"']\\.\\.?/\\([^\"']+\\)[\"']" nil t)
(let ((to (file-name-sans-extension
(file-name-nondirectory (match-string 1)))))
(unless (string= from to)
(push (cons from to) edges)
(puthash to (1+ (gethash to in-degree 0)) in-degree)
(puthash from (1+ (gethash from out-degree 0)) out-degree))))))))
(let ((hubs '()) (fanout '()) (cycles '()))
(maphash (lambda (k v) (push (cons k v) hubs)) in-degree)
(setq hubs (seq-take (sort hubs (lambda (a b) (> (cdr a) (cdr b)))) 15))
(maphash (lambda (k v) (push (cons k v) fanout)) out-degree)
(setq fanout (seq-take (sort fanout (lambda (a b) (> (cdr a) (cdr b)))) 10))
(dolist (edge edges)
(let ((rev (rassoc (car edge) edges)))
(when (and rev (string= (car rev) (cdr edge)))
(let ((cy (format "%s <-> %s" (car edge) (cdr edge))))
(unless (member cy cycles) (push cy cycles))))))
(format "HUBS:\n%s\n\nFAN-OUT:\n%s\n\nCIRCULAR: %s\nEdges: %d"
(mapconcat (lambda (h) (format "  %3d <- %s" (cdr h) (car h))) hubs "\n")
(mapconcat (lambda (f) (format "  %3d -> %s" (cdr f) (car f))) fanout "\n")
(mapconcat #'identity (seq-take cycles 10) ", ")
(length edges)))))

Actual output (0.79s):
HUBS:
144 <- types
116 <- values
57 <- hash
41 <- machine
26 <- registry
FAN-OUT:
223 -> index
49 -> prims
48 -> types
32 -> runtime
CIRCULAR: runtimeImpl <-> runtime, run <-> runtime, expr <-> value
Edges: 1079

Example 2: Semantic Grep - Pattern Search With Function Context (1.07s - replaces 50+ calls)
Finds every match of a pattern across the codebase, showing which function each match is inside plus surrounding context.
(defun claude-semantic-grep (pattern &optional context-lines)
(let ((ctx (or context-lines 2)) (matches '()))
(dolist (buf (buffer-list))
(when (and (buffer-file-name buf)
(string-match "\\.ts$" (buffer-file-name buf)))
(with-current-buffer buf
(goto-char (point-min))
(while (re-search-forward pattern nil t)
(let* ((match-line (line-number-at-pos))
(fname (file-name-nondirectory (buffer-file-name buf)))
(fn-name (save-excursion
(if (re-search-backward
"\\(?:export \\)?\\(?:async \\)?function \\([a-zA-Z_][a-zA-Z0-9_]*\\)" nil t)
(match-string 1) "<top-level>")))
(start (save-excursion (forward-line (- ctx)) (line-beginning-position)))
(end (save-excursion (forward-line (1+ ctx)) (line-end-position)))
(snippet (buffer-substring-no-properties start end)))
(push (format "--- %s:%d in %s() ---\n%s"
fname match-line fn-name (string-trim-right snippet))
matches))))))
(format "/%s/ -- %d matches\n\n%s"
pattern (length matches)
(mapconcat #'identity (reverse (seq-take (reverse matches) 25)) "\n\n"))))

Command: (claude-semantic-grep "throw new.*Error")
Actual output (1.07s, 654 matches found):
/throw new.*Error/ -- 654 matches
--- prims.ts:211 in registerConditionPrims() ---
if (!helpers.isCallable(thunk)) {
throw new Error("handler-bind: expected thunk");
}
--- vm.ts:163 in peek() ---
if (state.stack.length === 0) {
throw new Error("VM stack underflow");
}

Example 3: Auto-Generate API Reference (1.23s - replaces 271+ Read calls)
Scans every TS file, extracts all exported function signatures with parameter types and return types, groups by module.
(defun claude-generate-api-reference ()
(let ((modules (make-hash-table :test 'equal)))
(dolist (buf (buffer-list))
(when (and (buffer-file-name buf)
(string-match "\\.ts$" (buffer-file-name buf)))
(with-current-buffer buf
(let ((fname (file-name-nondirectory (buffer-file-name buf))))
(goto-char (point-min))
(while (re-search-forward
"^export \\(async \\)?function \\([a-zA-Z_][a-zA-Z0-9_]*\\)(\\([^)]*\\))\\(?:: \\([^\n{]+\\)\\)?" nil t)
(let ((async-p (match-string 1))
(name (match-string 2))
(params (string-trim (match-string 3)))
(ret (if (match-string 4) (string-trim (match-string 4)) "void")))
(puthash fname
(cons (format "  %s%s(%s): %s"
(if async-p "async " "") name
(if (> (length params) 60)
(concat (substring params 0 57) "...")
params)
ret)
(gethash fname modules '()))
modules)))))))
(let ((sorted '()))
(maphash (lambda (k v) (when (> (length v) 2)
(push (cons k (reverse v)) sorted))) modules)
(setq sorted (sort sorted (lambda (a b) (string< (car a) (car b)))))
(format "API REFERENCE (%d modules)\n\n%s"
(length sorted)
(mapconcat (lambda (m) (format "### %s\n%s" (car m)
(mapconcat #'identity (cdr m) "\n")))
(seq-take sorted 30) "\n\n")))))

Actual output (1.23s, 941 exported functions across 30+ modules):
API REFERENCE (30 modules)
### actor.ts
resetActorRegistry(): void
createActor(scheduler, initialState, ...): void
sendMessage(scheduler, actorId, message, ...): void
### anf.ts
toANF(expr, sourceLabel?): ANFProgram
countBindings(expr): number
findEffectOps(expr): Set<string>
anfToString(expr, indent = 0): string

The Math
| Task | Without Emacs | With Emacs | Savings |
|------|--------------|------------|---------|
| Module dependency graph | 271+ Read/Grep calls | 1 call, 0.79s | 271x fewer calls |
| Semantic grep (654 matches) | 50+ Grep+Read calls | 1 call, 1.07s | 50x fewer calls |
| API reference generation | 271+ Read calls | 1 call, 1.23s | 271x fewer calls |
Pitfalls
| Problem | Solution |
|---------|----------|
| Many bash calls | Use single progn |
| Shell escaping | Write .el file + claude-run-script |
| Long output corruption | Return simple values or write to file |
| Daemon not running | runemacs.exe --daemon |
Features
"' Governed Execution
- Policy enforcement: Budget limits, security boundaries, capability restrictions
- Receipts & provenance: Every execution produces auditable evidence
- Validation boundaries: Type checking, schema validation, semantic predicates
Replayable & Debuggable
- Step-through debugging: Inspect machine state at every step
- Time travel: Rewind execution, jump to specific steps
- Breakpoints: Stop on step count, expression type, or effect operations
- Session snapshots: Save/restore entire evaluator state
Perfect for AI Agents
Why AI agents love OmegaLLM:
Persistent sessions Your definitions, functions, and state survive across tool calls. The agent sees exactly what programs you've already written in the session. No need to re-explain context every time.
Query the runtime The Oracle can evaluate subexpressions, inspect the environment, and get actual runtime values before responding. Not a one-shot completion - an interactive coroutine that reasons about live code.
Traceable execution Every step recorded with full provenance. The agent can inspect what happened, debug failures, and understand exactly where things went wrong. Complete audit trail of every LLM call and effect.
Deterministic replay Same inputs = same outputs. Agents can confidently retry failed operations knowing the behavior is reproducible. Save session snapshots and restore them later.
Session isolation: Multiple named workspaces (omega --session agent1, omega --session agent2). Like tmux for code: work on different tasks without interference.
Effect boundaries: LLM calls are reified as (effect infer.op ...). Clean separation between computation and inference. Budget enforcement. Policy compliance.
Interactive debugging: Step through execution (:step), set breakpoints, inspect state at any point (:state). Time travel (:back, :goto). Agents can diagnose issues systematically.
SICP-Style Primitives
- Higher-order functions: map, filter, fold, streams
- First-class continuations: call/cc for non-local control
- Lazy evaluation: Infinite streams, delayed computation
- Backtracking search: amb operator for search problems
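As a quick illustration of the control primitives listed above, here is a minimal sketch of a non-local exit with call/cc, assuming Scheme-style semantics; first-negative is an illustrative helper, not part of the documented API:

```lisp
;; Hedged sketch: non-local exit via call/cc, assuming Scheme-style semantics.
;; first-negative is an illustrative helper, not a documented primitive.
(define (first-negative lst)
  (call/cc
    (lambda (return)
      (fold-left (lambda (acc x) (if (< x 0) (return x) acc))
                 'none
                 lst))))
```

Invoking the continuation `return` abandons the rest of the fold immediately, which is the same mechanism amb uses for backtracking.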
Semantic Computing
- LLM-backed predicates: Use LLMs for validation/classification
- Distributive inference: Generate distributions over answers
- Repair/retry loops: Automatic validation and repair
- Recursive decomposition: Break problems into semantic subproblems
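The repair/retry pattern above can be sketched in a few lines; valid-answer? and the retry bound here are illustrative assumptions, not a documented API:

```lisp
;; Hedged sketch of a validate-and-repair loop.
;; valid-answer? is a hypothetical predicate (could itself be LLM-backed).
(define (infer-with-repair prompt retries)
  (let ((answer (effect infer.op prompt)))
    (if (or (valid-answer? answer) (= retries 0))
        answer
        (infer-with-repair
          (list "Previous answer was invalid. Retry: " prompt)
          (- retries 1)))))
```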
The Solve Language (HETA)
OmegaLLM includes a language for recursive problem decomposition - not a framework, but a proper language with semantic closure:
;; THE ENTIRE LANGUAGE IN ONE EXAMPLE:
(solve
(goal :intent :search
:artifact (artifact :kind :text :value document)
:deliverable :hitset
:invariants '(:non-empty)
:hints '(:query "liability clause")))
;; What happens:
;; 1. artifact.kind = :text -> lookup artifact algebra
;; 2. measure(artifact) > leaf-size? -> split into parts
;; 3. solve each part RECURSIVELY
;; 4. deliverable = :hitset -> lookup result algebra
;; 5. combine(sub-results)
;; 6. verify(result, invariants)
;; 7. return Meaning
Key concepts:
- Artifact: Typed data with metadata (text, code, contracts, etc.)
- Goal: Intent + artifact + deliverable + invariants + hints
- Artifact Algebra: How to decompose (measure, split, leaf)
- Result Algebra: How to combine (identity, combine)
- Semantic Closure: solve(Goal) -> Meaning
npm run manual 50 # See solve language demo
See Chapter 50: The Solve Language for full documentation.
Language Building (The Sussman Lattice)
OmegaLLM supports all 7 Sussman mechanisms for building new languages:
| Mechanism | Primitive | What It Enables |
|-----------|-----------|-----------------|
| Syntactic Extension | register-macro, expand-macro | Define new syntax forms like unless, when-let |
| Parameterized Evaluators | make-evaluator, eval-in | Create domain-specific languages with custom primitives |
| Meta-Circular Towers | eval, make-machine | Evaluators that evaluate evaluators |
| Control Operators | call/cc, amb, require | Backtracking, generators, exceptions, coroutines |
| Semantic Reification | machine-step, machine-control | Debuggers, analyzers, time-travel |
| Symbol Generation | gensym | Hygienic macro writing |
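The syntactic-extension row can be sketched with the unless form the table mentions; the exact register-macro signature is an assumption here, so treat this as illustrative rather than the documented API:

```lisp
;; Hedged sketch: defining unless via register-macro.
;; The transformer signature (condition body) -> expansion is assumed.
(register-macro 'unless
  (lambda (condition body)
    (list 'if condition 'nil body)))

;; expand-macro should then rewrite roughly as:
;; (unless done? action) -> (if done? nil action)
(expand-macro '(unless done? (effect infer.op "keep going")))
```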
Example: LLM coins a DSL on the fly
;; Define a custom language for contract analysis
(define contract-dsl
(make-evaluator
:extend (list
(cons 'parse-contract parse-contract-fn)
(cons 'extract-obligations extract-fn)
(cons 'check-compliance check-fn))))
;; Use the DSL
(eval-in contract-dsl
'(check-compliance
(extract-obligations (parse-contract doc))
    policy))
npm run demo-dsl # See language-building demo
Why OmegaLLM
Chat-based LLM vs OmegaLLM (the core difference)
Chat is a single, linear context window:
- context grows until it blows up
- "subproblems" are informal and hard to isolate
- validation is human/manual
- "try again" adds more noise and makes drift worse
- debugging is essentially log archaeology
OmegaLLM is a semantic computation plane:
- LLM calls are scoped and interruptible
- each call can be validated at a boundary (types/schemas/semantic predicates)
- failures can trigger repair/retry or backtracking search
- recursion decomposes problems into bounded subcalls
- the runtime records receipts for audit and deterministic replay
- state persists in a named workspace/session across discrete invocations
In other words: chat is conversational; OmegaLLM is executable semantics.
Quick Start
1. Install Dependencies
npm install
npm run build
2. Configure Credentials
OmegaLLM auto-detects credentials in this priority order:
Option A: Codex (recommended - zero per-token cost via ChatGPT Pro)
If you have a ChatGPT Pro subscription and the Codex CLI installed, OmegaLLM will automatically find your OAuth token at ~/.openclaw/agents/main/agent/auth-profiles.json. No configuration needed - it just works.
Alternatively, copy the file locally:
mkdir -p .auth
cp ~/.openclaw/agents/main/agent/auth-profiles.json .auth/
This uses gpt-5.1-codex-mini via chatgpt.com/backend-api at zero per-token cost.
Option B: Standard API keys
cp .env.example .env
Then edit .env:
# For Claude (Anthropic)
ANTHROPIC_API_KEY=sk-ant-your-actual-key-here
# For GPT-4 (OpenAI) - per-token billing
OPENAI_API_KEY=sk-your-actual-key-here
At least one credential source is required. Both .env and .auth/ are gitignored.
3. Run the REPL
npm run omega-fast
First thing: Type :help to see what you can do:
Omega> :help
# Shows all REPL commands - debugging, sessions, execution control, etc.
Try some examples:
Omega> (+ 1 2)
=> 3
Omega> (effect infer.op "Hello!")
=> "Hi there!"
Omega> :state
# Inspect the full evaluator state (control, environment, store, continuation)
4. Run the Demos
Most impressive demo (start here!):
npm run demo # Showcase: Higher-order functions, backtracking search, agentic LLM!
This demo shows:
- Map/filter over LLM operations
- Backtracking search with amb + semantic validation
- Agentic LLM that queries your code to answer questions
More examples:
npm run manual 1 # Chapter 1: Getting Started
npm run manual 5 # Chapter 5: Backtracking search with amb
npm run manual 7 # Chapter 7: Lazy streams
npm run manual 8 # Chapter 8: The debugger
See all 60 demos: DEMO-GALLERY.md
Common Gotchas
Things that will trip you up:
1. Run :help first!
The REPL has tons of commands. Type :help immediately to see debugging, sessions, breakpoints, time travel, etc.
2. CLI flag is --cmd, not --eval
# WRONG
npm run omega-fast -- --eval "(+ 1 2)"
# RIGHT
npm run omega-fast -- --cmd "(+ 1 2)"
3. Session modes are different
--session <name> in batch mode auto-loads and auto-saves state (repl-<name>.json).
:session save/load/goto manages event-log timeline sessions (sessions/<name>.jsonl + index).
Timeline restore flow (:session commands):
# Session 1: Save before quitting
Omega> (define x 42)
Omega> :session save mywork
Omega> :quit
# Session 2: Load AND goto to restore
Omega> :session load mywork
Omega> :session goto 3 # <-- THIS restores the environment!
Omega> x
=> 42
:session load only loads the trace. :session goto <seq> actually restores the environment.
4. Session files location
Batch session snapshots:
.omega-session/repl-<name>.json
Timeline session event logs:
.omega-session/sessions/<name>.jsonl # Event log
.omega-session/sessions/<name>.index.json # Index with checkpoints
5. Use the Demo Gallery!
Don't guess at syntax. The Demo Gallery has 60 working examples with actual LLM outputs.
npm run manual 5 # See amb backtracking in action
npm run manual 7 # See lazy streams
The Manual: Structure and Interpretation of Linguistic Programs
SICP for the Age of Language Models
The complete user manual adapts the principles of Structure and Interpretation of Computer Programs (SICP) for inference programming with LLMs.
What's in the Manual
- 37 Chapters From basics to metalinguistic abstraction to agent infrastructure
- 60 Working Examples Every concept demonstrated with runnable code
- SICP Principles Applied Higher-order functions, streams, nondeterminism, metacircular evaluation
- Progressive Learning Start simple, build to advanced patterns
Start Here
- Table of Contents Navigate all 37 chapters
- Introduction Why this manual exists
- Quick Reference Cheat sheet for common operations
- Chapter 1: Getting Started Your first steps
Manual Structure
Part I: OmegaLLM Basics (Chapters 1-10)
- Getting started, LLM calls, composition, higher-order functions
- Nondeterministic search, multi-shot sampling, lazy streams
- The debugger, agentic REPL, full API reference
Part II: SICP Principles for Inference (Chapters 11-27)
- Building abstractions with semantic procedures (Ch 11-14)
- Semantic data structures (Ch 15-18)
- State and concurrency (Ch 19-23)
- Metalinguistic abstraction (Ch 24-27)
Part III: Agent Infrastructure (Chapters 50-60)
| Ch | Title | Feature | Key Primitive |
|----|-------|---------|---------------|
| 50 | The Solve Language (HETA) | Type-directed problem decomposition | (require 'solve) |
| 51 | Problem Recognizer | NL to schema to solve pipeline | (pr.recognize+solve story) |
| 53 | Build Effect | Structured build error capture | (effect build.run.op) |
| 54 | LLM Reentry Build | Co-recursive Lisp/LLM tower | omega_eval tool |
| 55 | Agent Spawning | Multi-provider agent dispatch | (effect agent.spawn.op ...) |
| 56 | Self-Healing Builds | Fixpoint build error correction | (self-healing-build n) |
| 57 | Agent Selection amb | Nondeterministic agent fallback | (amb-agent-fix error) |
| 58 | Semantic Cache | LLM response caching | (cached-infer prompt) |
| 59 | Workflow DSL | Declarative pipelines as data | (pipeline stages...) |
| 60 | Smart Inference | Dynamic model selection/routing | (infer-with-model prompt model) |
Demo Gallery
See all 60 demos with live LLM outputs: DEMO-GALLERY.md
Quick preview of key demos:
Getting Started
- ch01-getting-started.lisp - Basic syntax and primitives
- ch02-llm-calls.lisp - Your first LLM inference
- ch03-composition.lisp - Composing LLM calls
- ch04-higher-order.lisp - map, filter, fold over semantic operations
Advanced Features
- ch05-nondeterministic.lisp - Search with amb operator
- ch06-multi-shot.lisp - Generate multiple candidates
- ch07-lazy-streams.lisp - Infinite sequences (SICP Ch3)
- ch08-debugger.lisp - Step-through debugging
AI Agent Tools
- agent-security-audit.lisp - Map security analysis over entire codebase (vs 100 serial tool calls!)
- ch09-agentic-repl.lisp - Building agents with sessions
- ch13-higher-order-inference.lisp - LLMs over LLMs
- ch19-conversational-state.lisp - Stateful conversations
Metalinguistic Abstractions
- ch24-metacircular.lisp - Evaluator in Omega
- ch27-logic-programming.lisp - Prolog-style queries
- ch22-concurrent-inference.lisp - Parallel LLM calls
System Features
- ch48-budget-management.lisp - Budget enforcement
- ch64-repl-discovery.lisp - Discoverability and help surfaces
- auto-traceability.lisp - Requirements tracing
Agent Infrastructure (Chapters 50-60)
- ch50-solve-language.lisp - The Solve Language (HETA): type-directed problem decomposition
- ch51-problem-recognizer.lisp - Problem Recognizer: NL to schema to solve pipeline
- ch53-build-effect.lisp - Build Effect: structured build error capture
- ch54-llm-reentry-build.lisp - LLM Reentry Build: co-recursive Lisp/LLM tower
- ch55-agent-spawn.lisp - Agent Spawning: multi-provider agent dispatch
- ch56-self-healing-build.lisp - Self-Healing Builds: fixpoint build error correction
- ch57-agent-amb-selection.lisp - Agent Selection amb: nondeterministic agent fallback
- ch58-semantic-cache.lisp - Semantic Cache: LLM response caching
- ch59-workflow-dsl.lisp - Workflow DSL: declarative pipelines as data
- ch60-smart-infer.lisp - Smart Inference: dynamic model selection/routing
Run any demo:
npm run manual 1 # Run chapter 1
npm run manual 8 # Run chapter 8 (debugger demo)
REPL Guide
The REPL is the primary way to use OmegaLLM. Start here.
Starting the REPL
npm run omega-fast # Recommended: Fast build, no type checking
npm run omega # Full build with type checking
npm run omega-repl # REPL only (after building)
First command you should run: :help
This shows ALL available REPL commands - debugging, execution control, sessions, breakpoints, time travel, and more.
REPL Commands Reference
Basic Commands
:help, :h **RUN THIS FIRST** - Shows all REPL commands
:quit, :q Exit the REPL
:env [name] Show environment bindings
:defs Show all user definitions
Execution Control
:step [n] Execute n steps (default: 1)
:run, :continue, :c Run to completion or next breakpoint
:stop Stop current execution
:state, :st Show current machine state (CEKS)
:control Show control expression
:stack Show continuation stack
Debugging
:debug <expr> Start debugging an expression
:break step N Set breakpoint at step N
:break expr TAG Break when evaluating expression with TAG
:break effect OP Break when performing effect OP
:breaks, :breakpoints List all breakpoints
:delbreak <id> Delete breakpoint
:toggle <id> Enable/disable breakpoint
Time Travel
:trace Show execution trace
:goto <step> Jump to specific step in trace
:back [n] Rewind n steps (default: 1)
:history [n] Show recent execution history
Snapshots & Persistence
:save <name> Save current state as snapshot
:restore <name> Load snapshot
:snapshots, :snaps List all snapshots
:export <name> <file> Export snapshot to file
Recording
:record on|off Toggle trace recording
:dump <file> Save trace to file
:replay <file> Load and replay trace from file
File Loading & Packages
:loadfile <path> Load and evaluate code from file
:packages List loaded packages
(require 'pkg) Load a package (e.g., 'document-search)
(load "path.lisp") Load a specific file
Agentic LLM Mode
:ask <question> Ask LLM with tool-use (it can eval code iteratively!)
:traces List recent LLM interaction traces
:trace <id> Show trace summary
:trace <id> -v Show full trace (prompts, responses, tool calls)
OPR (Omega Protocol Runtime)
:opr-list List available OPR kernels
:opr-run <kernel> <json> Run kernel with program JSON
:opr-receipts Show OPR receipt chain for session
:opr-verify [file] Verify OPR receipt chain integrity
Advanced Inspection
:stack Show call stack
:frame <n> Inspect stack frame N
:control Show current control expression/value
Example REPL Session
Omega> (define (factorial n)
(if (= n 0) 1 (* n (factorial (- n 1)))))
=> <closure>
Omega> (factorial 5)
=> 120
Omega> :defs
;; Shows all definitions including factorial
Omega> :debug (factorial 3)
;; Enters step-by-step debugger
Omega> :break step 5
;; Set breakpoint at step 5
Omega> :run
;; Runs until breakpoint
Omega> :state
;; Shows CEKS machine state
Omega> :save my-session
;; Saves entire state
Omega> :quit
Sessions: Persistent State for AI Agents
Why sessions matter: Most LLM agent frameworks treat each tool call as isolated. OmegaLLM gives agents persistent, named workspaces that survive across discrete invocations.
The Problem with Stateless Tool Calls
Traditional approach:
# Agent calls tool multiple times, but state is lost each time
agent.call_tool("eval", "(define x 10)") # x defined
agent.call_tool("eval", "(+ x 5)") # ERROR: x is undefined!
OmegaLLM's Solution: Named Sessions
Within a single REPL session, state persists naturally:
Omega> (define x 42)
=> x
Omega> (+ x 10)
=> 52
CLI Session Flag (NEW)
Run commands against a named session directly from CLI:
# Define in a session
npm run omega-fast -- --session myproject --cmd "(define x 42)"
# Later, x is still there
npm run omega-fast -- --session myproject --cmd "x" # => 42
# Sessions auto-save after each command
Project-local sessions: Set OMEGA_SESSION_DIR=./.omega-sessions in your .env.
Across separate process invocations, use :session save and :session load + :session goto:
# Session 1: Define things and save
$ npm run omega-fast
Omega> (define x 42)
=> x
Omega> (define (double n) (* n 2))
=> double
Omega> :session save mysession
Session saved as 'mysession'
Omega> :quit
# Session 2: Later (hours/days), restore and continue
$ npm run omega-fast
Omega> :session load mysession
Loaded session 'mysession' (6 events)
Omega> :session goto 5 # Jump to checkpoint to restore env
Jumped to seq 5
Omega> x # x is restored!
=> 42
Omega> (double x)
=> 84
Session files are stored in: OmegaLLM/.omega-session/sessions/ (relative to project root)
When you're working from inside the OmegaLLM/ directory, they're at .omega-session/sessions/.
Example session: After running npm install, an example session getting-started.jsonl is automatically created. Try loading it:
Omega> :session load getting-started
Omega> :session goto 11
Omega> greeting
=> "Hello, OmegaLLM!"
What's in a Session?
A session is not just a variable dictionary. It's a complete, persistent evaluator context with full debugging capabilities:
Persistent State
- Environment - All bindings (variables, functions) survive across invocations
- Store - All allocated values remain in memory
- Code - Your definitions persist between calls
Fully Traceable
- Execution trace - Every step recorded
- Receipts - Provenance records for every LLM call and effect
- History - Complete audit trail of what happened
Debuggable & Steppable
- Step through execution - Execute one step at a time (:step)
- Breakpoints - Pause on specific steps, expressions, or effects
- Inspect state - See control, environment, store, continuation at any point (:state)
- Time travel - Rewind and replay execution (:back, :goto)
Replayable
- Deterministic replay - Recreate exact execution from trace
- Save/restore - Snapshot entire session state (:save, :restore)
- Export traces - Save execution history to files for analysis
Session Commands
:session list List all saved sessions
:session save <name> Save current session to disk
:session load <name> Load a session's trace (doesn't restore env yet)
:session goto <seq> Jump to sequence number, RESTORES environment
:session trace View the session's execution trace
:session fork <name> Fork current session to new name
Important: :session load only loads the trace. You must :session goto <seq> to actually restore the environment state.
# Sessions persist on disk at (from project root):
OmegaLLM/.omega-session/sessions/<name>.jsonl # Event log
OmegaLLM/.omega-session/sessions/<name>.index.json # Index with checkpoints
# Or if you're inside OmegaLLM/ directory:
.omega-session/sessions/<name>.jsonl
.omega-session/sessions/<name>.index.json
Sessions in the REPL
# Start REPL (with optional session name for recording)
npm run omega-fast
npm run omega-fast -- --session my-work
Complete example of session persistence:
# === First session ===
Omega> (define data (list 1 2 3))
=> data
Omega> (define (sum lst) (fold-left + 0 lst))
=> sum
Omega> (sum data)
=> 6
Omega> :session save mywork
Session saved as 'mywork'
Omega> :quit
# === Later: new process ===
$ npm run omega-fast
Omega> :session list
Saved sessions:
mywork (8 events, 1 checkpoints)
Omega> :session load mywork
Loaded session 'mywork' (8 events)
Use :session goto <seq> to jump, :session trace to view
Omega> :session trace
[000] REPL > (define data (list 1 2 3))
[001] EVAL ~ (define data (list 1 2 3))
[002] OUT => data
...
[007] SAVE * checkpoint (manual)
Omega> :session goto 7
Jumped to seq 7
Replayed 0 steps
Omega> data # Environment restored!
=> (1 2 3)
Omega> (sum data)
=> 6
Why This Matters for AI Agents
Use case: An AI agent that maintains state across tool calls.
The agent keeps a single REPL process running (or uses :session save/:session load + :session goto to persist across restarts):
;; Agent's first tool call
Omega> (define files (list "auth.ts" "user.ts" "api.ts"))
=> files
;; Agent's second tool call (same REPL session)
Omega> (define issues (filter security-issue? files))
=> issues
;; Agent's third call
Omega> (generate-report issues)
=> "Security report: 2 issues found..."
;; Agent saves before shutdown
Omega> :session save agent-review-123
Later, the agent can restore:
Omega> :session load agent-review-123
Omega> :session goto 8
Omega> issues ;; Still there!
=> ("auth.ts" "api.ts")
The evaluator state is durable, like tmux for code execution.
See ARCHITECTURE/28-SESSION.md for implementation details.
Core Primitives: Effects, Search, Streams
Effects: Reified Boundaries (Commands, not ad-hoc side effects)
;; LLM inference
(effect infer.op (list "Summarize in one sentence: " text))
;; Multi-shot sampling (distribution of candidates)
(effect search.op (list "Rewrite in three tones: " request))
;; Tooling (policy may gate these)
(effect file.read.op "path/to/file.txt")
(effect file.write.op "path/to/file.txt" "content")
(effect shell.op "ls -la")
;; Build effect - returns structured error info for self-healing builds
(effect build.run.op) ;; Returns: {ok, errors[], warnings[], duration_ms, raw_output}
;; Each error has: file, line, col, severity, code, message
Environment (LLM adapters)
The runtime auto-detects credentials in priority order: Codex > Anthropic > OpenAI.
Option 1: Codex OAuth (recommended - zero cost)
If you have the Codex CLI installed with a ChatGPT Pro subscription, OmegaLLM finds the OAuth token automatically. No .env needed.
Or copy the token file locally:
mkdir -p .auth
cp ~/.openclaw/agents/main/agent/auth-profiles.json .auth/
Option 2: .env file
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-your-key-here
Override adapter/model
OMEGA_ADAPTER=codex # codex | piai | anthropic | openai
OMEGA_MODEL=gpt-5.1-codex-mini # override model within selected provider
At least one credential source is required. Both .env and .auth/ are gitignored.
Higher-Order Semantic Functions
;; Map LLM over data
(map (lambda (text) (effect infer.op (list "Sentiment: " text)))
(list "I love this!" "This is terrible." "It's okay."))
=> ("positive" "negative" "neutral")
;; Filter with LLM predicate
(filter (lambda (code)
(effect infer.op (list "Is this code secure? yes/no: " code)))
code-samples)
;; Fold with validation
(fold-left
(lambda (acc item)
(if (valid? item) (cons item acc) acc))
(list)
  items)
Backtracking Search (amb)
;; Generate and test
(define (solve-puzzle)
(let ((a (amb 1 2 3))
(b (amb 4 5 6)))
(require (= (+ a b) 7))
(list a b)))
(solve-puzzle)
=> (1 6) ; or (2 5) or (3 4) - all valid solutions
Lazy Streams (SICP)
;; Infinite stream
(define (integers-from n)
(stream-cons n (integers-from (+ n 1))))
(define nats (integers-from 1))
(stream-take 5 nats)
=> (1 2 3 4 5)
;; Filter infinite stream
(define evens (stream-filter even? nats))
(stream-take 3 evens)
=> (2 4 6)
Advanced Documentation
Architecture Specifications
For developers who want to understand the implementation, OmegaLLM has 50+ architecture documents covering every aspect of the system:
Core Architecture
- ARCHITECTURE-EXPLANATION.md - End-to-end architecture explanation
- ARCHITECTURE-LAYERS.md - Layer boundaries and responsibilities
- ARCHITECTURE-INTEGRATION.md - Integration points and composition
Language and Semantics
- ARCHITECTURE-LANGUAGES-1.md - Language architecture, part 1
- ARCHITECTURE-LANGUAGES-2.md - Language architecture, part 2
- ARCHITECTURE-LANGUAGES-3.md - Language architecture, part 3
- ARCHITECTURE-LANGUAGES-4.md - Language architecture, part 4
- ARCHITECTURE-LANGUAGES-5.md - Language architecture, part 5
- ARCHITECTURE-LANGUAGES-6.md - Language architecture, part 6
Specs and Reference
- REFERENCE-ALGEBRA.md - Algebraic reference and primitives
- **[TRACEABILITY-MATRIX.md](docs/TRA
