@lxtdev/lxt-coding-agent
v0.52.12
Published
Coding agent CLI with read, bash, edit, write tools and session management
Readme
🏖️ OSS Vacation
Issue tracker and PRs reopen February 16, 2026.
All PRs will be auto-closed until then. Approved contributors can submit PRs after vacation without reapproval. For support, join Discord.
LxT is a minimal terminal coding harness. Adapt lxt to your workflows, not the other way around, without having to fork and modify lxt internals. Extend it with TypeScript Extensions, Skills, Prompt Templates, and Themes. Put your extensions, skills, prompt templates, and themes in LxT Packages and share them with others via npm or git.
LxT ships with powerful defaults and keeps advanced workflows extensible. Use built-in managed subagents, or extend further with third-party packages and custom extensions.
LxT runs in four modes: interactive, print/JSON output, RPC for process integration, and an SDK for embedding in your own apps. See openclaw/openclaw for a real-world SDK integration.
Table of Contents
- Quick Start
- Providers & Models
- Interactive Mode
- Sessions
- Settings
- Context Files
- Customization
- Programmatic Usage
- Philosophy
- CLI Reference
Quick Start
npm install -g @lxtdev/lxt-coding-agent
Authenticate with an API key:
export ANTHROPIC_API_KEY=sk-ant-...
lxt
Or use your existing subscription:
lxt
/login    # Then select provider
Then just talk to lxt. By default, lxt gives the model four tools: read, write, edit, and bash. The model uses these to fulfill your requests. Add capabilities via skills, prompt templates, extensions, or lxt packages.
Platform notes: Windows | Termux (Android) | Terminal setup | Shell aliases
Providers & Models
For each built-in provider, lxt maintains a list of tool-capable models, updated with every release. Authenticate via subscription (/login) or API key, then select any model from that provider via /model (or Ctrl+L).
Subscriptions:
- Anthropic Claude Pro/Max
- OpenAI ChatGPT Plus/Pro (Codex)
- GitHub Copilot
- Google Gemini CLI
- Google Antigravity
API keys:
- Anthropic
- OpenAI
- Azure OpenAI
- Google Gemini
- Google Vertex
- Amazon Bedrock
- Mistral
- Groq
- Cerebras
- xAI
- OpenRouter
- Vercel AI Gateway
- ZAI
- OpenCode Zen
- Hugging Face
- Kimi For Coding
- MiniMax
See docs/providers.md for detailed setup instructions.
Custom providers & models: Add providers via ~/.lxt/agent/models.json if they speak a supported API (OpenAI, Anthropic, Google). For custom APIs or OAuth, use extensions. See docs/models.md and docs/custom-provider.md.
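As a rough illustration of what a custom provider entry in ~/.lxt/agent/models.json could look like (the field names and structure here are assumptions for illustration; the authoritative schema is in docs/models.md):
```json
{
  "providers": {
    "my-llm": {
      "baseUrl": "https://llm.example.com/v1",
      "api": "openai",
      "apiKeyEnv": "MY_LLM_API_KEY",
      "models": [
        { "id": "my-model-large", "contextWindow": 128000, "tools": true }
      ]
    }
  }
}
```
The key idea is that any endpoint speaking a supported API shape (OpenAI, Anthropic, Google) can be registered declaratively; only custom APIs or OAuth flows require an extension.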
Interactive Mode
The interface from top to bottom:
- Startup header - Shows shortcuts (/hotkeys for all), loaded AGENTS.md files, prompt templates, skills, and extensions
- Messages - Your messages, assistant responses, tool calls and results, notifications, errors, and extension UI
- Editor - Where you type; border color indicates thinking level
- Footer - Working directory, session name, total token/cache usage, cost, context usage, current model
The editor can be temporarily replaced by other UI, like built-in /settings or custom UI from extensions (e.g., a Q&A tool that lets the user answer model questions in a structured format). Extensions can also replace the editor, add widgets above/below it, a status line, custom footer, or overlays.
Editor
| Feature | How |
|---------|-----|
| File reference | Type @ to fuzzy-search project files |
| Path completion | Tab to complete paths |
| Multi-line | Shift+Enter (or Ctrl+Enter on Windows Terminal) |
| Images | Ctrl+V to paste, or drag onto terminal |
| Bash commands | !command runs and sends output to LLM, !!command runs without sending |
Standard editing keybindings for delete word, undo, etc. See docs/keybindings.md.
Commands
Type / in the editor to trigger commands. Extensions can register custom commands, skills are available as /skill:name, and prompt templates expand via /templatename.
| Command | Description |
|---------|-------------|
| /login, /logout | OAuth authentication |
| /model | Switch models |
| /scoped-models | Enable/disable models for Ctrl+P cycling |
| /settings | Thinking level, theme, message delivery |
| /resume | Pick from previous sessions |
| /new | Start a new session |
| /name <name> | Set session display name |
| /session | Show session info (path, tokens, cost) |
| /tree | Jump to any point in the session and continue from there |
| /fork | Create a new session from the current branch |
| /compact [prompt] | Manually compact context, optional custom instructions |
| /copy | Copy last assistant message to clipboard |
| /export [file] | Export session to HTML file |
| /share | Upload as private GitHub gist with shareable HTML link |
| /reload | Reload extensions, skills, prompts, context files (themes hot-reload automatically) |
| /hotkeys | Show all keyboard shortcuts |
| /changelog | Display version history |
| /quit, /exit | Quit lxt |
Keyboard Shortcuts
See /hotkeys for the full list. Customize via ~/.lxt/agent/keybindings.json. See docs/keybindings.md.
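A sketch of what a ~/.lxt/agent/keybindings.json override might look like, assuming a flat action-to-key map (the real action names and file format are defined in docs/keybindings.md; these entries are illustrative only):
```json
{
  "editor.clear": "ctrl+c",
  "model.select": "ctrl+l",
  "thinking.cycle": "shift+tab"
}
```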
Commonly used:
| Key | Action |
|-----|--------|
| Ctrl+C | Clear editor |
| Ctrl+C twice | Quit |
| Escape | Cancel/abort |
| Escape twice | Open /tree |
| Ctrl+L | Open model selector |
| Ctrl+P / Shift+Ctrl+P | Cycle scoped models forward/backward |
| Shift+Tab | Cycle thinking level |
| Ctrl+O | Collapse/expand tool output |
| Ctrl+T | Collapse/expand thinking blocks |
Message Queue
Submit messages while the agent is working:
- Enter queues a steering message, delivered after current tool execution (interrupts remaining tools)
- Alt+Enter queues a follow-up message, delivered only after the agent finishes all work
- Escape aborts and restores queued messages to editor
- Alt+Up retrieves queued messages back to editor
Configure delivery in settings: steeringMode and followUpMode can be "one-at-a-time" (default, waits for response) or "all" (delivers all queued at once).
Sessions
Sessions are stored as JSONL files with a tree structure. Each entry has an id and parentId, enabling in-place branching without creating new files. See docs/session.md for file format.
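For intuition, the tree structure might look like this (illustrative entries; the exact field set is specified in docs/session.md). Note that the second and third entries share a parent, forming two branches in one file:
```json
{"id": "a1", "parentId": null, "type": "user", "text": "Refactor utils.ts"}
{"id": "b2", "parentId": "a1", "type": "assistant", "text": "..."}
{"id": "c3", "parentId": "a1", "type": "user", "text": "Actually, add tests first"}
```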
Management
Sessions auto-save to ~/.lxt/agent/sessions/ organized by working directory.
lxt -c # Continue most recent session
lxt -r # Browse and select from past sessions
lxt --no-session # Ephemeral mode (don't save)
lxt --session <path>  # Use specific session file or ID
Branching
/tree - Navigate the session tree in-place. Select any previous point, continue from there, and switch between branches. All history preserved in a single file.
- Search by typing, page with ←/→
- Filter modes (Ctrl+O): default → no-tools → user-only → labeled-only → all
- Press l to label entries as bookmarks
/fork - Create a new session file from the current branch. Opens a selector, copies history up to the selected point, and places that message in the editor for modification.
Compaction
Long sessions can exhaust context windows. Compaction summarizes older messages while keeping recent ones.
Manual: /compact or /compact <custom instructions>
Automatic: Enabled by default. Triggers on context overflow (recovers and retries) or when approaching the limit (proactive). Configure via /settings or settings.json.
Compaction is lossy. The full history remains in the JSONL file; use /tree to revisit. Customize compaction behavior via extensions. See docs/compaction.md for internals.
Settings
Use /settings to modify common options, or edit JSON files directly:
| Location | Scope |
|----------|-------|
| ~/.lxt/agent/settings.json | Global (all projects) |
| .lxt/settings.json | Project (overrides global) |
See docs/settings.md for all options.
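A small example settings.json is sketched below. The steeringMode and followUpMode values come from the Message Queue section above; the theme and compaction keys are assumptions for illustration (see docs/settings.md for the real option names):
```json
{
  "theme": "dark",
  "steeringMode": "one-at-a-time",
  "followUpMode": "all",
  "compaction": { "auto": true }
}
```
A project-local .lxt/settings.json with the same shape would override these values for that project only.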
Context Files
LxT loads AGENTS.md (or CLAUDE.md) at startup from:
- ~/.lxt/agent/AGENTS.md (global)
- Parent directories (walking up from cwd)
- Current directory
Use for project instructions, conventions, common commands. All matching files are concatenated.
System Prompt
Replace the default system prompt with .lxt/SYSTEM.md (project) or ~/.lxt/agent/SYSTEM.md (global). Append without replacing via APPEND_SYSTEM.md.
Customization
Prompt Templates
Reusable prompts as Markdown files. Type /name to expand.
<!-- ~/.lxt/agent/prompts/review.md -->
Review this code for bugs, security issues, and performance problems.
Focus on: {{focus}}
Place in ~/.lxt/agent/prompts/, .lxt/prompts/, or a lxt package to share with others. See docs/prompt-templates.md.
Skills
On-demand capability packages following the Agent Skills standard. Invoke via /skill:name or let the agent load them automatically.
<!-- ~/.lxt/agent/skills/my-skill/SKILL.md -->
# My Skill
Use this skill when the user asks about X.
## Steps
1. Do this
2. Then that
Place in ~/.lxt/agent/skills/, .lxt/skills/, or a lxt package to share with others. See docs/skills.md.
Extensions
TypeScript modules that extend lxt with custom tools, commands, keyboard shortcuts, event handlers, and UI components.
export default function (lxt: ExtensionAPI) {
lxt.registerTool({ name: "deploy", ... });
lxt.registerCommand("stats", { ... });
lxt.on("tool_call", async (event, ctx) => { ... });
}
What's possible:
- Custom tools (or replace built-in tools entirely)
- Plan mode
- Custom compaction and summarization
- Permission gates and path protection
- Custom editors and UI components
- Status lines, headers, footers
- Git checkpointing and auto-commit
- SSH and sandbox execution
- MCP server integration
- Make lxt look like Claude Code
- Games while waiting (yes, Doom runs)
- ...anything you can dream up
Place in ~/.lxt/agent/extensions/, .lxt/extensions/, or a lxt package to share with others. See docs/extensions.md and examples/extensions/.
Built-in Subagents
LxT includes a built-in subagents tool (enabled by default when extensions are enabled). It manages child agents by ID and runs them as isolated RPC workers.
| Action | Description |
|--------|-------------|
| spawn | Start one subagent (task) or multiple in parallel (tasks[]); optionally set modelProvider + modelId to override the parent model |
| send | Retask an existing subagent with delivery mode: auto, prompt, steer, follow_up, restart |
| stop | Abort current run but keep subagent alive |
| terminate | Stop worker and remove subagent from registry |
| watch | Pull incremental activity/events with cursor + maxEvents |
| wait | Wait for completion with timeout, optional progress updates (watch=true) |
| list | List all managed subagents with status and metadata |
Defaults and limits:
- Max concurrent running subagents per parent session: 4
- Recursive spawning allowed with depth cap (default 2, configured via LXT_SUBAGENT_MAX_DEPTH)
- Default wait timeout: 300000 ms (5 minutes), override with LXT_SUBAGENT_WAIT_TIMEOUT_MS
- Parent/child depth is tracked with LXT_SUBAGENT_DEPTH
- Subagents inherit parent model, thinking level, and active tools at spawn time by default
- spawn model overrides require both modelProvider and modelId, and the requested model must be configured with available auth
- Event visibility is on-demand (watch/wait), not always-on streaming
- Lifetime is session-runtime only (not persisted across restart/resume)
- Disabled by --no-extensions
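An illustrative spawn invocation, as the model might issue it. The argument shape is inferred from the action table above and the model ID is a placeholder; the exact tool schema may differ:
```json
{
  "action": "spawn",
  "tasks": [
    { "task": "Run the test suite and summarize failures" },
    { "task": "Audit package.json for outdated dependencies" }
  ],
  "modelProvider": "anthropic",
  "modelId": "example-model-id"
}
```
Because both modelProvider and modelId are given, these subagents would run on the override model rather than inheriting the parent's; omitting both would inherit the parent model, thinking level, and tools.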
Themes
Built-in: dark, light. Themes hot-reload: modify the active theme file and lxt immediately applies changes.
Place in ~/.lxt/agent/themes/, .lxt/themes/, or a lxt package to share with others. See docs/themes.md.
LxT Packages
Bundle and share extensions, skills, prompts, and themes via npm or git. Find packages on npmjs.com or Discord.
Security: LxT packages run with full system access. Extensions execute arbitrary code, and skills can instruct the model to perform any action including running executables. Review source code before installing third-party packages.
lxt install npm:@foo/lxt-tools
lxt install npm:@foo/[email protected] # pinned version
lxt install git:github.com/user/repo
lxt install git:github.com/user/repo@v1 # tag or commit
lxt install git:[email protected]:user/repo
lxt install git:[email protected]:user/repo@v1 # tag or commit
lxt install https://github.com/user/repo
lxt install https://github.com/user/repo@v1 # tag or commit
lxt install ssh://[email protected]/user/repo
lxt install ssh://[email protected]/user/repo@v1 # tag or commit
lxt remove npm:@foo/lxt-tools
lxt list
lxt update # skips pinned packages
lxt config   # enable/disable extensions, skills, prompts, themes
Packages install to ~/.lxt/agent/git/ (git) or global npm. Use -l for project-local installs (.lxt/git/, .lxt/npm/).
Create a package by adding a lxt key to package.json:
{
"name": "my-lxt-package",
"keywords": ["pi-package"],
"lxt": {
"extensions": ["./extensions"],
"skills": ["./skills"],
"prompts": ["./prompts"],
"themes": ["./themes"]
}
}
Without a lxt manifest, lxt auto-discovers from conventional directories (extensions/, skills/, prompts/, themes/).
See docs/packages.md.
Programmatic Usage
SDK
import { AuthStorage, createAgentSession, ModelRegistry, SessionManager } from "@lxtdev/lxt-coding-agent";
const authStorage = new AuthStorage();
const { session } = await createAgentSession({
  sessionManager: SessionManager.inMemory(),
  authStorage,
  modelRegistry: new ModelRegistry(authStorage),
});
await session.prompt("What files are in the current directory?");
See docs/sdk.md and examples/sdk/.
RPC Mode
For non-Node.js integrations, use RPC mode over stdin/stdout:
lxt --mode rpc
See docs/rpc.md for the protocol.
Philosophy
LxT is aggressively extensible so it doesn't have to dictate your workflow. Features that other tools bake in can be built with extensions, skills, or installed from third-party lxt packages. This keeps the core minimal while letting you shape lxt to fit how you work.
No MCP. Build CLI tools with READMEs (see Skills), or build an extension that adds MCP support. Why?
Managed sub-agents are built in. The default tool is intentionally minimal and RPC-based. If you need different orchestration semantics, replace or extend it with extensions.
No permission popups. Run in a container, or build your own confirmation flow with extensions inline with your environment and security requirements.
No plan mode. Write plans to files, or build it with extensions, or install a package.
No built-in to-dos. They confuse models. Use a TODO.md file, or build your own with extensions.
No background bash. Use tmux. Full observability, direct interaction.
Read the blog post for the full rationale.
CLI Reference
lxt [options] [@files...] [messages...]
Package Commands
lxt install <source> [-l] # Install package, -l for project-local
lxt remove <source> [-l] # Remove package
lxt update [source] # Update packages (skips pinned)
lxt list # List installed packages
lxt config            # Enable/disable package resources
Modes
| Flag | Description |
|------|-------------|
| (default) | Interactive mode |
| -p, --print | Print response and exit |
| --mode json | Output all events as JSON lines (see docs/json.md) |
| --mode rpc | RPC mode for process integration (see docs/rpc.md) |
| --export <in> [out] | Export session to HTML |
Model Options
| Option | Description |
|--------|-------------|
| --provider <name> | Provider (anthropic, openai, google, etc.) |
| --model <pattern> | Model pattern or ID (supports provider/id and optional :<thinking>) |
| --api-key <key> | API key (overrides env vars) |
| --thinking <level> | off, minimal, low, medium, high, xhigh |
| --models <patterns> | Comma-separated patterns for Ctrl+P cycling |
| --list-models [search] | List available models |
Session Options
| Option | Description |
|--------|-------------|
| -c, --continue | Continue most recent session |
| -r, --resume | Browse and select session |
| --session <path> | Use specific session file or partial UUID |
| --session-dir <dir> | Custom session storage directory |
| --no-session | Ephemeral mode (don't save) |
Tool Options
| Option | Description |
|--------|-------------|
| --tools <list> | Enable specific built-in tools (default: read,bash,edit,write) |
| --no-tools | Disable all built-in tools (extension tools still work) |
Available built-in tools: read, bash, edit, write, grep, find, ls, colgrep
Search guidance: use semantic colgrep for vague discovery, colgrep pattern/hybrid mode for exact token lookup, and grep for deterministic exact checks.
Resource Options
| Option | Description |
|--------|-------------|
| -e, --extension <source> | Load extension from path, npm, or git (repeatable) |
| --no-extensions | Disable extension discovery |
| --skill <path> | Load skill (repeatable) |
| --no-skills | Disable skill discovery |
| --prompt-template <path> | Load prompt template (repeatable) |
| --no-prompt-templates | Disable prompt template discovery |
| --theme <path> | Load theme (repeatable) |
| --no-themes | Disable theme discovery |
Combine --no-* with explicit flags to load exactly what you need, ignoring settings.json (e.g., --no-extensions -e ./my-ext.ts).
Other Options
| Option | Description |
|--------|-------------|
| --system-prompt <text> | Replace default prompt (context files and skills still appended) |
| --append-system-prompt <text> | Append to system prompt |
| --verbose | Force verbose startup |
| -h, --help | Show help |
| -v, --version | Show version |
File Arguments
Prefix files with @ to include in the message:
lxt @prompt.md "Answer this"
lxt -p @screenshot.png "What's in this image?"
lxt @code.ts @test.ts "Review these files"
Examples
# Interactive with initial prompt
lxt "List all .ts files in src/"
# Non-interactive
lxt -p "Summarize this codebase"
# Different model
lxt --provider openai --model gpt-4o "Help me refactor"
# Model with provider prefix (no --provider needed)
lxt --model openai/gpt-4o "Help me refactor"
# Model with thinking level shorthand
lxt --model sonnet:high "Solve this complex problem"
# Limit model cycling
lxt --models "claude-*,gpt-4o"
# Read-only mode
lxt --tools read,colgrep,grep,find,ls -p "Review the code"
# High thinking level
lxt --thinking high "Solve this complex problem"
Environment Variables
| Variable | Description |
|----------|-------------|
| LXT_CODING_AGENT_DIR | Override config directory (default: ~/.lxt/agent) |
| LXT_PACKAGE_DIR | Override package directory (useful for Nix/Guix where store paths tokenize poorly) |
| LXT_SKIP_VERSION_CHECK | Skip version check at startup |
| LXT_CACHE_RETENTION | Set to long for extended prompt cache (Anthropic: 1h, OpenAI: 24h) |
| VISUAL, EDITOR | External editor for Ctrl+G |
Contributing & Development
See CONTRIBUTING.md for guidelines and docs/development.md for setup, forking, and debugging.
License
MIT
See Also
- @mariozechner/pi-ai: Core LLM toolkit
- @mariozechner/pi-agent: Agent framework
- @mariozechner/pi-tui: Terminal UI components
