Coding Agent CLI
A lightweight coding agent prototype built with Bun that uses an orchestrator pattern to expose tools to an LLM via OpenRouter. The agent can read, write, search, and execute code while being guided by the model to accomplish tasks.
Features
- 12 Built-in Tools: Create, read, edit, delete files; list directories; search; run commands; git operations; environment variables
- LLM Integration: Uses OpenRouter API (supports free models like Mistral 7B)
- Safety Constraints: Path validation, command whitelisting, file size limits, timeouts
- Interactive & Single-task Modes: REPL for continuous interaction or one-off task execution
- TypeScript: Fully typed for better development experience
Prerequisites
- Bun (v1.0+)
- OpenRouter API Key (free tier available)
Installation
Install from npm
npm install -g @aadithya2112/pcode
After installation, the pcode command will be available globally.
Set up your API key
Get your free API key from OpenRouter.
Option 1: Global config (recommended)
mkdir -p ~/.config/pcode
echo '{"OPENROUTER_API_KEY":"your-api-key-here"}' > ~/.config/pcode/config.jsonOption 2: Environment variable
export OPENROUTER_API_KEY=your-api-key-hereAdd to your ~/.zshrc or ~/.bashrc to make it permanent.
Option 3: Local .env file
echo "OPENROUTER_API_KEY=your-api-key-here" > .envDevelopment Setup
- Clone this repository
- Install dependencies: bun install
- Copy .env.example to .env and add your OpenRouter API key
Usage
Single Task Execution
pcode "Create a TypeScript file that exports sayHello function"After completing the task, the agent will ask: "Anything else you'd like me to do?"
- Type another task to continue working
- Type "exit" or "no" to quit
Interactive Mode
pcode --interactive
Then type your tasks:
> Create a file src/hello.ts
> Read the file and show me
> exit
Project-Specific Tasks
pcode -p /path/to/project "Build the TypeScript project"
Run Example Scenario
bun run scripts/scenario.ts
How It Works
- User Input → User provides a task description
- LLM Processing → Sends task + available tools to LLM via OpenRouter
- Tool Calling → LLM decides which tools to use and with what arguments
- Tool Execution → Agent executes the tools safely and validates results
- Feedback Loop → Results sent back to LLM for next iteration
- Repeat → Until LLM finishes or max iterations reached
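A minimal sketch of that loop, with callLLM and executeTool standing in for the real LLM client and tool executor (the names and message shapes here are illustrative, not the actual code in src/orchestrator.ts):

// Hypothetical sketch of the orchestration loop.
type ToolCall = { id: string; function: { name: string; arguments: string } };
type Message = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  tool_call_id?: string;
  tool_calls?: ToolCall[];
};
type LLMReply = { content: string; tool_calls?: ToolCall[] };

async function runTask(
  task: string,
  callLLM: (messages: Message[]) => Promise<LLMReply>,           // assumed LLM client wrapper
  executeTool: (name: string, args: unknown) => Promise<string>, // assumed tool executor
  maxIterations = 10,
): Promise<string> {
  const messages: Message[] = [
    { role: "system", content: "You are a coding agent. Use the provided tools to finish the task." },
    { role: "user", content: task },
  ];

  for (let i = 0; i < maxIterations; i++) {
    const reply = await callLLM(messages);

    // No tool calls means the model considers the task complete.
    if (!reply.tool_calls?.length) return reply.content;

    messages.push({ role: "assistant", content: reply.content, tool_calls: reply.tool_calls });

    // Execute each requested tool and feed the result back for the next iteration.
    for (const call of reply.tool_calls) {
      const result = await executeTool(call.function.name, JSON.parse(call.function.arguments));
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
  }
  return "Stopped: maximum iterations reached.";
}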
Available Tools
- createFile - Create a new file with content
- readFile - Read file contents (with optional line ranges)
- editFile - Replace text in a file
- deleteFile - Delete a file
- listDirectory - List files in a directory
- searchInFiles - Search for patterns in files
- runCommand - Execute shell commands (whitelisted)
- getFileInfo - Get file metadata
- appendToFile - Append content to the end of a file
- gitStatus - Get git status
- getDiff - Get git diff
- getEnvVar - Read environment variables
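Each tool is advertised to the model as a JSON schema in the OpenAI tool-calling format. As an illustration (field values are paraphrased, not copied from src/tools/registry.ts), createFile might be described like this:

// Illustrative tool definition; the exact descriptions in the registry may differ.
const createFileTool = {
  type: "function",
  function: {
    name: "createFile",
    description: "Create a new file with the given content",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "Path of the file, relative to the project root" },
        content: { type: "string", description: "Full text content to write into the file" },
      },
      required: ["path", "content"],
    },
  },
};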
Configuration
Environment variables in .env:
- OPENROUTER_API_KEY (required) - Your OpenRouter API key
- LLM_MODEL (optional) - Model to use (default: mistralai/mistral-7b-instruct:free)
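The key can also come from ~/.config/pcode/config.json or the shell environment (see Installation above). A rough sketch of how such a lookup could be wired, assuming this precedence and file layout (the real logic lives in src/cli.ts and may differ):

import { homedir } from "node:os";
import { join } from "node:path";
import { existsSync, readFileSync } from "node:fs";

// Illustrative lookup order: environment first (including .env, which Bun loads automatically),
// then the global config file. The actual precedence in pcode may differ.
function resolveApiKey(): string | undefined {
  if (process.env.OPENROUTER_API_KEY) return process.env.OPENROUTER_API_KEY;

  const configPath = join(homedir(), ".config", "pcode", "config.json");
  if (existsSync(configPath)) {
    const config = JSON.parse(readFileSync(configPath, "utf8"));
    return config.OPENROUTER_API_KEY;
  }
  return undefined;
}

const model = process.env.LLM_MODEL ?? "mistralai/mistral-7b-instruct:free";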
Free Models on OpenRouter
- mistralai/mistral-7b-instruct:free (recommended)
- meta-llama/llama-2-7b-chat:free
- openrouter/auto
Safety Features
- Path Validation: Only allows operations within project root or /tmp
- Command Whitelisting: Only safe commands like npm, git, node, bun
- Blocked Commands: rm, sudo, chmod, reboot, etc.
- File Size Limits: 10MB max read, 100MB max write
- Timeouts: 30s default command timeout
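A simplified sketch of what such checks can look like; the constants and helper names below are illustrative, not the exact ones in src/tools/executor.ts and src/types.ts:

import { resolve } from "node:path";

const ALLOWED_COMMANDS = ["npm", "git", "node", "bun"];      // whitelist
const BLOCKED_COMMANDS = ["rm", "sudo", "chmod", "reboot"];  // always rejected
const MAX_READ_BYTES = 10 * 1024 * 1024;                     // 10MB read limit

// Reject any path that escapes the project root or /tmp.
function isPathAllowed(path: string, projectRoot: string): boolean {
  const absolute = resolve(projectRoot, path);
  return absolute.startsWith(resolve(projectRoot)) || absolute.startsWith("/tmp");
}

// Only the first word of the command is checked against the lists.
function isCommandAllowed(command: string): boolean {
  const binary = command.trim().split(/\s+/)[0];
  return ALLOWED_COMMANDS.includes(binary) && !BLOCKED_COMMANDS.includes(binary);
}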
Example Tasks
# Create a TypeScript project structure
bun run src/cli.ts "Create a TypeScript project with src/, dist/, and package.json"
# Add documentation
bun run src/cli.ts "Create a detailed README.md for my project"
# Search and modify
bun run src/cli.ts "Search for all TODOs in the codebase and show me what needs to be fixed"
# Build and test
bun run src/cli.ts "Build the project with npm and show me any errors"Project Structure
coding-agent-cli/
├── .env # API keys (GITIGNORED)
├── .env.example # Template for .env
├── README.md # This file
├── package.json
├── tsconfig.json
├── PLAN.md # Detailed implementation plan
├── src/
│ ├── types.ts # TypeScript interfaces
│ ├── cli.ts # Entry point
│ ├── orchestrator.ts # Main orchestration loop
│ ├── tools/
│ │ ├── registry.ts # Tool definitions & schemas
│ │ └── executor.ts # Tool implementations
│ └── llm/
│ └── client.ts # OpenRouter LLM client
├── scripts/
│ └── scenario.ts # Example scenario
└── tests/
    └── (future tests)
Architecture
Core Components
- Types - TypeScript interfaces for tools, LLM, execution context
- Tool Registry - JSON schemas for all 12 tools (for LLM consumption)
- Tool Executor - Implements each tool with safety constraints
- LLM Client - OpenRouter integration with tool calling
- Orchestrator - Main loop: LLM → tools → results → repeat
- CLI - Command-line interface with REPL support
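For instance, the LLM Client boils down to a request against OpenRouter's OpenAI-compatible chat completions endpoint, passing along the registry's tool schemas. A hedged sketch (the actual client in src/llm/client.ts will differ in details):

// Illustrative sketch of an OpenRouter chat completion request with tools.
async function chatWithTools(messages: unknown[], tools: unknown[]) {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: process.env.LLM_MODEL ?? "mistralai/mistral-7b-instruct:free",
      messages,
      tools,              // JSON schemas from the tool registry
      tool_choice: "auto",
    }),
  });
  if (!response.ok) throw new Error(`LLM API error: ${response.status}`);
  const data = await response.json();
  return data.choices[0].message; // may contain tool_calls
}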
Data Flow
User Task
↓
CLI (parse args, load env)
↓
Orchestrator (initialize)
↓
LLM Client (send to OpenRouter with tools)
↓
LLM Response (with tool_calls)
↓
Tool Executor (validate + execute)
↓
Tool Results
↓
LLM Client (send back for next iteration)
↓
Repeat until done
↓
Final Response → User
Development
Scripts
# Run CLI with task
bun run src/cli.ts "Your task here"
# Interactive mode
bun run src/cli.ts --interactive
# Run scenario example
bun run scripts/scenario.ts
# Type checking (built into bun)
bun check
Adding New Tools
- Define in src/types.ts (TOOL_NAMES enum)
- Add schema to src/tools/registry.ts
- Implement in src/tools/executor.ts
- Add to allowed commands if needed
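As an illustration of those steps, a hypothetical countLines tool could be wired in roughly like this (the names are made up; follow the existing patterns in the registry and executor):

// 1. Schema added to src/tools/registry.ts so the LLM can see the tool.
const countLinesTool = {
  type: "function",
  function: {
    name: "countLines",
    description: "Count the number of lines in a file",
    parameters: {
      type: "object",
      properties: { path: { type: "string", description: "File to count lines in" } },
      required: ["path"],
    },
  },
};

// 2. Implementation added to src/tools/executor.ts, reusing the same path validation.
async function countLines(args: { path: string }): Promise<string> {
  const text = await Bun.file(args.path).text();
  return String(text.split("\n").length);
}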
Limitations & Future Work
Current Limitations
- Single-threaded execution (no parallel tool calls yet)
- In-memory conversation history only
- No RAG for large codebases
- Basic error messages
- No input confirmation for destructive operations
Future Enhancements
- Persistent conversation history (SQLite)
- RAG for codebase context
- Multi-tool parallelization
- Streaming LLM responses
- Better prompt engineering
- Support for multiple LLM providers
- Tool result caching
Troubleshooting
"OPENROUTER_API_KEY not set"
Make sure you've created a .env file with your API key:
cp .env.example .env
# Edit .env and add your key"Command not allowed"
The command you tried to run is blocked for safety. Check src/types.ts for allowed commands.
"Path outside allowed directories"
File operations are restricted to:
- Your project root
- /tmp directory
"LLM API error"
Check:
- Your API key is valid
- You have free credits on OpenRouter
- Your internet connection is working
License
MIT
Resources
- OpenRouter - LLM provider
- Bun - Runtime
- OpenAI API Docs - Tool calling spec
