clarissa v1.4.1
An AI-powered terminal assistant with tool execution capabilities
Clarissa is a command-line AI agent built with Bun and Ink. It supports multiple LLM providers including cloud services like OpenRouter, OpenAI, and Anthropic, as well as local inference via Apple Intelligence, LM Studio, and local GGUF models. The agent can execute tools, manage files, run shell commands, and integrate with external services via the Model Context Protocol (MCP).
Features
- Multi-provider support - Use cloud providers (OpenRouter, OpenAI, Anthropic) or run completely offline with local models
- Apple Intelligence - On-device AI using Apple Foundation Models with full tool calling support (macOS 26+)
- Local model inference - Run GGUF models locally via LM Studio or node-llama-cpp with GPU acceleration
- Streaming responses - Real-time token streaming for responsive conversations
- Built-in tools - File operations, Git integration, shell commands, web fetching, and more
- MCP integration - Connect to external MCP servers to extend functionality
- Session management - Auto-save on exit, quick resume with /last, and named sessions
- Memory persistence - Remember facts across sessions with /remember and /memories
- Context management - Automatic token tracking and context truncation
- Tool confirmation - Approve or reject potentially dangerous operations
- One-shot mode - Run single commands directly from your shell with query history
- Auto-updates - Get notified of new versions and upgrade easily with clarissa upgrade
How It Works
Clarissa implements the ReAct (Reasoning + Acting) agent pattern, where an LLM reasons about tasks and takes actions through tool execution in an iterative loop.
Architecture Overview
```mermaid
flowchart LR
    subgraph Input
        A[User Message]
        B[Piped Content]
    end
    subgraph Clarissa
        C[Agent Loop]
        D[LLM Client]
        E[Tool Registry]
        F[Context Manager]
    end
    subgraph Providers
        G[Cloud: OpenRouter / OpenAI / Anthropic]
        H[Local: Apple AI / LM Studio / GGUF]
    end
    subgraph External
        I[MCP Servers]
    end
    A --> C
    B --> C
    C <--> D
    C <--> E
    C <--> F
    D <--> G
    D <--> H
    E <-.-> I
```

The system connects your terminal to various LLM providers. When you ask Clarissa to perform a task, it:
1. Sends your message to the LLM along with available tool definitions
2. Receives a response that may include tool calls (e.g., read a file, run a command)
3. Executes the requested tools and feeds results back to the LLM
4. Repeats until the LLM provides a final answer
The ReAct Loop
```mermaid
flowchart TD
    A[User Input] --> B[Add to Conversation]
    B --> C[Send to LLM]
    C --> D{Response Type?}
    D -->|Tool Calls| E[Execute Tools]
    E --> F[Add Results to History]
    F --> C
    D -->|Final Answer| G[Display Response]
```

This loop continues until the LLM responds without requesting any tools, indicating it has completed the task. A maximum iteration limit prevents infinite loops.
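The loop can be sketched in TypeScript. This is an illustrative sketch, not Clarissa's actual implementation: the `Message` and `LLMResponse` shapes and the `callLLM`/`runTool` callbacks are assumptions.

```typescript
type Message = { role: "user" | "assistant" | "tool"; content: string };
type LLMResponse =
  | { type: "final"; content: string }
  | { type: "tool_calls"; calls: { name: string; args: string }[] };

async function agentLoop(
  callLLM: (history: Message[]) => Promise<LLMResponse>,
  runTool: (name: string, args: string) => Promise<string>,
  userInput: string,
  maxIterations = 10, // mirrors the maxIterations config option
): Promise<string> {
  const history: Message[] = [{ role: "user", content: userInput }];
  for (let i = 0; i < maxIterations; i++) {
    const response = await callLLM(history);
    // No tool calls means the model has finished the task.
    if (response.type === "final") return response.content;
    for (const call of response.calls) {
      const result = await runTool(call.name, call.args);
      history.push({ role: "tool", content: result }); // feed results back
    }
  }
  throw new Error("Max iterations reached without a final answer");
}
```

A real loop would also record the assistant's tool-call messages in the history and insert a confirmation step before `runTool` for dangerous tools.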
Key Concepts
| Concept | Description |
|---------|-------------|
| Tool Confirmation | Potentially dangerous tools (file writes, shell commands) require approval before execution. Use /yolo to auto-approve. |
| Context Management | Clarissa tracks token usage and automatically truncates older messages when approaching the model's context limit. |
| Session Persistence | Conversations can be saved to ~/.clarissa/sessions/ and restored later with /save and /load. |
| Memory System | Use /remember to store facts that persist across sessions and are included in every conversation. |
| MCP Extensibility | Connect to Model Context Protocol servers to add custom tools without modifying Clarissa's code. |
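The context-management behavior described above can be sketched as a simple token-budget truncation. The ~4-characters-per-token estimate and the message shape below are assumptions for illustration; Clarissa's actual accounting may differ.

```typescript
type Message = { role: string; content: string };

// Rough heuristic: ~4 characters per token.
const estimateTokens = (m: Message) => Math.ceil(m.content.length / 4);

// Drop the oldest messages (sparing the first, which typically carries the
// system prompt and memories) until the history fits the token budget.
function truncateHistory(history: Message[], budget: number): Message[] {
  const kept = [...history];
  let total = kept.reduce((sum, m) => sum + estimateTokens(m), 0);
  while (total > budget && kept.length > 2) {
    total -= estimateTokens(kept[1]);
    kept.splice(1, 1); // remove the oldest non-system message
  }
  return kept;
}
```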
For detailed architecture documentation, see the Architecture Guide.
Requirements
- Bun v1.0 or later (for running from source or npm install)
- For cloud providers: API key for OpenRouter, OpenAI, or Anthropic
- For Apple Intelligence: macOS 26+ with Apple Silicon and Apple Intelligence enabled
- For local models: LM Studio, or GGUF models downloaded with clarissa download
Installation
From npm (recommended)
```shell
# Using bun
bun install -g clarissa

# Using npm
npm install -g clarissa
```

From source
```shell
git clone https://github.com/cameronrye/clarissa.git
cd clarissa
bun install
bun link
```

Standalone binary
Download a pre-built binary from the releases page and add it to your PATH:
```shell
# Example for macOS ARM
chmod +x clarissa-macos-arm64
mv clarissa-macos-arm64 /usr/local/bin/clarissa
```

Configuration
Create a config file at ~/.clarissa/config.json or run clarissa init for interactive setup.
API Keys (Cloud Providers)
Set one or more API keys for cloud providers:
```shell
# Environment variables
export OPENROUTER_API_KEY=your_key_here
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
```

Or in ~/.clarissa/config.json:
```json
{
  "apiKey": "your_openrouter_key",
  "openaiApiKey": "your_openai_key",
  "anthropicApiKey": "your_anthropic_key"
}
```

Local Providers (No API Key Required)
- Apple Intelligence: Automatically detected on macOS 26+ with Apple Intelligence enabled
- LM Studio: Start LM Studio and load a model - Clarissa auto-detects the local server
- Local GGUF: Download models with clarissa download and run offline
Configuration Options
| Config Key | Env Variable | Default | Description |
|------------|--------------|---------|-------------|
| apiKey | OPENROUTER_API_KEY | - | OpenRouter API key |
| openaiApiKey | OPENAI_API_KEY | - | OpenAI API key |
| anthropicApiKey | ANTHROPIC_API_KEY | - | Anthropic API key |
| model | - | (auto) | Preferred model |
| preferredProvider | - | (auto) | Preferred provider ID |
| maxIterations | MAX_ITERATIONS | 10 | Maximum tool execution iterations |
| debug | DEBUG | false | Enable debug logging |
| mcpServers | - | {} | MCP servers to auto-load (see below) |
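Combining several of these keys, a ~/.clarissa/config.json might look like this (the values are illustrative, not recommendations):

```json
{
  "openaiApiKey": "your_openai_key",
  "model": "gpt-4o",
  "maxIterations": 15,
  "debug": true
}
```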
MCP Server Configuration
Add MCP servers to your config file to auto-load them on startup:
```json
{
  "apiKey": "your_api_key_here",
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "your_token" }
    }
  }
}
```

Use /mcp to view connected servers and /tools to see available tools.
Usage
Interactive Mode
Start Clarissa in interactive mode:
```shell
clarissa
```

One-Shot Mode
Run a single command and exit:
```shell
clarissa "What files are in this directory?"
```

Piped Input
Pipe content from other commands:
```shell
cat error.log | clarissa "Explain this error"
git diff | clarissa "Write a commit message for these changes"
```

CLI Options
| Option | Description |
|--------|-------------|
| -m, --model MODEL | Use a specific model |
| --list-models | List available models |
| --debug | Enable debug output |
| -h, --help | Show help |
| -v, --version | Show version |
Commands
| Command | Description |
|---------|-------------|
| /help | Show available commands |
| /clear | Clear conversation history |
| /save [NAME] | Save current session |
| /sessions | List saved sessions |
| /load ID | Load a saved session |
| /last | Resume last session |
| /delete ID | Delete a saved session |
| /remember <fact> | Save a memory |
| /memories | List saved memories |
| /forget <#\|ID> | Forget a memory |
| /model [NAME] | Show or switch the current model |
| /provider [ID] | Show or switch the LLM provider |
| /mcp | Show connected MCP servers |
| /tools | List available tools |
| /context | Show context window usage and breakdown |
| /yolo | Toggle auto-approve mode (skip tool confirmations) |
| /exit | Exit Clarissa |
Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| Ctrl+C | Cancel current operation / Exit |
| Ctrl+P | Enhance prompt with AI |
| Up/Down | Navigate input history |
Built-in Tools
File Operations
- read_file - Read file contents
- write_file - Write or create files
- patch_file - Apply patches to files
- list_directory - List directory contents
- search_files - Search for files by pattern
Git Integration
- git_status - Show repository status
- git_diff - Show changes
- git_log - View commit history
- git_add - Stage files
- git_commit - Commit changes
- git_branch - Manage branches
System
- bash - Execute shell commands
- calculator - Perform calculations
Web
- web_fetch - Fetch and parse web pages
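Tools like these are typically described to the LLM with a JSON-Schema parameter spec, the convention most providers use for function calling. The `ToolDef` interface below is a hypothetical shape for illustration, not Clarissa's actual registry API:

```typescript
// Hypothetical tool definition; Clarissa's internal Tool interface may differ.
interface ToolDef {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the arguments
  execute: (args: Record<string, unknown>) => Promise<string>;
}

const calculator: ToolDef = {
  name: "calculator",
  description: "Perform calculations",
  parameters: {
    type: "object",
    properties: { expression: { type: "string" } },
    required: ["expression"],
  },
  // Only simple "a op b" arithmetic for this sketch; a real implementation
  // would use a proper expression parser rather than eval.
  execute: async ({ expression }) => {
    const m = /^\s*(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)\s*$/.exec(
      String(expression),
    );
    if (!m) throw new Error("unsupported expression");
    const x = Number(m[1]);
    const y = Number(m[3]);
    const ops: Record<string, number> = {
      "+": x + y,
      "-": x - y,
      "*": x * y,
      "/": x / y,
    };
    return String(ops[m[2]]);
  },
};
```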
MCP Integration
Connect to Model Context Protocol servers to extend Clarissa with additional tools:
```shell
/mcp npx -y @modelcontextprotocol/server-filesystem /path/to/directory
```

Development
Run with hot reloading:
```shell
bun run dev
```

Run tests:

```shell
bun test
```

Building Binaries
Build for your current platform:

```shell
bun run build:current
```

Build for all platforms:

```shell
bun run build:all
```

Binaries are output to the dist/ directory.
Publishing to npm

```shell
npm publish
```

Project Structure
```
src/
  index.tsx    # Entry point
  agent.ts     # ReAct agent loop implementation
  config/      # Environment configuration
  llm/         # LLM client and context management
  mcp/         # MCP client integration
  session/     # Session persistence
  tools/       # Tool definitions
  ui/          # Ink UI components
```

License
MIT
Made with ❤️ by Cameron Rye
