Xyne CLI
A powerful AI assistant in your terminal with file operations, bash mode, drag-and-drop support, and multi-provider AI integration

Xyne CLI brings the power of modern AI assistants directly to your terminal with an intuitive interface, rich file handling capabilities, and seamless integration with multiple AI providers.
✨ Features
🎯 Core Capabilities
- Interactive Chat Interface - Beautiful terminal UI powered by Ink
- Multi-Provider AI Support - Works with Vertex AI and LiteLLM
- File Operations - Read, write, edit, and search files with AI assistance
- Conversation Management - Save, load, and resume conversations
- Smart Context Management - Automatic conversation compacting when context limits are reached
🛠️ Advanced Features
- Bash Mode - Execute shell commands with the `!` prefix
- File Attachments - Drag & drop files or use `@filename` syntax
- Long Paste Support - Intelligent handling of large text pastes
- Word Navigation - Option+Arrow keys for word jumping and deletion
- Command System - Built-in `/help`, `/clear`, `/export` and more
- MCP Integration - Model Context Protocol for extensible tools
📁 File Handling
- Drag & Drop Support - Drop files directly into the terminal
- File Type Detection - Automatic detection of images, PDFs, code, and text files
- Multiple Format Support - Images (PNG, JPG, GIF, WebP), PDFs, and text files up to 25MB
- Smart File Paths - Use `@filepath` to reference files in conversations
⌨️ Productivity Features
- Bash Mode - Type `!` to execute shell commands directly
- Input History - Use ↑/↓ arrows to navigate through previous inputs
- File Browser - Type `@` to browse and select files interactively
- Message Queue - Queue messages while the AI is processing
- Interruption Support - Double ESC to interrupt AI processing
📦 Installation
Global Installation (Recommended)
```bash
npm install -g @xyne/xyne-cli
```
Verify Installation
```bash
xyne --version
```
Quick Start
```bash
# Start interactive session
xyne

# Start with debug logging
xyne --debug

# Load a previous conversation
xyne --load=path/to/conversation.json

# Show help
xyne --help

# Perform an update
xyne update
```
🔧 Configuration
AI Provider Setup
Vertex AI (Default)
```bash
# Set up Vertex AI (requires Google Cloud SDK)
export VERTEX_PROJECT_ID="your-project-id"
export VERTEX_REGION="us-east5"
export VERTEX_MODEL="claude-sonnet-4@20250514"
export VERTEX=1
```
Note: Vertex AI is the default provider. If no environment variables are set, the system will use:
- Project ID: `dev-ai-epsilon` (default)
- Region: `us-east5` (default)
- Model: `claude-sonnet-4@20250514` (default)

Supported Vertex AI Models:
- `claude-sonnet-4@20250514` - Claude Sonnet 4 with thinking capabilities
- `gemini-2.5-pro` - Google Gemini 2.5 Pro with enhanced reasoning
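Switching between the two supported models only requires changing `VERTEX_MODEL`; a sketch using the same variables as above (the project ID is a placeholder, not a real project):

```shell
# Same Vertex AI setup, pointed at the Gemini model listed above
export VERTEX_PROJECT_ID="your-project-id"   # placeholder
export VERTEX_REGION="us-east5"
export VERTEX_MODEL="gemini-2.5-pro"
export VERTEX=1
```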
LiteLLM (Alternative)
```bash
# Set up LiteLLM for other providers
export LITE_LLM_API_KEY="your-api-key"
export LITE_LLM_URL="https://api.openai.com/v1"
export LITE_LLM_MODEL="gpt-4"
export LITE_LLM=1
```
Supported LiteLLM Models:
- Hugging Face: `glm-45-fp8`, `glm-46-fp8`
- Anthropic: `claude-sonnet-4`, `claude-sonnet-4-20250514`, `claude-sonnet-4-5`
- Google: `gemini-2.5-pro`
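LiteLLM is often run as a local proxy rather than hit at a vendor URL directly; a hypothetical setup pointing the same variables at a proxy on localhost (the port, key, and model name are assumptions, not defaults of this tool):

```shell
# Point Xyne at a locally running LiteLLM proxy (hypothetical endpoint)
export LITE_LLM_API_KEY="sk-local-anything"
export LITE_LLM_URL="http://localhost:4000"
export LITE_LLM_MODEL="claude-sonnet-4"
export LITE_LLM=1
```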
MCP Servers
```bash
# Add command-based MCP servers
xyne mcp add context7 sh -c "npx -y @upstash/context7-mcp 2>/dev/null"
xyne mcp add filesystem npx @modelcontextprotocol/server-filesystem /path/to/directory

# Add HTTP transport MCP server
xyne mcp add deepwiki --transport=http --url=https://mcp.deepwiki.com/mcp

# Add with environment variables
xyne mcp add myserver npx my-mcp-server --env=API_KEY=secret --env=DEBUG=true

# Add from JSON configuration
xyne mcp add-json github '{"command":"docker","args":["run","-i","--rm","ghcr.io/github/github-mcp-server"]}'

# Add to global configuration
xyne mcp add context7 sh -c "npx -y @upstash/context7-mcp 2>/dev/null" --global

# List configured MCP servers
xyne mcp list

# Get details about a server
xyne mcp get filesystem

# Remove an MCP server
xyne mcp remove filesystem
```
🚀 Usage
Interactive Chat
```bash
# Start a conversation
xyne
> Hello! How can I help you today?

# Ask questions
> What files are in my current directory?

# Get help with commands
> /help
```
One-Shot Prompts
```bash
# Basic prompt
xyne prompt "What is the capital of France?"

# Prompt with a custom system prompt
xyne prompt "Help me code" --system-prompt="You are a senior software engineer"
xyne prompt "Analyze this" --system="You are concise and direct"

# Prompt with specific tools only
xyne prompt "Read and analyze files" --tools=read,grep,ls
xyne prompt "File operations only" --tools="read,write,edit"

# Combine system prompt and tools
xyne prompt "Help me debug" --system="You are helpful" --tools=read,grep,bash

# Prompt from piped input
echo "Analyze this code" | xyne prompt
cat file.txt | xyne prompt "Summarize this content"

# Flexible argument order (all equivalent)
xyne prompt "Hello world" --system="Be helpful"
xyne prompt --system="Be helpful" "Hello world"
xyne prompt --tools=ls "List files" --system="Be concise"
```
Available Tools
When using `--tools`, you can specify any combination of:
- `read` - Read files
- `write` - Write files
- `edit` - Edit files
- `multiedit` - Multiple file edits
- `grep` - Search patterns in files
- `glob` - File pattern matching
- `ls` - Directory listing
- `bash` - Execute shell commands
- `todo-write` - Task management
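Combining piped input with a restricted tool set makes a read-only review step possible, e.g. in a script or CI job. A sketch (assumes `xyne` is installed; the prompt wording is illustrative):

```shell
# Review the latest diff with read-only tools, so the model can
# inspect files but cannot write files or run shell commands.
git diff HEAD~1 | xyne prompt "Review this diff for bugs" --tools=read,grep,glob,ls
```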
Bash Mode
```bash
# Execute shell commands with the ! prefix
> !ls -la
> !git status
> !npm install express

# Or use the bash command
> /bash ls -la
```
File Operations
```bash
# Reference files in conversation
> Can you read @README.md and summarize it?

# Drag and drop files (automatically detected)
> # Drop a file into the terminal
> I just attached an image, what do you see?

# Create and edit files
> Create a new Python script that calculates Fibonacci numbers
> Edit @script.py to add error handling
```
Advanced Features
```bash
# Load a previous conversation
xyne --load=conversation.json

# Export the conversation
> /export conversation.md
> /export conversation.json

# Clear the conversation
> /clear

# Enable debug mode
xyne --debug
```
Keyboard Shortcuts
- ↑/↓ Arrows - Navigate input history
- Option + ←/→ - Jump between words
- Option + Backspace/Delete - Delete words
- Ctrl + C - Interrupt (press twice to exit)
- Double ESC - Interrupt AI processing or show message selection
- `@` - Open file browser
- `!` - Enter bash mode
🎨 File Support
Supported File Types
| Type | Extensions | Max Size | Features |
|------|------------|----------|----------|
| Images | .png, .jpg, .jpeg, .gif, .webp, .svg, .bmp, .ico | 10MB | Visual analysis, OCR |
| PDFs | .pdf | 25MB | Text extraction, analysis |
| Text | .txt, .md, .json, .yaml, .yml, .html, .css, .xml | 5MB | Full content analysis |
| Code | .js, .ts, .jsx, .tsx, .py, .go, .rs, .java, .cpp, .c, .php, .rb, .swift, .kt | 5MB | Syntax highlighting, analysis |
Additional Support:
- Files without extensions (e.g., `.gitignore`, `Dockerfile`, `Makefile`)
- Common configuration files automatically detected as text
- Content-based detection for unknown file types
File Attachment Methods
- Drag & Drop - Drop files directly into the terminal
- File References - Use `@filename` or `@path/to/file`
- File Browser - Type `@` to browse and select files
- Clipboard - Pasted file paths are automatically detected
Areas for Contribution
- New AI Providers - Add support for additional AI providers
- File Handlers - Support for new file types and formats
- UI Improvements - Enhance the terminal interface
- Performance - Optimize conversation handling and file processing
- Documentation - Improve guides and examples
- Testing - Add comprehensive test coverage
Bug Reports
When reporting bugs, please include:
- Your operating system and Node.js version
- Steps to reproduce the issue
- Expected vs actual behavior
- Any error messages or logs (use the `--debug` flag)
Feature Requests
For feature requests, please:
- Search existing issues first
- Provide a clear description of the feature
- Explain the use case and benefits
- Consider contributing the feature yourself!
📚 Documentation
Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| VERTEX_PROJECT_ID | Google Cloud Project ID | dev-ai-epsilon |
| VERTEX_REGION | Vertex AI region | us-east5 |
| VERTEX_MODEL | Vertex AI model | claude-sonnet-4@20250514 |
| LITE_LLM_API_KEY | LiteLLM API key | - |
| LITE_LLM_URL | LiteLLM base URL | - |
| LITE_LLM_MODEL | LiteLLM model name | - |
| LOG_LEVEL | Logging level | off |
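The Vertex fallbacks in the table behave like standard shell default-expansion: the environment value wins when set, otherwise the documented default applies. A sketch reproducing that resolution order (the wrapper is illustrative, not part of the CLI):

```shell
# Use the environment value when set, otherwise the documented default.
VERTEX_PROJECT_ID="${VERTEX_PROJECT_ID:-dev-ai-epsilon}"
VERTEX_REGION="${VERTEX_REGION:-us-east5}"
VERTEX_MODEL="${VERTEX_MODEL:-claude-sonnet-4@20250514}"
echo "$VERTEX_PROJECT_ID $VERTEX_REGION $VERTEX_MODEL"
```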
Interactive Commands
| Command | Description |
|---------|-------------|
| /help | Show available commands |
| /clear | Clear conversation history |
| /export | Export conversation |
| /mcp | List MCP servers |
| /exit | Exit the application |
CLI Commands
| Command | Description |
|---------|-------------|
| xyne | Start interactive chat |
| xyne prompt <text> | Execute one-shot prompt |
| xyne mcp add <name> [options...] | Add MCP server |
| xyne mcp remove <name> | Remove MCP server |
| xyne mcp list | List all MCP servers |
| xyne mcp get <name> | Get MCP server details |
| xyne mcp add-json <name> <json> | Add MCP server from JSON |
| xyne config | Show current configuration |
| xyne update | Update to latest version |
| xyne --help | Show help information |
| xyne --version | Show version information |
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Built with Ink for beautiful terminal UIs
- Powered by React for component architecture
- Supports Vertex AI and LiteLLM
- Inspired by the needs of developers who live in the terminal
Made with ❤️ by the Xyne Team
