@xalia/agent v0.6.12
# Xalia Agent

A TypeScript-based AI agent system with MCP (Model Context Protocol) support, multi-user chat capabilities, and extensive tool integration.
## Overview
This agent provides two primary interfaces:
- Agent Mode: Single-user conversational AI with tool support
- Chat Mode: Multi-user chat server with shared AI agent sessions
## Quick Start

### Setup

```sh
# From project root
yarn install
yarn workspaces run build
```

### Basic Usage

An example of running the agent can be found in the test script.
## Architecture
For a detailed explanation of the context system architecture, including how context is formed, compressed, and managed during agent execution, see:
📚 Context System Architecture Documentation
This document covers:
- Context window structure and composition
- System prompt fragment injection
- Automatic context compression
- Session file handling
- Transaction-based message flow
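The general idea behind automatic context compression can be sketched as follows. This is an illustrative assumption about the technique, not the package's actual implementation (see the architecture documentation for that); all names here are hypothetical:

```typescript
// Hypothetical sketch: when the estimated token count exceeds a budget,
// drop the oldest non-system messages. The real strategy may instead
// summarize or compress rather than drop.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude token estimate: roughly 4 characters per token.
function estimateTokens(messages: Message[]): number {
  return messages.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);
}

function compressContext(messages: Message[], budget: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  // Keep the system prompt and the most recent turns within budget.
  while (estimateTokens([...system, ...rest]) > budget && rest.length > 1) {
    rest.shift(); // drop the oldest non-system message
  }
  return [...system, ...rest];
}
```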
## Core Components

### Agent (`src/agent/`)
- `Agent`: Main orchestrator class managing conversations, tools, and LLM interactions
- `McpServerManager`: Manages MCP tool servers; enables and disables tools dynamically
- `SkillManager`: Interfaces with the SudoMCP backend to discover and connect to hosted MCP servers
- LLM implementations:
  - `OpenAILLM`: Standard OpenAI API integration
  - `OpenAILLMStreaming`: Streaming response support
  - `DummyLLM`: Mock implementation for testing
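The LLM implementations above suggest a common interface with swappable backends. A minimal sketch of that pattern, in the spirit of `DummyLLM` (the interface shape and names here are assumptions, not the package's actual API):

```typescript
// Illustrative LLM abstraction; names are hypothetical.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LLM {
  complete(messages: ChatMessage[]): Promise<string>;
}

// A DummyLLM-style mock: returns scripted responses in order,
// so tests can run without a live OpenAI-compatible endpoint.
class ScriptedLLM implements LLM {
  private turn = 0;
  constructor(private responses: string[]) {}

  async complete(_messages: ChatMessage[]): Promise<string> {
    const reply =
      this.responses[Math.min(this.turn, this.responses.length - 1)];
    this.turn++;
    return reply;
  }
}

async function demo(): Promise<string[]> {
  const llm: LLM = new ScriptedLLM(["Hello!", "15 * 23 = 345"]);
  const a = await llm.complete([{ role: "user", content: "hi" }]);
  const b = await llm.complete([{ role: "user", content: "calc" }]);
  return [a, b];
}
```

A mock like this is what makes the test suite runnable without API keys.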
### Chat System (`src/chat/`)

- `ChatClient`/`runServer`: WebSocket-based real-time communication
- `ConversationManager`: Orchestrates multi-user sessions with a shared AI agent
- `Database`: Supabase integration for user management, sessions, and agent profiles
- `ApiKeyManager`: Authentication and authorization
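The multi-user session orchestration can be sketched roughly as below. The class and method names are illustrative assumptions, not the real `ConversationManager` API:

```typescript
// Hypothetical session registry: multiple users join a named session
// and share one message history (which a shared agent would consume).
type Session = { users: Set<string>; messages: string[] };

class SessionRegistry {
  private sessions = new Map<string, Session>();

  join(sessionId: string, user: string): void {
    let s = this.sessions.get(sessionId);
    if (!s) {
      s = { users: new Set(), messages: [] };
      this.sessions.set(sessionId, s);
    }
    s.users.add(user);
  }

  post(sessionId: string, user: string, text: string): void {
    const s = this.sessions.get(sessionId);
    if (!s || !s.users.has(user)) throw new Error("user not in session");
    s.messages.push(`${user}: ${text}`);
  }

  history(sessionId: string): string[] {
    return this.sessions.get(sessionId)?.messages ?? [];
  }
}
```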
### CLI Tools (`src/tool/`)

- `main.ts`: Primary entry point with subcommands
- `agentMain.ts`: Single-user agent mode implementation
- `chatMain.ts`: Multi-user chat server/client implementation
## Usage Examples

### Agent Mode

Basic conversation:

```sh
cli/agent-cli
```

One-shot with a specific prompt:

```sh
echo "Explain quantum computing" > prompt.txt
cli/agent-cli -1 --prompt prompt.txt
```

With image analysis:

```sh
echo "Describe this image" > prompt.txt
cli/agent-cli --image photo.jpg --prompt prompt.txt
```

Using an agent profile:

```sh
cli/agent-cli --agent-profile agent/test_data/simplecalc_profile.json
```

Auto-approve tools:

```sh
echo "Calculate 15 * 23" > prompt.txt
cli/agent-cli --approve-tools --prompt prompt.txt
```

### Chat Mode
Start the server:

```sh
node dist/agent/src/tool/main.js chat server --port 5003
```

Connect a client:

```sh
node dist/agent/src/tool/main.js chat client \
  --session "project_discussion" \
  --agent-profile "helpful_assistant"
```

Run a scripted conversation:

```sh
node dist/agent/src/tool/main.js chat client \
  --session "test" \
  --script conversation_script.txt
```

## Interactive Commands (Agent Mode)
While in agent mode, use these commands:
**Tool Management:**

- `/ls` - List available MCP servers
- `/lt` - List current tools (enabled tools are marked with `*`)
- `/as <server>` - Add and enable all tools from a server
- `/e <server> <tool>` - Enable a specific tool
- `/d <server> <tool>` - Disable a specific tool

**Media and Data:**

- `:i image.jpg` - Include an image with the next message
- `/wc conversation.json` - Save the conversation to a file
- `/wa profile.json` - Save the agent profile to a file

**General:**

- `/h` - Show the help menu
- `/q` - Quit
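Input dispatch for a command set like this is straightforward: a leading `/` marks a command, a leading `:i ` marks media, and everything else is a chat message. A hedged sketch (the parser below is illustrative, not the CLI's actual implementation):

```typescript
// Hypothetical parser for the slash and colon commands listed above.
interface ParsedInput {
  kind: "command" | "media" | "message";
  name?: string;
  args: string[];
}

function parseInput(line: string): ParsedInput {
  const trimmed = line.trim();
  if (trimmed.startsWith("/")) {
    // "/e github create_issue" -> name "e", args ["github", "create_issue"]
    const [name, ...args] = trimmed.slice(1).split(/\s+/);
    return { kind: "command", name, args };
  }
  if (trimmed.startsWith(":i ")) {
    // ":i photo.jpg" -> attach an image to the next message
    return { kind: "media", name: "i", args: [trimmed.slice(3).trim()] };
  }
  return { kind: "message", args: [trimmed] };
}
```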
## Configuration

### Environment Variables

```sh
# LLM Configuration
LLM_URL=http://localhost:5001/v1     # LLM API endpoint
LLM_API_KEY=your_openai_key          # API key for LLM
LLM_MODEL=gpt-4o                     # Model name

# SudoMCP Integration
XMCP_URL=http://localhost:5001/      # SudoMCP backend URL
API_KEY=your_sudomcp_key             # SudoMCP API key

# Database (Chat Mode)
SUPABASE_URL=http://127.0.0.1:54321  # Supabase URL
SUPABASE_KEY=your_supabase_key       # Supabase service key

# Chat Server
CHAT_SERVER_PORT=5003                # WebSocket server port
```

### Agent Profiles
Agent profiles define the AI's behavior and available tools:
```json
{
  "model": "gpt-4o",
  "system_prompt": "You are a helpful coding assistant.",
  "mcp_settings": {
    "github": ["create_issue", "list_repos"]
  }
}
```

## Development
### Project Structure

```
src/
├── agent/                      # Core AI agent implementation
│   ├── agent.ts                # Main Agent orchestrator
│   ├── mcpServerManager.ts     # MCP tool management
│   ├── sudoMcpServerManager.ts # SudoMCP integration
│   └── *LLM.ts                 # LLM provider implementations
├── chat/                       # Multi-user chat system
│   ├── server.ts               # WebSocket chat server
│   ├── client.ts               # Chat client implementation
│   ├── db.ts                   # Database models and queries
│   └── conversationManager.ts  # Session orchestration
├── tool/                       # CLI interfaces
│   ├── main.ts                 # Primary entry point
│   ├── agentMain.ts            # Single-user mode
│   └── chatMain.ts             # Multi-user mode
└── test/                       # Test suites
```

### Testing
```sh
# Run test suite
yarn test

# Test MCP server integration (requires local backend)
yarn test -- --grep "MCP"

# Test database operations (requires Supabase)
yarn test -- --grep "DB"
```

### Utility Scripts
```sh
# Git commit message generation
./scripts/git_message

# PR description generation
./scripts/pr_message

# Code review assistance
./scripts/pr_review

# Multi-user chat testing
./scripts/test_chat
```

## Advanced Features
### Dummy LLM for Testing

Use mock responses for development:

```sh
cli/agent \
  --agent-profile test_data/test_script_profile.json \
  --prompt "Test prompt"
```

### Conversation Restoration
Save and restore conversation state:

```sh
# Save conversation
cli/agent agent --conversation-output saved_conversation.json

# Restore conversation
cli/agent --conversation saved_conversation.json
```
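Conceptually, save/restore is a JSON round-trip of the message history. A minimal sketch, assuming a simple turn-list format (the CLI's actual on-disk schema is not documented here, so this shape is an assumption):

```typescript
// Hypothetical conversation file format: a version tag plus a turn list.
interface Turn {
  role: "user" | "assistant";
  content: string;
}

function saveConversation(turns: Turn[]): string {
  return JSON.stringify({ version: 1, turns }, null, 2);
}

function restoreConversation(json: string): Turn[] {
  const data = JSON.parse(json);
  if (!Array.isArray(data.turns)) throw new Error("invalid conversation file");
  return data.turns;
}
```

The round-trip property (`restoreConversation(saveConversation(t))` equals `t`) is what lets a session be resumed exactly where it left off.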