ollama-agent
v0.1.2
Ollama-based agent library with extensible architecture for AI-powered development assistance
Ollama Agent Library
A comprehensive TypeScript framework for building AI-powered development agents with Ollama integration. Inspired by VS Code Copilot Chat architecture, this library provides a complete foundation for creating intelligent coding assistants and automation tools.
🚀 Key Features
🤖 Advanced Agent Architecture
- Extensible Agent System: Modular base classes with specialized agent implementations
- Service-Oriented Design: Dependency injection with pluggable services
- Multiple Agent Support: Register and manage multiple specialized agents
- Agent Capabilities: Fine-grained capability system for agent specialization
🛠️ Powerful Tool System
- File Operations: Complete file system manipulation (read, write, search, replace)
- Workspace Management: Project-wide operations and context awareness
- Content Search: Advanced text search across codebases
- Tool Registry: Extensible tool registration and execution system
- Type-Safe Parameters: Full TypeScript support for tool parameters
🎯 Intelligent Intent Detection
- Rule-Based Routing: Smart classification of user requests
- Context-Aware: Intent detection considers workspace context
- Multi-Intent Support: Handle complex, multi-part requests
- Confidence Scoring: Probabilistic intent matching with fallbacks
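The confidence-scored, rule-based routing described above can be sketched as a small keyword table; the rules, scoring formula, and threshold below are illustrative assumptions, not the library's actual implementation:

```typescript
// Illustrative rule-based intent detector with confidence scoring.
// The keyword tables and 0.3 threshold are examples, not the real rules.
type IntentResult = { intent: string; confidence: number };

const RULES: Record<string, string[]> = {
  explain: ["explain", "what does", "how does"],
  fix: ["fix", "bug", "error"],
  tests: ["test", "tests", "coverage"],
  search: ["search", "find", "locate"],
};

function detectIntent(prompt: string): IntentResult {
  const text = prompt.toLowerCase();
  let best: IntentResult = { intent: "unknown", confidence: 0 };
  for (const [intent, keywords] of Object.entries(RULES)) {
    const hits = keywords.filter((k) => text.includes(k)).length;
    const confidence = hits / keywords.length; // crude score in [0, 1]
    if (confidence > best.confidence) best = { intent, confidence };
  }
  // Fall back to "unknown" when no rule scores above the threshold.
  return best.confidence >= 0.3 ? best : { intent: "unknown", confidence: best.confidence };
}
```

A real detector would also weigh workspace context, but the shape is the same: score every rule, pick the best, and fall back when confidence is low.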
💬 Conversation Management
- History Tracking: Persistent conversation state and context
- Context Preservation: Maintain workspace and file context across interactions
- Memory Management: Configurable history limits and cleanup
- Multi-Session Support: Handle concurrent conversation sessions
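The configurable history limit mentioned above amounts to trimming the oldest messages once a cap is reached; the class and method names in this sketch are illustrative, not the library's API:

```typescript
// Minimal sketch of conversation history with a configurable limit.
// Names are illustrative, not the library's actual classes.
type Message = { role: "user" | "assistant"; content: string };

class ConversationHistory {
  private messages: Message[] = [];
  constructor(private maxMessages = 50) {}

  add(message: Message): void {
    this.messages.push(message);
    // Drop the oldest messages once the configured limit is exceeded.
    if (this.messages.length > this.maxMessages) {
      this.messages.splice(0, this.messages.length - this.maxMessages);
    }
  }

  all(): readonly Message[] {
    return this.messages;
  }
}
```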
📊 Advanced Visualization & Debugging
- Real-Time Dashboard: Web-based monitoring interface
- Performance Metrics: Token usage, response times, success rates
- Interaction Timeline: Visual representation of agent activities
- Debug Console: Real-time logging and error tracking
- Export Capabilities: JSON/CSV data export for analysis
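CSV export of interaction metrics, for instance, amounts to flattening records into rows; the record shape below is an assumption for illustration, not the tracker's actual schema:

```typescript
// Illustrative CSV export for interaction metrics; field names are assumptions.
type InteractionRecord = { agent: string; tokens: number; durationMs: number };

function toCsv(records: InteractionRecord[]): string {
  const header = "agent,tokens,durationMs";
  const rows = records.map((r) => `${r.agent},${r.tokens},${r.durationMs}`);
  return [header, ...rows].join("\n");
}
```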
🏗️ Architecture Overview
Core Components
┌─────────────────────────────────────────────────────┐
│ Agent Library │
├─────────────────┬─────────────────┬─────────────────┤
│ Agents │ Platform │ Tools │
│ │ │ │
│ • Base Agent │ • Intent Router │ • File Tools │
│ • Ollama Agent │ • Model Manager │ • Search Tools │
│ • Custom Agents │ • Services │ • Content Tools │
│ │ • Context Mgmt │ • Custom Tools │
├─────────────────┼─────────────────┼─────────────────┤
│ Visualization Layer │
│ • Web Dashboard • Debug Console • Metrics │
└─────────────────────────────────────────────────────┘
Service Layer
- LogService: Structured logging with configurable levels
- ConfigurationService: Centralized configuration management
- FileService: File system operations abstraction
- WorkspaceService: Project workspace management
- AgentService: Agent lifecycle and registry management
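The service-oriented pattern behind these services can be sketched with a tiny dependency-injection container; the ServiceContainer and ILogService names below are illustrative, not the library's actual types:

```typescript
// Sketch of a service container: register once at startup, resolve by id.
// Names are illustrative, not the library's real API.
class ServiceContainer {
  private services = new Map<string, unknown>();

  register<T>(id: string, instance: T): void {
    this.services.set(id, instance);
  }

  get<T>(id: string): T {
    const service = this.services.get(id);
    if (!service) throw new Error(`Service not registered: ${id}`);
    return service as T;
  }
}

interface ILogService { info(msg: string): void; }

const container = new ServiceContainer();
container.register<ILogService>("log", { info: (msg) => console.log(msg) });
const log = container.get<ILogService>("log");
```

Consumers depend only on the interface and the id, which is what makes the services pluggable.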
Tool System
- ReadFileTool: Intelligent file reading with encoding detection
- WriteFileTool: Safe file writing with backup and validation
- SearchFilesTool: Pattern-based file discovery
- SearchContentTool: Text search across files with regex support
- ReplaceStringTool: Precise text replacement with context
🔧 Technical Specifications
Requirements
- Node.js: 18.0+ (ESM support required)
- TypeScript: 5.0+ (strict mode enabled)
- Ollama: Any compatible version with function calling support
- Memory: Minimum 4GB RAM (8GB+ recommended for large workspaces)
Compatible Models
- Primary: qwen2.5-coder:7b (recommended for code tasks)
- Alternative: qwen3:8b, llama3.1:8b, deepseek-coder:6.7b
- Function Calling: models with JSON tool calling support preferred
Performance Characteristics
- Startup Time: ~500ms (with model preloading)
- Memory Usage: ~100-200MB base + model memory
- Concurrent Requests: Supports 10+ simultaneous conversations
- Tool Execution: Sub-second response for most file operations
Quick Start
Prerequisites
Before using the library, ensure you have:
- Node.js 18+ installed
- Ollama running locally:
# Install Ollama (if not already installed)
curl -fsSL https://ollama.com/install.sh | sh
# Start Ollama server
ollama serve
# Pull a compatible model
ollama pull qwen2.5-coder:7b
# or try other models like:
# ollama pull llama2
# ollama pull codellama
Installation
pnpm install ollama-agent
Basic Usage
import { createOllamaAgentLibrary, AgentLocation } from 'ollama-agent';
// Initialize the library
const library = await createOllamaAgentLibrary({
logging: { level: 'info' },
workspace: { root: process.cwd() },
visualization: {
enabled: true,
webPort: 3001,
},
});
// Send a request to an agent
const response = await library.handleRequest('Explain TypeScript interfaces', {
location: AgentLocation.Panel,
});
console.log(response.content[0]?.content);
📖 Examples
Basic Agent Usage
import { OllamaAgent, services } from './src';
// Initialize with default configuration
const agent = new OllamaAgent({
model: 'qwen2.5-coder:7b',
baseURL: 'http://localhost:11434',
workspaceRoot: process.cwd()
});
// Simple conversation
const response = await agent.processMessage(
"Please read the package.json file and tell me about the project dependencies"
);
console.log(response);
Advanced Tool Usage
// Enable specific tools for enhanced capabilities
const agent = new OllamaAgent({
model: 'qwen2.5-coder:7b',
tools: ['read_file', 'write_file', 'search_files', 'replace_string'],
maxTokens: 8192,
temperature: 0.1
});
// Complex task with multiple tool calls
const response = await agent.processMessage(`
Please analyze the codebase structure, find all TypeScript files,
and create a summary report of the main classes and interfaces.
Save this report to 'analysis-report.md'.
`);
Service Integration
import { services, tools } from './src';
// Configure logging
services.log.configure({
level: 'debug',
enableConsole: true,
enableFile: true,
logFile: './agent.log'
});
// Workspace management
const workspace = services.workspace;
await workspace.initialize('/path/to/project');
// Custom tool registration
tools.registry.register('custom-analyzer', {
name: 'analyze_code_complexity',
description: 'Analyze code complexity metrics',
parameters: {
type: 'object',
properties: {
filePath: { type: 'string', description: 'Path to analyze' }
},
required: ['filePath']
},
handler: async (params) => {
// Custom complexity analysis logic
return { complexity: 'high', metrics: {} };
}
});
Web Dashboard
import { WebVisualization } from './src/visualization';
// Start development server with dashboard
const viz = new WebVisualization({
port: 3000,
enableMetrics: true,
enableDebugConsole: true
});
await viz.start();
console.log('Dashboard available at http://localhost:3000');
📚 API Reference
OllamaAgent Class
Constructor Options
interface OllamaAgentConfig {
model: string; // Model name (e.g., 'qwen2.5-coder:7b')
baseURL?: string; // Ollama server URL (default: localhost:11434)
workspaceRoot?: string; // Project root directory
tools?: string[]; // Enabled tools list
maxTokens?: number; // Maximum response tokens
temperature?: number; // Generation temperature (0-1)
systemPrompt?: string; // Custom system prompt
enableFunctionCalling?: boolean; // Enable tool calling (default: true)
}
Methods
- processMessage(message: string): Promise<string> - Process a user message with tool calling
- conversation(messages: Message[]): Promise<string> - Multi-turn conversation
- setSystemPrompt(prompt: string): void - Update the system prompt
- enableTool(toolName: string): void - Enable a specific tool
- disableTool(toolName: string): void - Disable a specific tool
- getAvailableTools(): string[] - List available tools
Services API
LogService
services.log.configure(options: LogConfig);
services.log.info(message: string, meta?: object);
services.log.error(message: string, error?: Error);
services.log.debug(message: string, data?: any);
ConfigurationService
services.config.set(key: string, value: any);
services.config.get(key: string, defaultValue?: any);
services.config.load(configPath: string);
services.config.save(configPath: string);
WorkspaceService
await services.workspace.initialize(rootPath: string);
services.workspace.getFiles(pattern?: string): Promise<string[]>;
services.workspace.isInWorkspace(filePath: string): boolean;
services.workspace.relativePath(filePath: string): string;
Tools API
Built-in Tools
- read_file: Read file contents with encoding detection
- write_file: Write content to file with backup
- search_files: Find files matching patterns
- search_content: Search text within files
- replace_string: Replace text with context validation
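The "context validation" behind replace_string can be sketched as refusing an edit unless the target text occurs exactly once; the function below is an illustrative sketch, not the tool's actual implementation:

```typescript
// Sketch of replace-with-validation: only edit when the target is unambiguous.
// Function name and result shape are illustrative.
function replaceStringChecked(
  source: string,
  oldText: string,
  newText: string
): { success: boolean; content: string } {
  const first = source.indexOf(oldText);
  if (first === -1) {
    return { success: false, content: source }; // target not found
  }
  if (source.indexOf(oldText, first + 1) !== -1) {
    return { success: false, content: source }; // ambiguous: multiple matches
  }
  return { success: true, content: source.replace(oldText, newText) };
}
```

Requiring a unique match is what keeps model-driven edits from silently changing the wrong occurrence.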
Custom Tool Registration
tools.registry.register(name: string, config: ToolConfig);
tools.registry.unregister(name: string);
tools.registry.get(name: string): Tool | undefined;
tools.registry.list(): string[];
🤝 Contributing
Development Setup
Fork and Clone
git clone https://github.com/yourusername/ollama-agent.git
cd ollama-agent
Install Dependencies
pnpm install
Development Mode
# Run with auto-reload
pnpm dev
# Run tests in watch mode
pnpm test --watch
# Type checking
pnpm type-check
Testing
# Run all tests
pnpm test
# Integration tests
pnpm test:integration
# Coverage report
pnpm test:coverage
Code Standards
- TypeScript: Strict mode enabled, no any types
- ESLint: Airbnb configuration with custom rules
- Prettier: Consistent code formatting
- Commits: Conventional commits format
- Testing: Minimum 80% coverage requirement
Pull Request Process
- Branch Naming: feature/description or fix/description
- Commit Messages: Follow conventional commits
- Tests: Add tests for new functionality
- Documentation: Update README and JSDoc comments
- Review: All PRs require review and passing CI
Architecture Guidelines
- Single Responsibility: Each class/function has one clear purpose
- Dependency Injection: Use service container pattern
- Error Handling: Comprehensive error handling with context
- Logging: Structured logging for debugging
- Configuration: Centralized configuration management
🗺️ Roadmap
Version 0.2.0 (Q1 2025)
Enhanced Model Support
- Support for Anthropic Claude integration
- OpenAI GPT-4 compatibility layer
- Local model switching without restart
- Model performance benchmarking
Advanced Tool System
- Plugin architecture for third-party tools
- Tool composition and chaining
- Custom tool validation schemas
- Tool execution sandboxing
Improved Visualization
- Real-time conversation monitoring
- Performance metrics dashboard
- Tool usage analytics
- Export conversation histories
Version 0.3.0 (Q2 2025)
Multi-Agent Orchestration
- Agent-to-agent communication
- Hierarchical agent structures
- Task delegation and coordination
- Collaborative problem solving
Enterprise Features
- Role-based access control
- Audit logging and compliance
- Multi-tenant workspaces
- API rate limiting and quotas
Performance Optimizations
- Streaming response support
- Connection pooling
- Caching layer for frequent requests
- Memory usage optimizations
Version 0.4.0 (Q3 2025)
Cloud Integration
- Docker containerization
- Kubernetes deployment configs
- Cloud storage backends
- Distributed execution support
Developer Experience
- VS Code extension
- CLI tool with interactive mode
- Project templates and generators
- Automated testing frameworks
Advanced AI Capabilities
- Code generation and refactoring
- Automated documentation generation
- Intelligent error detection
- Performance suggestion engine
Long-term Vision (2025+)
Ecosystem Expansion
- Marketplace for custom tools and agents
- Community plugin repository
- Integration with popular development tools
- Support for multiple programming languages
Research Integration
- Latest NLP and ML techniques
- Adaptive learning from user interactions
- Predictive assistance capabilities
- Advanced code understanding models
📄 License
MIT License - see LICENSE file for details.
🙏 Acknowledgments
- Ollama Team - For the excellent local LLM platform
- TypeScript Community - For robust tooling and ecosystem
- Contributors - Thank you to all contributors and testers
pnpm dev
This launches an interactive console with:
- Real-time agent interaction
- Performance monitoring
- Web-based visualization dashboard at http://localhost:3001
- Debug commands and metrics
Examples
The library includes comprehensive examples demonstrating various usage patterns:
Basic Usage Example
# Simple example showing basic library usage
pnpm run example:basic
Advanced Custom Agent Example
# Advanced example with custom ProjectAnalyzerAgent
pnpm run example:project-analyzer
This demonstrates:
- Creating custom agents with specialized capabilities
- Project analysis, code review, security audit
- Performance analysis and dependency review
- Interactive mode for custom queries
- Visualization dashboard integration
See the examples/ directory for:
- Sample project structure for analysis
- Custom agent implementations
- Usage patterns and best practices
- Interactive demonstrations
Project Structure
src/
├── agents/ # Agent system (base classes, implementations)
│ ├── base.ts # BaseAgent, AgentService, foundational classes
│ ├── ollama.ts # OllamaAgent implementation
│ └── index.ts # Agent exports
├── platform/ # Core services and integrations
│ ├── services.ts # LogService, FileService, WorkspaceService
│ ├── ollama.ts # OllamaClient, ModelManager
│ ├── intent.ts # Intent detection and routing
│ └── index.ts # Platform exports
├── tools/ # Tool registry and built-in tools
│ ├── index.ts # Tool implementations and registry
│ └── visualizer.ts # Visualization tools
├── types/ # TypeScript definitions
│ └── index.ts # All type definitions
├── visualization/ # Monitoring and debugging
│ ├── index.ts # Interaction tracking and analytics
│ ├── web.ts # Web dashboard server
│ └── exports.ts # Visualization exports
├── index.ts # Main library export and factory
└── dev.ts # Interactive development console
examples/ # Usage examples and demonstrations
tests/             # Test suites
Architecture
Core Components
1. Agent System (src/agents/)
Base Classes:
- BaseAgent: Abstract foundation for all agents
- AgentWithTools: Agent with tool calling capabilities
- ConversationalAgent: Agent with conversation history management
Service Layer:
- AgentService: Agent registration and management
- Intent-based agent selection and routing
2. Tool System (src/tools/)
Built-in Tools:
- ReadFileTool: Read file contents
- WriteFileTool: Write files to workspace
- ReplaceStringTool: Find and replace in files
- SearchFilesTool: Find files by glob patterns
- SearchContentTool: Search content across files
Tool Registry:
- Extensible tool registration system
- Parameter validation and execution
- Error handling and result formatting
3. Platform Services (src/platform/)
Core Services:
- LogService: Configurable logging with levels
- FileService: File system operations
- WorkspaceService: Workspace analysis and management
- ConfigurationService: Application configuration
Ollama Integration:
- OllamaClient: HTTP client for the Ollama API
- OllamaModelManager: Model discovery and management
- Streaming and tool calling support
Intent System:
- RuleBasedIntentDetector: Pattern-based intent detection
- IntentRouter: Request routing logic
- AgentSelector: Agent selection strategies
4. Visualization System (src/visualization/)
Monitoring & Analytics:
- AgentInteractionTracker: Real-time interaction tracking
- ConversationVisualizer: Flow diagrams and metrics reports
- DebugConsole: Command-line debugging interface
Web Dashboard:
- WebVisualizationServer: HTTP server for the web UI
- VisualizationManager: Unified visualization management
- Real-time updates and data export
Type System (src/types/)
Comprehensive TypeScript interfaces covering:
- Agent contracts and capabilities
- Request/response structures
- Tool definitions and parameters
- Service interfaces
- Visualization data models
Agent Capabilities & Intents
Available Agent Capabilities
enum AgentCapability {
CodeEditing = 'code-editing', // Code modification and generation
FileOperations = 'file-operations', // File system operations
ContextAnalysis = 'context-analysis', // Understanding project context
ToolCalling = 'tool-calling', // Using external tools
ConversationHistory = 'conversation-history', // Multi-turn conversations
WorkspaceAnalysis = 'workspace-analysis', // Project structure analysis
TerminalOperations = 'terminal-operations', // Command execution
Documentation = 'documentation', // Documentation generation/analysis
Testing = 'testing', // Test creation and execution
Debugging = 'debugging' // Debug assistance
}
Intent Classification System
The library automatically detects user intent and routes requests appropriately:
enum Intent {
Explain = 'explain', // "What does this code do?"
Review = 'review', // "Review this code for issues"
Tests = 'tests', // "Generate tests for this function"
Fix = 'fix', // "Fix this bug"
New = 'new', // "Create a new component"
Edit = 'edit', // "Modify this function"
Generate = 'generate', // "Generate boilerplate code"
Search = 'search', // "Find all TODO comments"
Terminal = 'terminal', // "Run the build command"
Workspace = 'workspace', // "Analyze project structure"
Unknown = 'unknown' // Fallback for unclear requests
}
Example Intent Detection:
- "Explain how this function works" →
Intent.Explain - "Find and fix the memory leak" →
Intent.Fix - "Create tests for the API endpoints" →
Intent.Tests - "Search for all TODO comments" →
Intent.Search
Configuration
Library Configuration
interface OllamaAgentLibraryConfig {
ollama: {
baseUrl: string; // Ollama server URL
timeout: number; // Request timeout
maxRetries: number; // Retry attempts
defaultModel: string; // Default model name
};
logging: {
level: 'trace' | 'debug' | 'info' | 'warn' | 'error';
};
workspace?: {
root: string; // Workspace root directory
};
visualization?: {
enabled: boolean; // Enable visualization
webPort?: number; // Web dashboard port
maxHistorySize?: number; // Max interaction history
};
}
Default Configuration
const defaultConfig = {
ollama: {
baseUrl: 'http://localhost:11434',
timeout: 60000,
maxRetries: 3,
defaultModel: 'qwen2.5-coder:7b',
},
logging: { level: 'info' },
workspace: { root: process.cwd() },
visualization: {
enabled: false,
webPort: 3001,
maxHistorySize: 1000,
},
};
Usage Examples
Basic Agent Interaction
// Create library instance
const library = await createOllamaAgentLibrary();
// Handle different types of requests
const examples = [
'Explain what TypeScript is',
'Fix the compilation errors in main.ts',
'Search for all test files',
'Generate a new React component',
];
for (const prompt of examples) {
const response = await library.handleRequest(prompt, {
location: AgentLocation.Panel,
});
console.log(`Request: ${prompt}`);
console.log(`Response: ${response.content[0]?.content}`);
console.log(`Model: ${response.metadata?.model}`);
console.log('---');
}
Streaming Responses
const response = await library.streamRequest(
'Write a detailed explanation of async/await',
{ location: AgentLocation.Panel },
(chunk) => {
process.stdout.write(chunk); // Real-time output
}
);
Tool-Based Operations
// Get available tools
const tools = library.getToolRegistry().getAllTools();
console.log('Available tools:', tools.map(t => t.name));
// Execute tool directly
const toolRegistry = library.getToolRegistry();
const result = await toolRegistry.executeTool('readFile', {
path: './package.json'
});
console.log('File content:', result.content);
Model Management
// List available models
const models = await library.getAvailableModels();
models.forEach(model => {
console.log(`${model.name}: ${model.capabilities.join(', ')}`);
});
// Get specific model
const qwenModel = await library.getModelByName('qwen2.5-coder:7b');
if (qwenModel) {
console.log(`Model size: ${qwenModel.size}`);
}
Visualization & Monitoring
Web Dashboard
Enable the web visualization dashboard:
const library = await createOllamaAgentLibrary({
visualization: {
enabled: true,
webPort: 3001,
},
});
Visit http://localhost:3001 to access:
- Real-time interaction monitoring
- Performance metrics and analytics
- Agent usage statistics
- Error pattern analysis
- Conversation flow diagrams
Debug Console
const debugConsole = library.getDebugConsole();
// Print metrics for all agents
debugConsole?.printMetrics();
// Print metrics for specific agent
debugConsole?.printMetrics('ollama-agent');
// Show interaction timeline
debugConsole?.printTimeline();
// Display performance insights
debugConsole?.printInsights();
// Start real-time monitoring
debugConsole?.startRealTimeMonitoring();
Interaction Tracking
const tracker = library.getInteractionTracker();
if (tracker) {
// Get interaction history
const history = tracker.getInteractionHistory();
// Get performance metrics
const metrics = tracker.getMetrics();
// Export data
const data = tracker.exportVisualizationData();
// Clear history
tracker.clearHistory();
}
Custom Agents
Creating a Custom Agent
import { BaseAgent, AgentRequest, AgentResponse, AgentCapability, Intent } from 'ollama-agent';
class CustomAgent extends BaseAgent {
public readonly id = 'custom-agent';
public readonly name = 'Custom Agent';
public readonly description = 'A specialized agent for custom tasks';
public readonly capabilities = [
AgentCapability.CodeEditing,
AgentCapability.FileOperations,
];
async handle(request: AgentRequest): Promise<AgentResponse> {
// Custom logic here
return {
requestId: request.id,
content: [{
type: 'text',
content: `Custom response for: ${request.prompt}`,
}],
};
}
protected getSupportedIntents(): Intent[] {
return [Intent.Generate, Intent.Edit];
}
}
// Register the custom agent
const customAgent = new CustomAgent(logService);
library.getServices().agents.registerAgent(customAgent);
Custom Tools
import { ITool, ToolParameter, ToolContext, ToolResult } from 'ollama-agent';
class CustomTool implements ITool {
readonly name = 'customTool';
readonly description = 'Performs custom operations';
readonly parameters: ToolParameter[] = [
{
name: 'input',
type: 'string',
description: 'Input parameter',
required: true,
},
];
async invoke(
parameters: Record<string, unknown>,
context: ToolContext
): Promise<ToolResult> {
try {
const input = parameters.input as string;
// Custom tool logic
return {
success: true,
content: `Processed: ${input}`,
};
} catch (error) {
return {
success: false,
content: 'Tool execution failed',
error: String(error),
};
}
}
}
// Register the custom tool
const customTool = new CustomTool();
library.getToolRegistry().registerTool(customTool);
Advanced Usage Patterns
Error Handling and Recovery
try {
const response = await library.handleRequest(prompt, context);
// Handle successful response
} catch (error) {
if (error.message.includes('model not found')) {
// Handle model availability issues
const models = await library.getAvailableModels();
console.log('Available models:', models.map(m => m.name));
} else if (error.message.includes('timeout')) {
// Handle timeout issues
console.log('Request timed out, try with simpler prompt');
} else {
// Handle other errors
console.error('Request failed:', error);
}
}
Batch Processing
const prompts = [
'Analyze package.json dependencies',
'Review code quality in src/',
'Check for security vulnerabilities',
];
const results = await Promise.all(
prompts.map(async (prompt) => {
try {
return await library.handleRequest(prompt, {
location: AgentLocation.Panel,
});
} catch (error) {
return { error: error.message, prompt };
}
})
);
// Process results
results.forEach((result, index) => {
if ('error' in result) {
console.log(`❌ ${prompts[index]}: ${result.error}`);
} else {
console.log(`✅ ${prompts[index]}: Success`);
}
});
Custom Configuration Profiles
// Development profile
const devConfig = {
ollama: { defaultModel: 'codellama:7b' },
logging: { level: 'debug' as const },
visualization: { enabled: true, webPort: 3001 },
};
// Production profile
const prodConfig = {
ollama: { defaultModel: 'qwen2.5-coder:7b', timeout: 30000 },
logging: { level: 'warn' as const },
visualization: { enabled: false },
};
const library = await createOllamaAgentLibrary(
process.env.NODE_ENV === 'production' ? prodConfig : devConfig
);
Troubleshooting
Common Issues
Ollama Connection Failed
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Start Ollama if not running
ollama serve
# Verify models are available
ollama list
Model Not Found
# Pull the required model
ollama pull qwen2.5-coder:7b
# Or try alternative models
ollama pull llama2:7b
ollama pull codellama:7b
Memory Issues
- Use smaller models for limited-memory systems
- Adjust num_ctx in model options to reduce the context window
- Enable streaming for large responses
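Ollama exposes the context window through the standard options.num_ctx field on chat requests; the sketch below only builds the /api/chat request body (the endpoint and option are part of Ollama's API, the helper itself is illustrative):

```typescript
// Build an Ollama /api/chat request body with a reduced context window.
// options.num_ctx is a standard Ollama model option; the helper is a sketch.
function buildChatRequest(model: string, prompt: string, numCtx: number) {
  return {
    model,
    messages: [{ role: "user" as const, content: prompt }],
    options: { num_ctx: numCtx }, // smaller context window => lower memory use
    stream: false,
  };
}

const body = buildChatRequest("qwen2.5-coder:7b", "Summarize this file", 2048);
// Send with: fetch("http://localhost:11434/api/chat",
//   { method: "POST", body: JSON.stringify(body) })
```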
Timeout Errors
- Increase timeout in configuration
- Use simpler, more focused prompts
- Check network connectivity to Ollama
Port Conflicts (Visualization)
const library = await createOllamaAgentLibrary({
visualization: {
enabled: true,
webPort: 3005, // Use different port
},
});
Debug Mode
Enable detailed logging for troubleshooting:
const library = await createOllamaAgentLibrary({
logging: { level: 'trace' }, // Most verbose
visualization: { enabled: true },
});
// Use debug console for real-time monitoring
const debugConsole = library.getDebugConsole();
debugConsole?.startRealTimeMonitoring();
Development
Setup
# Clone the repository
git clone <repository-url>
cd ollama-agent
# Install dependencies
pnpm install
# Ensure Ollama is running and has models
ollama serve
ollama pull qwen2.5-coder:7b
# Start development
pnpm dev
# Run examples
pnpm run example:basic
pnpm run example:project-analyzer
Testing
# Run all tests
pnpm test
# Run specific test files
pnpm test tests/agents.test.ts
pnpm test tests/tools.test.ts
# Run tests with coverage
pnpm test --coverage
# Run tests in watch mode
pnpm test --watch
Building
# Build for production
pnpm build
# Type checking
pnpm type-check
# Linting
pnpm lint
pnpm lint:fix
# Formatting
pnpm format
# Run examples
pnpm run example:basic # Basic usage demonstration
pnpm run example:project-analyzer # Advanced custom agent example
# Visualization tools
pnpm visualize                    # Run visualization tools
API Reference
Core Library
createOllamaAgentLibrary(config?)
Factory function to create and initialize the library.
OllamaAgentLibrary
Main library class with methods:
- initialize(): Initialize the library
- handleRequest(prompt, context?): Process user requests
- streamRequest(prompt, context?, onChunk?): Stream responses
- getAvailableModels(): List available models
- getModelByName(name): Get a specific model
- getServices(): Access internal services
- getToolRegistry(): Access the tool system
- getVisualizationManager(): Access visualization
- dispose(): Clean up resources
Agent System
BaseAgent
Abstract base class for all agents.
AgentService
Service for managing agent registration and selection.
Tool System
ToolRegistry
Central registry for tools with registration and execution.
Built-in Tools
- ReadFileTool
- WriteFileTool
- ReplaceStringTool
- SearchFilesTool
- SearchContentTool
Visualization
AgentInteractionTracker
Real-time interaction monitoring and metrics collection.
DebugConsole
Command-line interface for debugging and metrics.
WebVisualizationServer
Web-based dashboard for visualization.
FAQ
General Questions
Q: What models are supported?
A: Any Ollama-compatible model. Popular choices include:
- qwen2.5-coder:7b (default, good balance)
- llama2:7b (lighter, faster)
- codellama:7b (code-focused)
- mistral:7b (efficient)
Q: Can I use this with remote Ollama instances?
A: Yes, configure the baseUrl in the configuration:
const library = await createOllamaAgentLibrary({
ollama: { baseUrl: 'http://remote-server:11434' }
});
Q: How do I create agents for specific domains?
A: Extend BaseAgent or ConversationalAgent and implement domain-specific logic. See the ProjectAnalyzerAgent example.
Q: Is streaming supported?
A: Yes, use streamRequest() for real-time response streaming.
Development Questions
Q: How do I add new tools?
A: Implement the ITool interface and register with ToolRegistry:
class MyTool implements ITool {
readonly name = 'myTool';
// ... implementation
}
toolRegistry.registerTool(new MyTool());
Q: Can I customize the intent detection?
A: Yes, implement your own IntentDetector or extend RuleBasedIntentDetector.
Q: How do I handle different file types?
A: The WorkspaceService includes language detection. You can extend it or create custom tools for specific file types.
Q: What about error handling?
A: The library includes comprehensive error handling. Wrap calls in try-catch blocks and check tool results for failures.
Performance Questions
Q: How can I optimize response times?
A:
- Use smaller models for simple tasks
- Reduce the context window size (num_ctx)
- Enable streaming for long responses
- Cache frequently used results
Q: Memory usage is high, what can I do?
A:
- Use lighter models (7b vs 13b)
- Limit conversation history
- Adjust visualization history size
- Monitor using the debug console
Contributing
We welcome contributions! Here's how to get started:
Development Setup
# Fork and clone the repository
git clone https://github.com/yourusername/ollama-agent.git
cd ollama-agent
# Install dependencies
pnpm install
# Set up Ollama
ollama serve
ollama pull qwen2.5-coder:7b
# Run tests to ensure everything works
pnpm test
# Start development environment
pnpm dev
Contribution Guidelines
- Fork the repository and create a feature branch
- Follow TypeScript best practices - the project uses strict mode
- Write tests for new functionality
- Update documentation for API changes
- Follow the existing code style - run pnpm lint and pnpm format
- Test your changes thoroughly:
pnpm test
pnpm run example:basic
pnpm run example:project-analyzer
What to Contribute
High Priority:
- New agent implementations for specific domains
- Additional tools for workspace operations
- Performance optimizations
- Better error handling and recovery
- Documentation improvements
Medium Priority:
- Web dashboard enhancements
- Additional model integrations
- More visualization features
- Test coverage improvements
Examples of Good Contributions:
- A DatabaseAgent for SQL operations
- A DockerTool for container operations
- Performance benchmarking tools
- Integration with other AI services
Code Standards
- TypeScript strict mode - all code must type-check
- Interface-based design - prefer interfaces over concrete types
- Comprehensive error handling - always handle potential failures
- Detailed logging - use the LogService for debugging
- Unit tests required - maintain >80% coverage
- Documentation - update README and JSDoc comments
Submitting Changes
- Create a descriptive pull request title
- Include a detailed description of changes
- Reference any related issues
- Ensure all tests pass
- Update relevant documentation
- Add examples if introducing new features
Getting Help
- Check existing issues for similar problems
- Create a new issue with detailed reproduction steps
- Join discussions in pull requests
- Follow the project for updates
License
MIT License - see LICENSE file for details.
Acknowledgments
This library is inspired by the architecture of the VS Code Copilot Chat extension and follows similar design patterns for extensibility and modularity.
