@fsfalmansour/neohub-core v1.0.1
# @neohub/core
Core functionality for NeoHub AI assistant - Ollama integration, model management, and intelligent task routing.
## Features
- `OllamaClient`: Full-featured client for the Ollama API with streaming support
- `ModelSupervisor`: Intelligent model selection based on task type and context
- `ConversationManager`: Manages chat history and context windows
- `CodeAnalyzer`: Analyzes code complexity and generates context for AI models
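To give a feel for the `ModelSupervisor` idea, here is a minimal sketch of task-based routing. The task types, complexity levels, and model names below are illustrative assumptions for this example, not the package's actual selection logic:

```typescript
// Illustrative sketch only: the real ModelSupervisor's rules are internal
// to the package. Task types, complexity levels, and model names here are
// assumptions for the example.

type TaskType = 'code_generation' | 'chat' | 'summarization';
type Complexity = 'low' | 'medium' | 'high';

interface TaskContext {
  taskType: TaskType;
  codeComplexity: Complexity;
}

// Route heavy coding work to a large code model; use a small code model
// for simple edits and a general model for everything else.
function selectModelFor(task: TaskContext): string {
  if (task.taskType === 'code_generation') {
    return task.codeComplexity === 'low'
      ? 'codellama:7b'
      : 'deepseek-coder:33b';
  }
  return 'llama3:8b';
}

console.log(selectModelFor({ taskType: 'code_generation', codeComplexity: 'medium' }));
// → deepseek-coder:33b
```

In practice the supervisor also weighs context (see `selectModel` in the Usage section below), but the core idea is a routing decision from task description to model name.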
## Installation

```bash
npm install @neohub/core
```

## Usage
```typescript
import { OllamaClient, ModelSupervisor } from '@neohub/core';

// Initialize Ollama client
const client = new OllamaClient({
  baseUrl: 'http://localhost:11434',
  model: 'deepseek-coder:33b',
});

// Use Model Supervisor for intelligent model selection
const supervisor = new ModelSupervisor(client);
const model = await supervisor.selectModel({
  taskType: 'code_generation',
  codeComplexity: 'medium',
  userPreference: 'auto',
});
```

## Testing
```bash
# Run unit tests (fast, no external dependencies)
npm test

# Run integration tests (requires Ollama with large models)
npm run test:integration
```

**Note:** Integration tests require:

- Ollama running on `http://localhost:11434`
- At least one chat-capable model installed (e.g., `deepseek-coder:33b`, `codellama:34b`)
- Sufficient resources (large models need significant RAM and CPU)
Integration tests are excluded from the default test suite and CI pipeline due to resource requirements and execution time (30+ seconds per test).
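Unit tests stay fast because they never touch a live server. As a hypothetical sketch (the stub shape below is an assumption for illustration, not the package's actual test utilities), a unit test can swap in a recording stub for the client:

```typescript
// Hypothetical stub for unit tests: records requests and returns a canned
// reply instead of calling the Ollama HTTP API. The request shape here is
// an assumption for illustration.

interface GenerateRequest {
  model: string;
  prompt: string;
}

class StubClient {
  calls: GenerateRequest[] = [];

  async generate(req: GenerateRequest): Promise<string> {
    this.calls.push(req);
    return `stubbed reply from ${req.model}`;
  }
}

// A unit test can then assert on behavior without Ollama installed.
async function exampleUnitTest(): Promise<void> {
  const client = new StubClient();
  const reply = await client.generate({
    model: 'deepseek-coder:33b',
    prompt: 'Write a hello-world function.',
  });
  if (!reply.includes('deepseek-coder:33b')) {
    throw new Error('stub should echo the requested model');
  }
  if (client.calls.length !== 1) {
    throw new Error('exactly one request should be recorded');
  }
}

exampleUnitTest();
```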
## Development
```bash
# Build TypeScript
npm run build

# Watch mode for development
npm run dev

# Clean build artifacts
npm run clean
```

## License
MIT
