
ollama-multi-cli

v1.1.0

CLI tool for simultaneous chat sessions with multiple Ollama models

🦙 Ollama Multi-Chat CLI

A powerful command-line interface for simultaneous conversations with multiple Ollama AI models. Compare responses, leverage different model strengths, and manage your AI interactions with profiles, templates, and chat persistence.


✨ Features

🚀 Multi-Model Chat

  • Chat with multiple Ollama models simultaneously
  • Side-by-side response comparison
  • Real-time streaming responses
  • Optimized performance with reduced screen flickering
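Under the hood, Ollama's /api/chat endpoint delivers streaming responses as newline-delimited JSON (NDJSON): one object per chunk, each carrying a fragment of the reply in `message.content`, with `done: true` on the final object. Here is a minimal sketch of how one model's chunks can be folded into a full reply; the `accumulateChunks` helper is illustrative, not the project's actual code:

```typescript
interface ChatChunk {
  model: string;
  message?: { role: string; content: string };
  done: boolean;
}

// Fold the streamed NDJSON fragments for one model into the full reply.
// A multi-model UI runs a reducer like this once per model, so each pane
// can render partial text as chunks arrive.
export function accumulateChunks(ndjson: string): string {
  let reply = "";
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue;               // skip blank lines between chunks
    const chunk = JSON.parse(line) as ChatChunk;
    if (chunk.message) reply += chunk.message.content;
    if (chunk.done) break;                    // final bookkeeping object
  }
  return reply;
}
```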

👤 Profiles System

  • Pre-built personas (Senior Architect, Developer, Code Reviewer, Teacher)
  • Custom profile creation with system prompts
  • Automatic context injection into conversations
  • Profile management (create, view, delete)

📋 Templates

  • Pre-configured model collections (General, Coding, Creative, Analysis)
  • Custom template creation
  • Intelligent model availability detection
  • Quick-start with curated model sets

⬇️ Model Management

  • Install new Ollama models directly from CLI
  • Progress tracking for downloads
  • Popular model suggestions
  • Error handling and retry logic

💾 Chat Persistence

  • Export conversations with full metadata
  • Import previous chat sessions
  • Preserve profiles and templates in exports
  • Organized file storage system

🔧 Prerequisites

  • Node.js 22+ (required for ESM support)
  • Ollama installed and running locally
  • At least one Ollama model downloaded

Installing Ollama

# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows
# Download from https://ollama.ai/download

Installing Models

# Popular lightweight models
ollama pull llama3.2:3b
ollama pull phi3:mini
ollama pull qwen2.5:7b

# Coding-focused models
ollama pull deepseek-coder:6.7b
ollama pull codellama:7b

📦 Installation

Option 1: Install from npm (Recommended)

# Install globally
npm install -g ollama-multi-chat

# Run the CLI
olcli

Option 2: Clone and Build

git clone <repository-url>
cd ollama-cli
npm install
npm run build
npm link  # Optional: for global installation (enables 'olcli' command)

Option 3: Direct Usage

npm install
npm run dev  # Development mode with hot reload

🚀 Quick Start

# Option 1: Install from npm (Recommended)
npm install -g ollama-multi-chat
olcli

# Option 2: Local development
git clone <repository-url>
cd ollama-cli
npm install
npm run build
npm link  # Enables 'olcli' command globally
olcli

🚀 Usage

Starting the Application

# If globally installed (recommended)
olcli

# Or run directly from project directory
npm start

# Development mode with hot reload
npm run dev

# Get help
olcli --help

# Show version
olcli --version

# List available models
olcli --list-models

Main Menu Navigation

The application starts with a main menu offering these options:

🦙 Ollama Multi-Chat CLI
Chat with multiple AI models simultaneously

Current Settings:
Profile: Senior Solutions Architect
Template: Code Assistant

Main Menu:
→ 🚀 Start Chat Session
  👤 Manage Profiles  
  📋 Manage Templates
  ⬇️ Install Models
  💾 Export/Import Chats

📚 Feature Guide

🚀 Starting a Chat Session

  1. Select "Start Chat Session" from main menu
  2. Choose models using arrow keys and spacebar
  3. Press Enter to start chatting
  4. Type messages and see responses from all selected models
  5. Use ESC to return to model selection

Chat Controls:

  • ESC - Back to model selection
  • Enter - Send message
  • Real-time streaming responses

👤 Managing Profiles

Profiles add context and personality to your AI interactions:

Built-in Profiles:

  • Senior Solutions Architect - System design expertise
  • Full-Stack Developer - Practical coding solutions
  • Code Reviewer - Focus on quality and best practices
  • Technical Teacher - Clear explanations and examples

Creating Custom Profiles:

  1. Select "Manage Profiles"
  2. Choose "+ Create New Profile"
  3. Enter name, description, and system prompt
  4. Profile automatically applies to new conversations
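The "automatic context injection" above amounts to prepending the profile's system prompt to the conversation sent to each model. The `Profile` shape and `withProfile` name below are assumptions for illustration, but the messages array with `system`/`user`/`assistant` roles is the format Ollama's /api/chat endpoint accepts:

```typescript
interface Profile {
  name: string;
  systemPrompt: string;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Prepend the profile's system prompt so every model in the session
// receives the same persona context on each request.
export function withProfile(
  profile: Profile | null,
  history: ChatMessage[]
): ChatMessage[] {
  if (!profile) return history;               // no profile selected: send history as-is
  return [{ role: "system", content: profile.systemPrompt }, ...history];
}
```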

📋 Using Templates

Templates provide curated model collections for specific use cases:

Built-in Templates:

  • General Purpose - llama3.2:3b, phi3:mini, qwen2.5:7b
  • Code Assistant - deepseek-coder:6.7b, codellama:7b, starcoder2:3b
  • Creative Writing - llama3.2:3b, mistral:7b, gemma2:9b
  • Data Analysis - qwen2.5:7b, llama3.1:8b, phi3:medium

Creating Custom Templates:

  1. Select "Manage Templates"
  2. Choose "+ Create New Template"
  3. Enter name and description
  4. Select models for the template
  5. Template appears in future selections
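The "intelligent model availability detection" step can be sketched as a set intersection: compare the models a template asks for against the models the local Ollama daemon reports as installed, and split the template into usable and missing sets. The function name is hypothetical:

```typescript
// Split a template's model list into models that are installed locally
// and models that still need an `ollama pull`.
export function checkTemplate(
  templateModels: string[],
  installedModels: string[]
): { available: string[]; missing: string[] } {
  const installed = new Set(installedModels);
  return {
    available: templateModels.filter((m) => installed.has(m)),
    missing: templateModels.filter((m) => !installed.has(m)),
  };
}
```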

⬇️ Installing Models

Add new models without leaving the application:

Model Name: llama3.2:3b
█████████████░░░░░░░ 65%
Downloading...

Popular Models:
• llama3.2:3b - Fast, lightweight general-purpose model
• phi3:mini - Microsoft's compact model  
• qwen2.5:7b - Alibaba's advanced model
• deepseek-coder:6.7b - Specialized for coding
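Ollama's /api/pull endpoint streams progress objects containing `completed` and `total` byte counts; rendering a bar like the one shown above is a small amount of arithmetic. A sketch (the helper name is an assumption):

```typescript
// Turn { completed, total } byte counts from /api/pull into a
// fixed-width progress bar with a percentage suffix.
export function progressBar(completed: number, total: number, width = 20): string {
  const pct = total > 0 ? Math.floor((completed / total) * 100) : 0;
  const filled = Math.round((pct / 100) * width);
  return "█".repeat(filled) + "░".repeat(width - filled) + ` ${pct}%`;
}
```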

💾 Chat Export/Import

Preserve and share your conversations:

Exporting:

  1. During or after a chat session
  2. Select "Export/Import Chats", then "Export Current Chat"
  3. Enter a descriptive name
  4. File saved to ~/.ollama-multi-chat/exports/

Importing:

  1. Select "Import Chat" from export menu
  2. Choose from available exported chats
  3. Conversation, profile, and template restored
  4. Continue where you left off
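For imports to restore the conversation, profile, and template together, the export file has to bundle all three alongside the messages. The field names below are assumptions sketched from what the import step restores, not the project's actual schema:

```typescript
// Hypothetical shape of an exported chat file under
// ~/.ollama-multi-chat/exports/. Everything needed to resume a session
// travels in one JSON document.
interface ChatExport {
  name: string;
  exportedAt: string;                                   // ISO 8601 timestamp
  profile: { name: string; systemPrompt: string } | null;
  template: { name: string; models: string[] } | null;
  messages: { role: string; model?: string; content: string }[];
}

export function buildExport(
  name: string,
  messages: ChatExport["messages"]
): ChatExport {
  return {
    name,
    exportedAt: new Date().toISOString(),
    profile: null,                                      // filled in from the active session
    template: null,
    messages,
  };
}
```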

⚙️ Configuration

Storage Locations

All data is stored in ~/.ollama-multi-chat/:

~/.ollama-multi-chat/
├── profiles.json          # User profiles
├── templates.json         # Model templates  
└── exports/               # Exported chats
    ├── chat-session-1.json
    └── chat-session-2.json
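Reading these files reduces to resolving a path under the storage directory and falling back to a default when nothing has been saved yet. A minimal sketch, assuming a generic `loadJson` helper (the real storage.ts may be organized differently):

```typescript
import { readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Root of the documented storage layout.
const STORAGE_DIR = join(homedir(), ".ollama-multi-chat");

// Read a JSON data file (e.g. "profiles.json"), returning the fallback
// on first run when the file does not exist yet.
export function loadJson<T>(file: string, fallback: T): T {
  const path = join(STORAGE_DIR, file);
  if (!existsSync(path)) return fallback;
  return JSON.parse(readFileSync(path, "utf8")) as T;
}
```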

Model Timeouts

Automatic timeouts based on model size:

  • Small models (3B params): 2 minutes
  • Medium models (7-13B): 3 minutes
  • Large models (24B+): 5 minutes
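Because Ollama tags usually encode the parameter count (the "3b" in llama3.2:3b), the tiers above can be derived from the tag itself. A sketch of that mapping; the parsing heuristic and default tier are assumptions:

```typescript
// Map a model tag to the documented timeout tier by extracting the
// parameter count (in billions) from suffixes like "7b" or "6.7b".
export function timeoutMs(modelTag: string): number {
  const match = modelTag.match(/(\d+(?:\.\d+)?)b/i); // e.g. "7" from "qwen2.5:7b"
  const params = match ? parseFloat(match[1]) : 7;   // unparseable tags: assume mid tier
  if (params >= 24) return 5 * 60_000;               // large models: 5 minutes
  if (params >= 7) return 3 * 60_000;                // medium models: 3 minutes
  return 2 * 60_000;                                 // small models: 2 minutes
}
```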

🔧 Development

Scripts

npm run build      # Compile TypeScript
npm run dev        # Development with hot reload
npm run typecheck  # Type checking only
npm run lint       # ESLint checking
npm test          # Run tests

Project Structure

src/
├── components/           # React components
│   ├── ChatInterface.tsx
│   ├── ModelSelector.tsx
│   ├── ProfileManager.tsx
│   ├── TemplateManager.tsx
│   └── ...
├── services/            # Business logic
│   ├── ollama.ts       # Ollama API client
│   └── storage.ts      # Data persistence
├── types.ts            # TypeScript definitions
└── App.tsx            # Main application

🐛 Troubleshooting

Common Issues

"Failed to connect to Ollama"

# Check if Ollama is running
ollama list

# Start Ollama service
ollama serve

"No models available"

# Install a model first
ollama pull llama3.2:3b

# Verify installation
ollama list
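Programmatically, the CLI can detect this case before starting a chat: the Ollama daemon lists installed models at GET /api/tags, returning a body like `{"models":[{"name":"llama3.2:3b", ...}]}`. A sketch of parsing that response (the helper name is an assumption):

```typescript
interface TagsResponse {
  models: { name: string }[];
}

// Extract installed model names from a GET /api/tags response body.
// An empty array means the "No models available" path should be shown.
export function installedModelNames(body: string): string[] {
  const parsed = JSON.parse(body) as TagsResponse;
  return parsed.models.map((m) => m.name);
}
```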

"Model timeout errors"

  • Large models need time to load initially
  • Subsequent requests are faster
  • Consider using smaller models for testing

"Screen flickering during responses"

  • Recent releases reduce update frequency and debounce re-renders
  • Make sure you are running the latest version
  • Open an issue on GitHub if flickering persists

Performance Tips

  1. Start with smaller models (3B parameters) for faster responses
  2. Use templates to quickly switch between model sets
  3. Profile system reduces repetitive context setting
  4. Export important chats to preserve valuable conversations

📦 Publishing to npm

This project uses semantic versioning and automated publishing:

Version Management

# Patch version (1.0.0 -> 1.0.1)
npm version patch

# Minor version (1.0.0 -> 1.1.0)
npm version minor

# Major version (1.0.0 -> 2.0.0)
npm version major

Publishing Process

# 1. Ensure you're logged in to npm
npm login

# 2. Update version (automatically builds and commits)
npm version patch

# 3. Publish to npm
npm publish

# Or publish with tag
npm publish --tag beta

Pre-publish Checks

The following happens automatically before publishing:

  • TypeScript type checking (npm run typecheck)
  • Build process (npm run build)
  • Git commit and tag creation

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Ollama team for the excellent local AI model runtime
  • Ink for the beautiful terminal UI framework
  • React for the component architecture
  • Contributors and testers

Built with ❤️ for the AI development community

For issues, feature requests, or questions, please open an issue on GitHub.