Q-Ollama
A Node.js package for interacting with Ollama and Baichuan AI models with flexible API and CLI support.
Features
- 🤖 Support for Ollama and Baichuan AI models
- 🔄 Dynamic provider switching
- 💬 Interactive chat and single message support
- 🔧 Full TypeScript support
- 🚀 Command-line tool support
- 🐛 Debug mode with detailed logging
- 📚 Comprehensive test cases
Installation
```bash
npm install q-ollama
```
Quick Start
1. Using Ollama
Make sure the Ollama service is running (default: http://localhost:11434).
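If you want to verify the server is reachable first, Ollama's /api/tags endpoint lists the locally installed models; a minimal check (assuming Node 18+ for the built-in fetch) looks like this:
```javascript
// Quick reachability check against the local Ollama server.
// GET /api/tags returns the models installed locally.
async function checkOllama(baseUrl = 'http://localhost:11434') {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    const data = await res.json();
    console.log('Ollama is up. Models:', data.models.map((m) => m.name));
  } catch (err) {
    console.error('Ollama is not reachable at', baseUrl, '-', err.message);
  }
}

checkOllama();
```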
```javascript
const { QOllama, ProviderType } = require('q-ollama');

// Create instance
const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  ollamaBaseUrl: 'http://localhost:11434',
  defaultModel: 'qwen3:8b',
  debug: true
});

// Quick chat
async function chat() {
  const response = await qollama.quickChat('Hello, please introduce yourself');
  console.log('AI Response:', response.content);
}

chat();
```
2. Using Baichuan Model
Set the BAICHUAN_API_KEY environment variable or provide an API key directly:
```javascript
const { QOllama, ProviderType } = require('q-ollama');

const qollama = new QOllama({
  provider: ProviderType.BAICHUAN,
  baichuanApiKey: 'your-api-key-here', // or use the environment variable
  defaultModel: 'Baichuan2-Turbo'
});

async function chat() {
  const response = await qollama.quickChat('Hello');
  console.log('Baichuan Response:', response.content);
}

chat();
```
3. Dynamic Provider Switching
```javascript
// Start with Ollama
const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  defaultModel: 'qwen3:8b'
});

console.log('Current provider:', qollama.getCurrentProvider()); // ollama

// Switch to Baichuan
qollama.switchProvider({
  provider: ProviderType.BAICHUAN,
  baichuanApiKey: process.env.BAICHUAN_API_KEY
});

console.log('Switched provider:', qollama.getCurrentProvider()); // baichuan
```
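Calls made after switchProvider go through the new provider. A quick way to confirm the switch, assuming a valid BAICHUAN_API_KEY is set:
```javascript
// Confirm the active provider by sending a test message through it
async function confirmSwitch() {
  const response = await qollama.quickChat('ping');
  console.log(`Response via ${qollama.getCurrentProvider()}:`, response.content);
}

confirmSwitch();
```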
API Reference
QOllama Class
Constructor
```typescript
new QOllama(config: QOllamaConfig)
```
Configuration options:
```typescript
interface QOllamaConfig {
  provider: ProviderType;    // Model provider
  ollamaBaseUrl?: string;    // Ollama service URL
  baichuanApiKey?: string;   // Baichuan API key
  defaultModel?: string;     // Default model
  debug?: boolean;           // Debug mode
}
```
Methods
- `chat(messages: ChatMessage[], options?: ChatOptions): Promise<ChatResponse>` - Send chat messages
- `quickChat(prompt: string, options?: ChatOptions): Promise<ChatResponse>` - Send a quick single message
- `switchProvider(newConfig: QOllamaConfig): void` - Switch the model provider
- `getCurrentProvider(): string` - Get the current provider
- `supportsStreaming(): boolean` - Check if streaming is supported
- `listModels(): Promise<string[]>` - List available models
- `setDebug(debug: boolean): void` - Set debug mode
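As a sketch of the multi-message chat() method, the snippet below assumes the common role/content shape for ChatMessage; check the package's TypeScript definitions for the exact fields:
```javascript
const { QOllama, ProviderType } = require('q-ollama');

const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  defaultModel: 'qwen3:8b'
});

async function main() {
  // Documented helpers: list models, check streaming support
  console.log('Models:', await qollama.listModels());
  console.log('Streaming supported:', qollama.supportsStreaming());

  // Multi-turn chat; the role/content message shape is an assumption
  const response = await qollama.chat([
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'What is Ollama, in one sentence?' }
  ]);
  console.log('Assistant:', response.content);
}

main();
```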
Helper Functions
```javascript
const { createQOllama, createOllamaProvider, createBaichuanProvider } = require('q-ollama');

// Quick instance creation
const qollama1 = createQOllama(config);
const qollama2 = createOllamaProvider('http://localhost:11434', true);
const qollama3 = createBaichuanProvider('your-api-key', true);
```
Command Line Tool
After installation, use the q-ollama command:
Interactive Chat
```bash
# Using Ollama
q-ollama chat --provider ollama --model qwen3:8b

# Using Baichuan
q-ollama chat --provider baichuan --model Baichuan2-Turbo --key YOUR_API_KEY
```
Single Message
```bash
q-ollama message "Hello world" --provider ollama --model qwen3:8b
```
List Available Models
```bash
q-ollama list-models --provider ollama
```
Full Command Help
```bash
q-ollama --help
```
Debug Mode
Enable debug mode to see detailed request and response information:
```javascript
const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  debug: true // Enable debug
});

// Or enable at runtime
qollama.setDebug(true);
```
Debug output includes:
- Method call parameters
- API request details
- Response data
- Error information
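Since quickChat returns a promise, failures can be caught alongside the debug log output; a minimal sketch using the documented API:
```javascript
// Catch failures; in debug mode the underlying request/response
// details and error information are also logged.
async function safeChat(prompt) {
  try {
    const response = await qollama.quickChat(prompt);
    return response.content;
  } catch (err) {
    console.error('Request failed:', err.message);
    return null;
  }
}
```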
Environment Variables
- `BAICHUAN_API_KEY` - Baichuan model API key
Development
Build Project
```bash
npm run build
```
Run Tests
```bash
npm test
```
Development Mode
```bash
npm run dev
```
Examples
Check the examples/ directory for complete examples:
```bash
node examples/basic-usage.js
```
License
MIT
Contributing
Issues and Pull Requests are welcome!
Support
If you encounter issues:
- Check the debug mode output
- Ensure the related services (such as the Ollama server) are running properly
- Review the test cases for correct usage
