tzamuncode
v0.1.15
TzamunCode - AI Coding Assistant powered by local models (npm installer wrapper)
TzamunCode CLI 🚀
AI Coding Assistant powered by local models - Built in Saudi Arabia 🇸🇦
TzamunCode is a privacy-first AI coding assistant that runs entirely on your local infrastructure using Ollama and vLLM. No cloud dependencies, no API costs, complete control.
✨ Features
- 🤖 Agentic AI - Multi-step planning and execution
- 📝 Code Generation - Create files, functions, and entire projects
- ✏️ Multi-file Editing - Edit multiple files in one operation
- 🔧 Tool Calling - Git operations, file search, command execution
- 🎯 Context Aware - Understands your project structure
- 🔒 Privacy First - Everything runs locally
- ⚡ Fast - Powered by vLLM for optimized inference
- 🌍 Multi-model - Use any Ollama model (15+ available)
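Conceptually, the first three features combine into a plan-then-execute loop over a small tool registry. The sketch below is purely illustrative — the planner, tool names, and signatures are hypothetical stand-ins, not TzamunCode's actual internals:

```python
# Illustrative sketch of an agentic plan-then-execute loop.
# The planner and tool names are hypothetical, not TzamunCode's real API.
from typing import Callable

# Tool registry: tool name -> callable taking one string argument
TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: f"<contents of {path}>",
    "write_file": lambda spec: f"wrote {spec}",
    "run_command": lambda cmd: f"ran {cmd}",
}

def plan(task: str) -> list[tuple[str, str]]:
    """Stand-in planner: a real agent would ask the model to break
    the task into (tool, argument) steps."""
    return [
        ("read_file", "app.py"),
        ("write_file", "app.py: add error handling"),
    ]

def execute(task: str) -> list[str]:
    """Run each planned step through the matching registered tool."""
    results = []
    for tool_name, arg in plan(task):
        results.append(TOOLS[tool_name](arg))
    return results

results = execute("Add error handling to all routes")
```

In the real CLI this role is played by the LangChain-based agentic layer described under Architecture below.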
🚀 Quick Start
Installation
```bash
# Clone the repository
git clone https://github.com/tzamun/tzamuncode-cli.git
cd tzamuncode-cli

# Install
pip install -e .

# Or install from PyPI (when published)
pip install tzamuncode
```

Prerequisites
- Python 3.9+
- Ollama running locally (http://localhost:11434)
- At least one Ollama model installed
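A quick way to verify these prerequisites is a short Python check. The `/api/tags` endpoint used below is Ollama's standard REST call for listing installed models; the helper functions themselves are just a convenience sketch, not part of TzamunCode:

```python
# Convenience sketch for checking TzamunCode's prerequisites.
# /api/tags is Ollama's endpoint for listing locally installed models.
import json
import sys
from urllib.request import urlopen
from urllib.error import URLError

def python_ok(version_info=sys.version_info) -> bool:
    """TzamunCode requires Python 3.9 or newer."""
    return version_info[:2] >= (3, 9)

def ollama_models(base_url: str = "http://localhost:11434") -> list:
    """Return installed Ollama model names, or [] if the server is unreachable."""
    try:
        with urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        return []

if __name__ == "__main__":
    print("Python 3.9+:", python_ok())
    models = ollama_models()
    print("Ollama reachable:", bool(models), "| models:", models)
```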
Basic Usage
```bash
# Start interactive chat
tzamuncode chat

# Generate code
tzamuncode generate "Create a Flask REST API with authentication"

# Edit a file
tzamuncode edit app.py "Add error handling to all routes"

# Explain code
tzamuncode explain main.py

# Quick alias
tzc chat
```

📖 Documentation
Commands
chat - Interactive Chat

```bash
tzamuncode chat
tzamuncode chat --model qwen2.5:32b
```

generate - Code Generation

```bash
tzamuncode generate "Create a Python web scraper"
tzamuncode generate "Add unit tests for user.py" --output tests/
```

edit - File Editing

```bash
tzamuncode edit app.py "Refactor to use async/await"
tzamuncode edit . "Add type hints to all functions"
```

explain - Code Explanation

```bash
tzamuncode explain complex_function.py
tzamuncode explain --detailed auth.py
```

review - Code Review

```bash
tzamuncode review pull_request.diff
tzamuncode review --strict src/
```

Configuration
Create ~/.tzamuncode/config.yaml:
```yaml
# Default model
model: qwen2.5:32b

# Ollama settings
ollama:
  base_url: http://localhost:11434
  timeout: 120

# vLLM settings (optional, for faster inference)
vllm:
  enabled: true
  base_url: http://localhost:8000
  model: deepseek-coder-7b

# Preferences
preferences:
  show_diff: true
  auto_apply: false
  max_context: 64000
  temperature: 0.7
```

🏗️ Architecture
```text
┌─────────────────────────────────────┐
│           TzamunCode CLI            │
│          (Typer + Rich UI)          │
└─────────────────────────────────────┘
                  ↓
┌─────────────────────────────────────┐
│     Agentic Layer (LangChain)       │
│     - Multi-step planning           │
│     - Tool calling                  │
│     - Context management            │
└─────────────────────────────────────┘
                  ↓
┌─────────────────────────────────────┐
│            AI Backend               │
│     - Ollama (15+ models)           │
│     - vLLM (fast inference)         │
└─────────────────────────────────────┘
```

🤝 Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
📄 License
MIT License - see LICENSE for details.
🌟 Built by Tzamun Arabia IT Co.
TzamunCode is part of the Tzamun AI ecosystem:
- TzamunAI - AI platform with 15+ models
- TzamunERP - ERPNext + AI integration
- Auxly - AI coding assistant for IDEs
- AccessHub - Privileged Access Management
Visit tzamun.com to learn more.
Made with ❤️ in Saudi Arabia 🇸🇦
