@tosi-n/hybrie-cli v0.3.1
# HybrIE CLI

Command-line interface for HybrIE (Hybrid Inference Engine for Heterogeneous Hardware to Power Generative AI Models).
## 🚀 Quick Start

### Installation

Install globally via npm:

```bash
npm install -g @tosi-n/hybrie-cli
```

### Usage
```bash
# Show help
hybrie --help

# Generate an image
hybrie img-gen "a beautiful sunset over mountains"

# Start the server
hybrie-server --help
```

## 📋 Features
- Cross-platform: Works on macOS and Linux
- Multiple architectures: Supports x64 and ARM64
- FLUX model support: Generate high-quality images with FLUX models
- Local inference: Run models locally on your hardware
- Server mode: Run as a server for remote access
- Model management: Download and manage models automatically
- Beautiful UI: Rich terminal interface with progress bars and colors
## 🖥️ Supported Platforms

| Platform | Architecture | Status |
|----------|--------------|--------|
| macOS    | ARM64        | ✅     |
| Linux    | x64          | ✅     |
## 📖 Commands

### Image Generation

```bash
# Basic generation
hybrie img-gen "a cat sitting on a rainbow"

# With specific model
hybrie img-gen "a futuristic city" --model flux.1-dev

# With custom settings
hybrie img-gen "a dragon" --steps 50 --size 1024x1024

# Save to specific directory
hybrie img-gen "a sunset" --output ./my-images
```

### Model Management
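Models are cached on disk (by default under `~/.cache/hybrie/models`, per the configuration section below), and the cache is capped by `max_cache_size_gb`. A standard-library sketch for checking how much space the cache currently uses; the path and the 50 GB figure are the defaults from the sample config, not values the CLI exposes programmatically:

```python
import os

def dir_size_gb(path: str) -> float:
    """Total size of all files under `path`, in GB (0.0 if it doesn't exist)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file removed while walking; skip it
    return total / 1024**3

# Default cache location from the sample hybrie-config.toml below
cache = os.path.expanduser("~/.cache/hybrie/models")
print(f"Model cache: {dir_size_gb(cache):.2f} GB used (default limit: 50.0 GB)")
```

Compare the printed figure against your `max_cache_size_gb` before downloading another model.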
```bash
# List available models
hybrie models list

# Download a model
hybrie models download flux.1-dev

# Show model info
hybrie models info flux.1-schnell
```

### Server Mode
```bash
# Start server
hybrie-server start

# Start with custom config
hybrie-server start --config ./my-config.toml

# Check server status
hybrie-server status
```

## ⚙️ Configuration
HybrIE CLI automatically creates configuration files in:
- macOS/Linux: `~/.config/hybrie/`
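If a script needs that same path (for example, to back up or edit the config), tilde expansion is all that's required. A minimal standard-library sketch; the file name matches the section below, and hybrie resolves this path itself at runtime:

```python
import os

# Default config location documented above
config_dir = os.path.expanduser("~/.config/hybrie")
config_file = os.path.join(config_dir, "hybrie-config.toml")

print(config_file)
print("exists:", os.path.isfile(config_file))
```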
### Configuration File (`hybrie-config.toml`)

```toml
[server]
host = "127.0.0.1"
port = 8080
max_concurrent_requests = 4
request_timeout_seconds = 300
enable_reflection = true

[models]
cache_dir = "~/.cache/hybrie/models"
auto_download = ["flux.1-schnell"]
max_cache_size_gb = 50.0
preload_models = []

[inference]
device_preference = "auto" # auto, cpu, metal, cuda
batch_size = 1
max_steps = 50

[api]
enable_cors = true
api_key_required = false
rate_limit_per_minute = 60
```

## 🔧 Advanced Usage
### Environment Variables

```bash
# Set log level
export HYBRIE_LOG_LEVEL=debug

# Set custom cache directory
export HYBRIE_CACHE_DIR=/path/to/cache

# Set device preference
export HYBRIE_DEVICE=metal # or cuda, cpu
```

### Custom Models
```bash
# Use local model directory
hybrie img-gen "a test image" --model-path /path/to/model

# Use HuggingFace model
hybrie img-gen "a test image" --model black-forest-labs/FLUX.1-dev
```

## 🐛 Troubleshooting
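Several of the issues below come down to the binaries not being on your `PATH`. A quick standard-library check for both executables this package installs:

```python
import shutil

# Both binaries are installed by `npm install -g @tosi-n/hybrie-cli`
for binary in ("hybrie", "hybrie-server"):
    path = shutil.which(binary)
    print(f"{binary}: {path or 'NOT FOUND'}")
```

If either prints NOT FOUND, make sure npm's global bin directory is on your `PATH`.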
### Common Issues

**Binary not found after installation:**

```bash
# Re-run the installer
npm install -g @tosi-n/hybrie-cli --force
```

**Permission denied on macOS/Linux:**

```bash
# Make sure the binary is executable
chmod +x $(which hybrie)
```

**Model download fails:**

```bash
# Check internet connection and try again
hybrie models download flux.1-schnell --force
```

### Getting Help
- Documentation: https://github.com/hybrie/hybrie-cli
- Issues: https://github.com/hybrie/hybrie-cli/issues
- Discussions: https://github.com/hybrie/hybrie-cli/discussions
## 🔄 Updates

Update to the latest version:

```bash
npm update -g @tosi-n/hybrie-cli
```

## 📄 License
This project is dual-licensed under MIT OR Apache-2.0 - see the LICENSE file for details.
## 🤝 Contributing
Contributions are welcome! Please see our Contributing Guide for details.
HybrIE CLI - Making AI inference accessible across all platforms 🚀
