fluxorigin-tui-pkg
v0.1.7
Terminal-based Chinese novel translator (FluxOrigin-TUI)
FluxOrigin-TUI
📚 A beautiful Terminal User Interface (TUI) application for translating Chinese novels into Vietnamese, inspired by OpenCode/Claude Code
╔═══════════════════════════════════════════════════════════════════════════════╗
║ FluxOrigin-TUI v0.1.0 gemini-2.0 ║
╠═══════════════════════════════════════════════════════════════════════════════╣
║ ║
║ ╭─ Quick Actions ───────────────────────────────────────────────────────────╮║
║ │ │║
║ │ [t] Translate Start translating a new book │║
║ │ [o] Open File Browse and select a book file │║
║ │ [r] Resume Continue previous translation │║
║ │ [s] Settings Configure providers and options │║
║ │ [?] Help Show keyboard shortcuts │║
║ │ │║
║ ╰────────────────────────────────────────────────────────────────────────────╯║
║ ║
║ ╭─ Recent Sessions ──────────────────────────────────────────────────────────╮║
║ │ │║
║ │ 📖 倚天屠龙记.txt ████████░░░░ 65% In Progress 2 hours ago │║
║ │ 📖 射雕英雄传.txt ████████████ 100% Done 1 day ago │║
║ │ 📖 天龙八部.txt ████░░░░░░░░ 32% Paused 3 days ago │║
║ │ │║
║ ╰────────────────────────────────────────────────────────────────────────────╯║
║ ║
╠═══════════════════════════════════════════════════════════════════════════════╣
║ [t]ranslate [o]pen [r]esume [s]ettings [?]help [q]uit ║
╚═══════════════════════════════════════════════════════════════════════════════╝

✨ Features
| Feature | Description | Icon |
|---------|-------------|------|
| Smart Chunking | Intelligent text splitting at sentence boundaries for optimal translation | 🧩 |
| Genre Detection | Auto-detect novel genre (武侠/Kiếm Hiệp, 言情/Ngôn Tình, 玄幻/Tiên Hiệp, etc.) | 🎭 |
| Auto Glossary | Extract and maintain consistent translations for names, places, and terms | 📖 |
| Save & Resume | Pause anytime and continue later from exact position | 💾 |
| Real-time Preview | Watch translation progress with side-by-side source/target view | 👁️ |
| Multi-Provider | Support FluxProxyAPI, Ollama, LM Studio, and OpenAI-compatible APIs | 🔌 |
| Streaming Output | See translation appear character-by-character in real-time | ⚡ |
| Session Management | Track all translations with progress, history, and statistics | 📊 |
| Beautiful TUI | Modern terminal interface using Bubbletea + Lipgloss | 🎨 |
| Cross-Platform | Works on Windows, macOS, and Linux | 🖥️ |
🚀 Installation
Prerequisites
- Go 1.22+ (for building from source)
- AI Provider: FluxProxyAPI, Ollama, or OpenAI-compatible API
Install from Source
# Clone the repository
git clone https://github.com/d-init-d/FluxOrigin-TUI.git
cd FluxOrigin-TUI
# Build
go build -o fluxtui ./cmd/fluxtui
# Install to PATH (optional)
# Linux/macOS:
sudo mv fluxtui /usr/local/bin/
# Windows (PowerShell as Admin):
Move-Item fluxtui.exe C:\Windows\System32\

Download Pre-built Binaries
Coming Soon - Pre-built binaries will be available in GitHub Releases
# Linux/macOS
curl -L https://github.com/d-init-d/FluxOrigin-TUI/releases/latest/download/fluxtui-$(uname -s)-$(uname -m) -o fluxtui
chmod +x fluxtui
# Windows (PowerShell)
Invoke-WebRequest -Uri "https://github.com/d-init-d/FluxOrigin-TUI/releases/latest/download/fluxtui-Windows-x86_64.exe" -OutFile "fluxtui.exe"

🎯 Quick Start
1. Configure Your AI Provider
# Copy example config
cp config.example.yaml ~/.fluxtui/config.yaml
# Edit with your favorite editor
nano ~/.fluxtui/config.yaml

2. Run FluxOrigin-TUI
./fluxtui

3. First Translation Walkthrough
┌───────────────────────────────────────────────────────────────────────────────┐
│ Step 1: Press [o] to open file picker │
│ Navigate to your Chinese novel (.txt or .epub) │
│ │
│ Step 2: Select file and press [Enter] │
│ The app will analyze the file and detect genre automatically │
│ │
│ Step 3: Watch the magic happen! │
│ • Genre detection (武侠, 言情, 玄幻...) │
│ • Smart chunking (~2000 chars per chunk) │
│ • Real-time translation with streaming output │
│ • Progress automatically saved every chunk │
│ │
│ Step 4: Press [Space] to pause anytime │
│ Press [r] to resume later from Home screen │
│ │
│ Step 5: Find your translation │
│ Output saved as: yourbook_vi.txt │
└───────────────────────────────────────────────────────────────────────────────┘

⚙️ Configuration
config.yaml Example
# ═══════════════════════════════════════════════════════════════════════════════
# FluxOrigin-TUI Configuration
# ═══════════════════════════════════════════════════════════════════════════════
# ───────────────────────────────────────────────────────────────────────────────
# AI Provider Settings
# ───────────────────────────────────────────────────────────────────────────────
providers:
  # OpenAI / OpenAI-Compatible (FluxProxyAPI, etc.)
  openai:
    enabled: true
    api_key: "your-api-key-here"           # Your API key
    model: "gemini-2.0-flash"              # Model name
    base_url: "http://localhost:8317/v1"   # Custom endpoint (optional)

  # Google AI (Gemini) - Direct API
  google:
    enabled: false
    api_key: "your-google-api-key"
    model: "gemini-2.0-flash"

  # Anthropic Claude
  anthropic:
    enabled: false
    api_key: "sk-ant-your-anthropic-key"
    model: "claude-3-5-sonnet-20241022"

  # Local LLM (Ollama, LM Studio)
  custom:
    enabled: false
    base_url: "http://localhost:11434/v1"  # Ollama default
    api_key: ""                            # Usually not required for local
    model: "qwen2.5:14b"                   # Your local model

# Default provider to use (must match one of the keys above)
default_provider: "openai"

# ───────────────────────────────────────────────────────────────────────────────
# Translation Settings
# ───────────────────────────────────────────────────────────────────────────────
translation:
  source_lang: "zh"            # Source language: zh (Chinese)
  target_lang: "vi"            # Target language: vi (Vietnamese)
  chunk_size: 2000             # Characters per chunk (~2000 recommended)
  lookback_size: 300           # Context from previous chunk
  max_retries: 3               # Retry failed chunks
  retry_delay: 2               # Seconds between retries
  preserve_formatting: true    # Keep original line breaks
  temperature: 0.3             # AI creativity (0.0-1.0)
  max_tokens: 4096             # Max tokens per request

# ───────────────────────────────────────────────────────────────────────────────
# Session & Progress Settings
# ───────────────────────────────────────────────────────────────────────────────
session:
  auto_save_interval: 60               # Auto-save every N seconds
  storage_dir: "~/.fluxtui/sessions"   # Session storage path
  max_sessions: 50                     # Keep last N sessions

# ───────────────────────────────────────────────────────────────────────────────
# UI Settings
# ───────────────────────────────────────────────────────────────────────────────
ui:
  theme: "dark"                # Theme: "dark", "light", "auto"
  show_line_numbers: true      # Show line numbers in preview
  syntax_highlighting: true    # Highlight code blocks
  animation_speed: "normal"    # "slow", "normal", "fast", "none"
  show_preview: true           # Show translation preview
  confirm_exit: true           # Confirm before quitting

# ───────────────────────────────────────────────────────────────────────────────
# Advanced Settings
# ───────────────────────────────────────────────────────────────────────────────
advanced:
  timeout: 120             # Request timeout (seconds)
  max_retries: 3           # Max retry attempts
  debug: false             # Enable debug logging
  data_dir: "~/.fluxtui"   # Data directory path

# ───────────────────────────────────────────────────────────────────────────────
# Logging Settings
# ───────────────────────────────────────────────────────────────────────────────
logging:
  level: "info"   # "debug", "info", "warn", "error"
  file: ""        # Log file path (empty = stdout only)
  max_size: 10    # Max log file size (MB)

Environment Variables
| Variable | Description | Example |
|----------|-------------|---------|
| FLUXTUI_CONFIG | Path to config file | ~/.fluxtui/config.yaml |
| FLUXTUI_DATA_DIR | Data directory | ~/.fluxtui |
| FLUXTUI_LOG_LEVEL | Logging level | debug |
| OPENAI_API_KEY | OpenAI API key | sk-... |
| GOOGLE_API_KEY | Google AI API key | ... |
| ANTHROPIC_API_KEY | Anthropic API key | sk-ant-... |
Provider Setup
Option 1: FluxProxyAPI (Recommended)
FluxProxyAPI provides access to Gemini models through an OpenAI-compatible API.
# 1. Setup FluxProxyAPI (see FluxProxyAPI docs)
git clone https://github.com/d-init-d/FluxProxyAPI.git
cd FluxProxyAPI
# 2. Configure and run
# Edit config.yaml with your Google Antigravity credentials
go run ./cmd/server
# 3. FluxOrigin-TUI connects to localhost:8317 automatically

Option 2: Ollama (Local LLM)
# 1. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# 2. Pull a Chinese-capable model
ollama pull qwen2.5:14b
# 3. Start Ollama server
ollama serve
# 4. Configure FluxOrigin-TUI
# Set providers.custom.enabled: true
# Set providers.custom.base_url: "http://localhost:11434/v1"

Option 3: LM Studio
# 1. Download LM Studio from https://lmstudio.ai
# 2. Load a Chinese-capable model (Qwen, Yi, etc.)
# 3. Start local server (default: http://localhost:1234/v1)
# 4. Configure FluxOrigin-TUI
# Set providers.custom.enabled: true
# Set providers.custom.base_url: "http://localhost:1234/v1"

⌨️ Keyboard Shortcuts
Global Navigation
| Key | Action | Context |
|-----|--------|---------|
| h | Go to Home | Anywhere |
| t | Go to Translate | Anywhere |
| s | Go to Settings | Anywhere |
| ? | Toggle Help | Anywhere |
| q | Quit / Go back | Anywhere |
| Ctrl+C | Force quit | Anywhere |
Home Screen
| Key | Action |
|-----|--------|
| ↑ / k | Move up |
| ↓ / j | Move down |
| Enter / Space | Select action |
| t | Quick: Start translation |
| o | Quick: Open file |
| r | Quick: Resume session |
File Picker
| Key | Action |
|-----|--------|
| ↑ / ↓ | Navigate files |
| Enter | Select / Open directory |
| Backspace | Go to parent directory |
| . | Toggle hidden files |
| ~ | Go to home directory |
| Esc | Cancel and close |
Translation Screen
| Key | Action |
|-----|--------|
| Space | Pause / Resume translation |
| Esc | Stop (saves progress) |
| p | Pause / Resume |
| [ | View previous chunk |
| ] | View next chunk |
| r | Retry current chunk |
| s | Skip current chunk |
| g | View/Edit glossary |
| v | Toggle preview mode |
Settings Screen
| Key | Action |
|-----|--------|
| ↑ / ↓ | Navigate settings |
| ← / → / Tab | Switch section |
| Enter | Edit setting |
| Ctrl+S | Save changes |
| Ctrl+T | Test provider connection |
| Esc | Discard and go back |
Help Screen
| Key | Action |
|-----|--------|
| ↑ / k | Scroll up |
| ↓ / j | Scroll down |
| Home / g | Go to top |
| End / G | Go to bottom |
| Esc / q / ? | Close help |
🏗️ Architecture
FluxOrigin-TUI follows a clean architecture with separation of concerns:
┌─────────────────────────────────────────────────────────────────────────────┐
│ FluxOrigin-TUI │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ TUI Layer (Bubbletea) │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │
│ │ │ Home │ │ Translate│ │ Settings │ │ Help │ │ │
│ │ │ Screen │ │ Screen │ │ Screen │ │ Screen │ │ │
│ │ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │ │
│ └────────┼─────────────┼─────────────┼─────────────┼──────────────────┘ │
│ │ │ │ │ │
│ ┌────────┴─────────────┴─────────────┴─────────────┴──────────────────┐ │
│ │ Core Services │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │
│ │ │ File │ │ Translate│ │ Session │ │ Chunker │ │ │
│ │ │ Service │ │ Service │ │ Service │ │ │ │ │
│ │ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │ │
│ └────────┼─────────────┼─────────────┼─────────────┼──────────────────┘ │
│ │ │ │ │ │
│ ┌────────┴─────────────┴─────────────┴─────────────┴──────────────────┐ │
│ │ Provider Layer │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │
│ │ │FluxProxy │ │ OpenAI │ │ Ollama │ │ Custom │ │ │
│ │ │ API │ │ Client │ │ Client │ │ Client │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘

Translation Pipeline
Input File (.txt/.epub)
│
▼
┌─────────────────┐
│ 1. Parse File │ → Detect encoding, extract text
└────────┬────────┘
│
▼
┌─────────────────┐
│ 2. Sample Text │ → HEAD(4000) + MID(3000) + TAIL(3000)
└────────┬────────┘
│
▼
┌─────────────────┐
│ 3. Detect Genre │ → KIEMHIEP, NGONTINH, TIENHIEP, etc.
└────────┬────────┘
│
▼
┌─────────────────┐
│ 4. Auto Glossary│ → Extract names, places, terms
└────────┬────────┘
│
▼
┌─────────────────┐
│ 5. Smart Chunk │ → ~2000 chars, split at sentence boundaries
└────────┬────────┘
│
▼
┌─────────────────────────────────────────┐
│ 6. Translate Chunks (Sequential) │
│ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │Chunk│→ │Chunk│→ │Chunk│→ │Chunk│ │
│ │ 1 │ │ 2 │ │ 3 │ │ N │ │
│ └─────┘ └─────┘ └─────┘ └─────┘ │
│ (with context from previous chunks) │
└──────────────────┬──────────────────────┘
│
▼
┌─────────────────┐
│ 7. Merge Output │ → output_vi.txt
└─────────────────┘

For detailed technical documentation, see BLUEPRINT.md.
🛠️ Development
Project Structure
FluxOrigin-TUI/
├── cmd/
│ └── fluxtui/
│ └── main.go # Application entry point
├── internal/
│ ├── app/
│ │ └── app.go # Main Bubbletea model
│ ├── ui/
│ │ ├── screens/ # Screen implementations
│ │ │ ├── home.go
│ │ │ ├── translate.go
│ │ │ ├── settings.go
│ │ │ ├── help.go
│ │ │ └── filepicker.go
│ │ ├── components/ # Reusable UI components
│ │ │ ├── menu.go
│ │ │ ├── layout.go
│ │ │ └── ...
│ │ └── styles/
│ │ └── theme.go # Lipgloss styles
│ ├── core/ # Core business logic
│ │ ├── translator.go # Main translation orchestrator
│ │ ├── chunker.go # Text chunking algorithm
│ │ ├── genre.go # Genre detection
│ │ ├── glossary.go # Glossary management
│ │ └── file_service.go # File parsing
│ ├── provider/ # AI provider implementations
│ │ ├── provider.go # Interface definition
│ │ ├── openai.go # OpenAI-compatible client
│ │ ├── ollama.go # Ollama client
│ │ └── factory.go # Provider factory
│ ├── session/ # Session management
│ │ ├── session.go # Session model
│ │ └── store.go # Persistence
│ └── config/
│ └── config.go # Configuration management
├── config.example.yaml # Example configuration
├── go.mod
├── go.sum
├── Makefile
├── README.md
├── BLUEPRINT.md # Technical architecture
└── ROADMAP.md # Development roadmap

Build Instructions
# Development build
go build -o fluxtui ./cmd/fluxtui
# Production build (optimized)
go build -ldflags="-s -w" -o fluxtui ./cmd/fluxtui
# Build for all platforms
make build-all
# Or manually:
# Linux
GOOS=linux GOARCH=amd64 go build -o fluxtui-linux-amd64 ./cmd/fluxtui
# macOS
GOOS=darwin GOARCH=amd64 go build -o fluxtui-darwin-amd64 ./cmd/fluxtui
GOOS=darwin GOARCH=arm64 go build -o fluxtui-darwin-arm64 ./cmd/fluxtui
# Windows
GOOS=windows GOARCH=amd64 go build -o fluxtui-windows-amd64.exe ./cmd/fluxtui

Run Tests
# Run all tests
go test ./...
# Run with coverage
go test -cover ./...
# Run specific package tests
go test ./internal/core/...
go test ./internal/provider/...
# Verbose output
go test -v ./...
# Race detection
go test -race ./...

Development Commands
# Run in development mode
go run ./cmd/fluxtui
# With custom config
go run ./cmd/fluxtui --config /path/to/config.yaml
# Enable debug logging
FLUXTUI_LOG_LEVEL=debug go run ./cmd/fluxtui

🔧 Troubleshooting
Common Issues
Issue: "Provider not reachable"
Symptoms: Status shows "Provider not reachable (check API key)"
Solutions:
- Check if your AI provider is running:
  # For FluxProxyAPI
  curl http://localhost:8317/v1/models
  # For Ollama
  curl http://localhost:11434/api/tags
- Verify API key in config.yaml
- Check firewall/network settings
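The same reachability check can be scripted. `checkProvider` below is a hypothetical helper (not part of the app) that treats any HTTP response as "reachable" and a transport error (connection refused, timeout, DNS failure) as "not reachable":

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkProvider sends a GET to the provider's model-listing endpoint.
// Any HTTP status means the server answered; only a transport error
// counts as unreachable. A 401 is reported separately since it usually
// means a bad or missing API key rather than a network problem.
func checkProvider(modelsURL string) error {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get(modelsURL)
	if err != nil {
		return fmt.Errorf("provider not reachable: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusUnauthorized {
		return fmt.Errorf("provider reachable but rejected credentials (HTTP 401)")
	}
	return nil
}

func main() {
	// Port 1 is almost never listening, so this exercises the failure path.
	if err := checkProvider("http://127.0.0.1:1/v1/models"); err != nil {
		fmt.Println(err)
	}
}
```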
Issue: "Failed to read file"
Symptoms: Error when selecting a book file
Solutions:
- Check file permissions: ls -la yourfile.txt
- Verify file encoding (UTF-8, GB2312, Big5 supported)
- Ensure file is not corrupted: file yourfile.txt
Issue: Translation is slow
Solutions:
- Reduce chunk_size in config (try 1500 instead of 2000)
- Check your AI provider's response time
- Use a faster model (e.g., gemini-2.0-flash instead of gemini-2.5-pro)
- Enable concurrent chunks if supported by provider
Issue: "Out of memory" on large files
Solutions:
- Increase system swap space
- Reduce chunk_size to process smaller chunks
- Close other applications
Issue: Session not saving
Solutions:
- Check disk space: df -h
- Verify write permissions to ~/.fluxtui/sessions/
- Check that the session.storage_dir path in config is valid
FAQ
Q: What file formats are supported?
A: Currently .txt and .epub files are supported. More formats coming soon.
Q: Can I translate to languages other than Vietnamese?
A: Yes! Change translation.target_lang in config. However, prompts are optimized for Vietnamese.
Q: How do I update the glossary during translation?
A: Press g during translation to view/edit glossary. Changes apply to future chunks.
Q: Can I use multiple AI providers?
A: Yes, configure multiple providers in config.yaml and switch with default_provider.
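A provider factory along these lines maps the default_provider key onto a concrete client. The real implementation lives in internal/provider/factory.go; the types here are simplified stand-ins, not the actual interface:

```go
package main

import "fmt"

// Provider is a minimal stand-in for the translation-provider interface
// (the real one is defined in internal/provider/provider.go).
type Provider interface {
	Name() string
}

type openAIProvider struct{ baseURL string }

func (p openAIProvider) Name() string { return "openai" }

type customProvider struct{ baseURL string }

func (p customProvider) Name() string { return "custom" }

// newProvider selects a concrete client from the config key, the same
// role the default_provider setting plays in config.yaml.
func newProvider(key, baseURL string) (Provider, error) {
	switch key {
	case "openai":
		return openAIProvider{baseURL}, nil
	case "custom":
		return customProvider{baseURL}, nil
	default:
		return nil, fmt.Errorf("unknown provider %q", key)
	}
}

func main() {
	p, err := newProvider("custom", "http://localhost:11434/v1")
	if err != nil {
		panic(err)
	}
	fmt.Println("selected provider:", p.Name())
}
```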
Q: Is my data sent to external servers?
A: Only if using cloud providers (OpenAI, Google, Anthropic). Local providers (Ollama, LM Studio) keep data local.
Q: How do I backup my sessions?
A: Copy ~/.fluxtui/sessions/ directory to your backup location.
🤝 Contributing
We welcome contributions! Here's how to get started:
Getting Started
- Fork the repository on GitHub
- Clone your fork:
git clone https://github.com/YOUR_USERNAME/FluxOrigin-TUI.git
cd FluxOrigin-TUI
- Create a branch:
git checkout -b feature/your-feature-name
Development Workflow
- Make your changes
- Run tests: go test ./...
- Format code: go fmt ./...
- Commit with clear messages: git commit -m "feat: add support for PDF files"
- Push and create a Pull Request
Contribution Guidelines
- Follow Go best practices and idioms
- Add tests for new features
- Update documentation as needed
- Ensure go vet and golint pass
- Keep commits atomic and well-described
Reporting Issues
When reporting bugs, please include:
- FluxOrigin-TUI version
- Go version (go version)
- Operating system
- Steps to reproduce
- Expected vs actual behavior
- Relevant logs (with debug: true in config)
📄 License
MIT License - See LICENSE for details.
Copyright (c) 2025 FluxOrigin Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

🔗 Related Projects
- FluxProxyAPI - Proxy server for Google Antigravity OAuth
- FluxOrigin - Original Flutter desktop app
- n8n_book_translator - n8n workflow version
💬 Support
- GitHub Issues: Report bugs or request features
- Discussions: Ask questions and share ideas
