term-v0
v0.1.7
Generate terminal UIs (TUIs) from simple text prompts - an AI-powered terminal interface builder

✨ Features
- Natural Language to TUI: Describe what you want, get a working terminal app instantly
- Multiple AI Providers: Anthropic Claude, OpenAI GPT, Google Gemini, and local Ollama
- App Library: Save, refine, and reuse your generated apps
- Interactive Refinement: Iterate on your apps with follow-up prompts
- Zero Config: Works out of the box if you have API keys in your environment
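The "Zero Config" pickup amounts to checking which provider variables are present in the environment. A minimal sketch of how such detection could work — the precedence order and defaults below are illustrative assumptions, not term-v0's documented behavior:

```javascript
// Sketch: pick an LLM provider from environment variables.
// Precedence order and default models are assumptions for illustration.
function detectProvider(env) {
  if (env.ANTHROPIC_API_KEY) {
    return { provider: "anthropic", model: env.ANTHROPIC_MODEL || "claude-3-5-sonnet-20241022" };
  }
  if (env.OPENAI_API_KEY) {
    return { provider: "openai", model: env.OPENAI_MODEL || "gpt-4o" };
  }
  if (env.GEMINI_API_KEY) {
    return { provider: "google", model: env.GEMINI_MODEL || "gemini-1.5-flash" };
  }
  if (env.OLLAMA_MODEL) {
    return { provider: "ollama", model: env.OLLAMA_MODEL };
  }
  return null; // no provider configured
}

// Example: an environment with only a Gemini key set.
console.log(detectProvider({ GEMINI_API_KEY: "x" }));
// → { provider: 'google', model: 'gemini-1.5-flash' }
```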
🎯 Perfect For
- Quick Prototypes: "Make me a file browser with search"
- System Monitoring: "Show me which ports are active"
- Data Visualization: "Create a dashboard for my server stats"
- Task Management: "Build a todo list with categories"
- Database Tools: "Diff these two SQL tables"
- Container Management: "Help me manage my Docker containers"
📦 Installation
Install Globally (Recommended)

```shell
npm install -g term-v0
```

Use with npx (No Installation)

```shell
npx term-v0 "create a system monitor"
```

Install from Source

```shell
# Clone the repository
cd term-v0
npm install
npm link  # Make it available globally
```

🔑 Setup API Keys
term-v0 requires an LLM provider. Choose one of the following:
Option 1: Anthropic Claude (Recommended)
Models: claude-sonnet-4-20250514, claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022

```shell
# Get your API key from https://console.anthropic.com
export ANTHROPIC_API_KEY="sk-ant-..."
# Optional: Choose a specific model
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"
```

Option 2: OpenAI GPT
Models: gpt-4o, gpt-4o-mini, gpt-4-turbo, o1, o1-mini, o3-mini

```shell
# Get your API key from https://platform.openai.com
export OPENAI_API_KEY="sk-..."
# Optional: Choose a specific model
export OPENAI_MODEL="gpt-4o"
# Optional: Use a custom API endpoint
export OPENAI_BASE_URL="https://..."
```

Option 3: Google Gemini
Models: gemini-2.0-flash-exp, gemini-1.5-pro, gemini-1.5-flash

```shell
# Get your API key from https://aistudio.google.com
export GEMINI_API_KEY="your-api-key"
# Optional: Choose a specific model
export GEMINI_MODEL="gemini-1.5-pro"
```

Option 4: Ollama (Local & Free)
Run models locally without API costs.

```shell
# Install Ollama from https://ollama.ai
# Then pull a model:
ollama pull llama3.2
# Set the model
export OLLAMA_MODEL="llama3.2"
```

Setting Environment Variables Permanently
macOS/Linux:

```shell
# Add to ~/.bashrc, ~/.zshrc, or ~/.profile
echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.zshrc
source ~/.zshrc
```

Windows:

```powershell
# Run in PowerShell as Administrator
[System.Environment]::SetEnvironmentVariable('ANTHROPIC_API_KEY','your-key','User')
```

🚀 Usage
Basic Usage
```shell
# Generate a new app
term-v0 "create a file explorer with preview"
# The app will be generated, saved, and launched automatically
```

Configure Your Provider
```shell
# Set up or change your LLM provider
term-v0 --configure
# List available providers
term-v0 --list-providers
# Override for a single run
TERM_V0_PROVIDER=google TERM_V0_MODEL=gemini-1.5-pro term-v0 "build a dashboard"
# Override via flags
term-v0 --provider openai --model gpt-4o "build a todo app"
```

Manage Your Apps
```shell
# List all your generated apps
term-v0 --list
# Run a previously generated app
term-v0 --run-app my-file-explorer
# Refine an existing app
term-v0 --refine my-file-explorer
```

Advanced Options
```shell
# Use a specific UI library (blessed or ink)
term-v0 --library blessed "create a dashboard"
# Enable iterative refinement
term-v0 --refine "make the UI more colorful"
# Control refinement iterations
term-v0 --refine-iters 3 "optimize the layout"
# Set max error fix attempts
term-v0 --fix-iters 5 "build a complex app"
```

This provides a visual interface for:
- Managing your app library
- Running and testing apps
- Refining existing apps
- Configuring providers
📁 Where Apps Are Saved
Your generated apps are saved in:
- macOS/Linux: ~/.term-v0/apps/
- Windows: %USERPROFILE%\.term-v0\apps\
Each app includes:
- app.js - The generated application code
- design.txt - The design document
- metadata.json - App information and prompt history
⚠️ Security Notice
term-v0 generates and executes code based on your prompts. While the tool includes safety measures, you should still:
- Review generated code before running in production
- Use in isolated environments when testing
- Be specific with your prompts to get expected behavior
- Don't run on systems with sensitive data without review
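One lightweight way to follow the review advice is to grep a generated app.js for risky calls before launching it. A hypothetical helper — the pattern list is illustrative and deliberately incomplete, not a real safety check:

```javascript
// Sketch: flag potentially risky calls in generated code before running it.
// The pattern list is an illustration, not a safety guarantee.
const RISKY = [/child_process/, /\beval\s*\(/, /fs\.(rm|unlink|writeFile)/, /process\.env/];

function riskyLines(source) {
  return source.split("\n").flatMap((line, i) =>
    RISKY.some((re) => re.test(line)) ? [{ line: i + 1, text: line.trim() }] : []
  );
}

// Example: a snippet that shells out gets flagged on line 1.
const sample = 'const { execSync } = require("child_process");\nconsole.log("hello");';
console.log(riskyLines(sample));
```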
🤝 Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
📄 License
MIT License - see LICENSE file for details
