langterm
v1.3.2
Secure CLI tool that translates natural language to shell commands using local AI models via Ollama, with project memory system, reusable command templates (hooks), MCP (Model Context Protocol) support, and dangerous command detection

Langterm
Langterm translates natural language to executable shell commands using local AI models through Ollama. It works on Windows, macOS, and Linux.
🚀 New in v1.2.0: a Project Memory System for location-aware context, a User-Defined Hooks system for reusable command templates, and MCP (Model Context Protocol) support with intelligent routing that can execute MCP tools directly, generate terminal commands, or combine both approaches for the best result.
✨ v1.0.1: Enhanced security with dangerous command detection and confirmation system to keep your system safe.
Prerequisites
- Node.js 18+ - Required to run langterm
- Ollama - Install from https://ollama.com
- An Ollama model - Pull one before first use:
ollama pull codestral:22b
# or, for a lighter/faster model:
ollama pull deepseek-coder:6.7b
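Once Ollama is installed and a model pulled, you can sanity-check that the Ollama API is reachable before running langterm. This probe is a convenience sketch (not part of langterm) that uses Ollama's /api/tags endpoint and the OLLAMA_URL variable described in the Docker/Remote section below:

```shell
# Probe the Ollama HTTP API; OLLAMA_URL falls back to Ollama's default address.
ollama_url() {
  echo "${OLLAMA_URL:-http://localhost:11434}"
}

if curl -sf "$(ollama_url)/api/tags" > /dev/null 2>&1; then
  echo "Ollama reachable at $(ollama_url)"
else
  echo "Cannot reach Ollama at $(ollama_url) - is 'ollama serve' running?"
fi
```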
Docker/Remote Ollama Setup
If Ollama is running in Docker or on a remote server:
# Set the Ollama URL environment variable
export OLLAMA_URL=http://host.docker.internal:11434 # For Docker
export OLLAMA_URL=http://192.168.1.100:11434 # For remote server

Installation
Using npx (no installation needed)
npx langterm

Global installation
npm install -g langterm

Usage
First Run
When you run langterm for the first time, it will:
- Check if Ollama is running
- Show available models
- Let you select your preferred model
- Save your choice for future use
Basic Usage
# Interactive mode
langterm
# Direct command
langterm list all files larger than 100MB
# With quotes for complex commands
langterm "find all Python files modified in the last week"

Examples
langterm show disk usage sorted by size
langterm find process running on port 8080
langterm create tar archive excluding node_modules
langterm "convert all PNG images to JPG in current directory"
langterm extract audio from video.mp4 as mp3

Options
- langterm --setup - Reconfigure model selection
- langterm --model <name> - Use a specific model for this run
- langterm --mcp-setup - Configure MCP servers for enhanced context
- langterm --mcp-status - Show MCP connection status
- langterm --mcp-enable / --mcp-disable - Toggle MCP integration
- langterm --hooks-create <name> - Create a new hook template
- langterm --hooks-list - List all available hooks
- langterm --hooks-edit <name> - Edit an existing hook
- langterm --hooks-delete <name> - Delete a hook
- langterm --hooks-search <term> - Search hooks by name or content
- langterm /hookname - Use a hook template
- langterm --remember <info> - Save project information
- langterm --recall - Show saved project memory
- langterm --forget - Clear project memory
- langterm --memory-status - Show memory status
- langterm --help - Show help message
- langterm --version - Show version
How It Works
- You describe what you want in plain English
- Project Memory (automatic): Langterm loads project-specific context and learning patterns
- Enhanced Context (if MCP enabled): Langterm gathers relevant context from connected servers
- Langterm sends your request + context to your local Ollama model
- The AI generates the appropriate shell command with enhanced accuracy
- Security check: Langterm analyzes the command for dangerous patterns
- You review the command and confirm execution based on the security level
- Learning (automatic): Successful commands are recorded for future reference
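Steps 2–4 amount to assembling a prompt from the gathered context plus your request, then sending it to the local model. The sketch below is illustrative only: the prompt wording, model name, and the use of Ollama's /api/generate endpoint are assumptions, not langterm's actual internals:

```shell
# Combine gathered context with the user's request into a single prompt.
build_prompt() {
  context="$1"; request="$2"
  printf 'Context: %s\nTranslate this request into one shell command: %s' \
    "$context" "$request"
}

prompt="$(build_prompt 'Node.js project, build system: npm' 'run tests')"
echo "$prompt"

# The prompt would then go to the local model, e.g. via Ollama's HTTP API:
# curl -s http://localhost:11434/api/generate \
#   -d "{\"model\": \"codestral:22b\", \"prompt\": \"$prompt\", \"stream\": false}"
```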
MCP (Model Context Protocol) Support
What is MCP?
MCP allows Langterm to connect to external servers for enhanced context awareness. NEW in v1.2.0: MCP-first intelligent routing that prioritizes MCP tools when available:
- MCP-First Approach: Always attempts to match user requests with available MCP tools first
- Intelligent Matching: Analyzes tool capabilities against user input without hardcoded patterns
- Terminal Fallback: Only generates terminal commands when no suitable MCP tools are found
- Smart Tool Selection: Automatically selects the most relevant MCP tools based on request
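The routing decision above can be pictured as: scan the available MCP tools for a relevant match, and only fall back to generating a terminal command when none fits. The tool names and the naive keyword match below are illustrative assumptions; langterm's real matching analyzes tool capabilities rather than names:

```shell
# Hypothetical, simplified sketch of MCP-first routing (not langterm's code).
AVAILABLE_TOOLS="read_file list_directory git_status"

route_request() {
  request="$1"
  for tool in $AVAILABLE_TOOLS; do
    keyword="${tool%%_*}"              # crude relevance check: first word of the tool name
    case "$request" in
      *"$keyword"*) echo "mcp:$tool"; return ;;
    esac
  done
  echo "terminal"                      # no suitable MCP tool found; generate a command
}

route_request "read the package.json file"     # → mcp:read_file
route_request "find files larger than 100MB"   # → terminal
```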
Quick Start with MCP
Setup MCP servers:
langterm --mcp-setup

Check connection status:
langterm --mcp-status

Experience intelligent routing:
# File operations → direct MCP tool execution
langterm "read the package.json file"
# Result: uses the MCP filesystem tool directly (with confirmation)

# System operations → terminal command generation
langterm "find files larger than 100MB"
# Result: generates an optimized find command

# Complex tasks → hybrid approach
langterm "analyze Python imports in project files"
# Result: MCP finds files + generates analysis commands
Example MCP Benefits
Before MCP:
$ langterm "show me the main configuration file"
# Result: ls *.conf *.config

With MCP (filesystem server providing project context):
$ langterm "show me the main configuration file"
# Enhanced context: "Project uses .env, package.json, docker-compose.yml"
# Result: cat package.json

Supported MCP Server Types
- Local servers (stdio): Filesystem access, Git integration, project tools
- Remote servers (HTTP/SSE): Web APIs, cloud services, databases
Security with MCP
- ✅ Same security validation: All MCP-enhanced commands go through identical security checks
- ✅ No privilege escalation: MCP provides context, not execution permissions
- ✅ User control: You configure which servers to connect to
- ✅ Graceful fallback: If MCP fails, commands work without enhanced context
Location-Based Memory System
NEW in v1.2.0: Langterm saves context for any directory, enhancing commands with location-specific information.
How Memory Works
Langterm:
- Saves memory per directory - Each location can have its own .langterm-memory.json
- Works anywhere - Not limited to "projects", works in any directory
- Learns from successful commands - Improves future suggestions
- Auto-detects context - Recognizes project types, languages, and build systems
Quick Start with Memory
Save information for current directory:
langterm --remember "This is a React TypeScript project using Vite and Tailwind"

Use enhanced commands:
# Memory automatically provides context about your React/TypeScript setup
langterm "run tests"
# Result: npm test (or yarn test, based on your project)
langterm "build for production"
# Result: npm run build (optimized for your Vite setup)

View your project memory:
langterm --recall
Automatic Learning
Langterm learns from your usage patterns:
- Successful commands are recorded for future reference
- Similar requests get suggestions from your history
- Project preferences are detected and remembered
Example Memory Benefits
Before Memory:
$ langterm "run tests"
# Generic result: npm test

With Memory (React TypeScript project):
$ langterm "run tests"
# Enhanced context: "React TypeScript project using Vite, prefers npm"
# Smart result: npm run test:coverage

Memory Commands
# Save information about your project
langterm --remember "Uses microservices architecture with Docker"
# View all saved memory for current project
langterm --recall
# Clear memory for current project
langterm --forget
# Check memory status across projects
langterm --memory-status

What Gets Detected Automatically
- Project Type: Node.js, Python, Rust, Go, Java, PHP, Ruby, Docker, etc.
- Primary Language: Based on file extensions in your project
- Build System: npm, yarn, cargo, make, gradle, maven, etc.
- Framework Patterns: React, Vue, Express, FastAPI, etc. (from file structure)
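Detection of this kind typically keys off marker files in the directory. A simplified illustration of the idea (langterm's actual detection covers more ecosystems and also inspects file extensions):

```shell
# Toy marker-file project detection, for illustration only.
detect_project_type() {
  dir="$1"
  if   [ -f "$dir/package.json" ]; then echo "Node.js"
  elif [ -f "$dir/Cargo.toml" ]; then echo "Rust"
  elif [ -f "$dir/go.mod" ]; then echo "Go"
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/requirements.txt" ]; then echo "Python"
  elif [ -f "$dir/pom.xml" ]; then echo "Java"
  else echo "unknown"
  fi
}

demo_dir="$(mktemp -d)"
touch "$demo_dir/package.json"
detect_project_type "$demo_dir"   # → Node.js
```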
Memory Safety
- ✅ Local storage only: Memory files stay in your project directories
- ✅ Size limits: Maximum 50KB per project to prevent bloat
- ✅ Security scanning: All memory content is validated for safety
- ✅ Easy cleanup: Delete .langterm-memory.json to reset
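For illustration, a .langterm-memory.json might look something like the following; the field names and structure here are hypothetical, as the actual schema is internal to langterm and may differ:

```shell
# Write a hypothetical memory file to a temp directory; fields are illustrative.
memfile="$(mktemp -d)/.langterm-memory.json"
cat > "$memfile" <<'EOF'
{
  "notes": ["This is a React TypeScript project using Vite and Tailwind"],
  "detected": { "projectType": "Node.js", "buildSystem": "npm" },
  "successfulCommands": ["npm run test:coverage"]
}
EOF
cat "$memfile"
```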
User-Defined Hooks
NEW in v1.2.0: Create reusable command templates stored as markdown files that you can invoke with simple /hookname syntax.
How Hooks Work
Instead of typing the same long descriptions repeatedly, create hooks that store your frequently-used commands:
- Create a hook with a descriptive name
- Store natural language in a .md file
- Invoke with /hookname syntax
- Langterm processes the hook content as if you typed it
Quick Start with Hooks
Create your first hook:
langterm --hooks-create backup
# Enter: "create a compressed backup of the src directory with today's date"

Use your hook:
langterm /backup
# Same as: langterm "create a compressed backup of the src directory with today's date"

List all hooks:
langterm --hooks-list
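Since hooks are plain markdown files under ~/.langterm/hooks/ (see Hook Storage below), you can also create one by hand. This snippet writes the same backup hook directly:

```shell
# Create the backup hook manually by writing its markdown file.
hooks_dir="$HOME/.langterm/hooks"
mkdir -p "$hooks_dir"
printf '%s\n' "create a compressed backup of the src directory with today's date" \
  > "$hooks_dir/backup.md"
cat "$hooks_dir/backup.md"
# langterm /backup now behaves as if you typed that description
```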
Hook Management
# Create a new hook template
langterm --hooks-create deploy
langterm --hooks-create git-status
langterm --hooks-create cleanup
# Edit existing hooks
langterm --hooks-edit backup
# Delete hooks you no longer need
langterm --hooks-delete old-hook
# Search through your hooks
langterm --hooks-search git

Hook Examples
backup.md:
create a compressed backup of the src/ directory with today's date in the filename

deploy.md:
build the project, run tests, and deploy to production server using rsync

git-status.md:
show git status with information about uncommitted changes and current branch

cleanup.md:
remove all .tmp files, clear npm cache, and delete node_modules in nested directories

monitor.md:
show system resource usage including CPU, memory, and disk space with human-readable format

Usage Examples
# Instead of typing this every time:
langterm "create a compressed backup of the src/ directory with today's date in the filename"
# Create a hook and use it:
langterm --hooks-create backup
langterm /backup
# More examples:
langterm /deploy # Deploy your application
langterm /git-status # Check git status with details
langterm /cleanup # Clean up temporary files
langterm /monitor # Check system resources

Hook Storage
Hooks are stored as markdown files in ~/.langterm/hooks/:
~/.langterm/hooks/
├── backup.md
├── deploy.md
├── git-status.md
└── cleanup.md

Hook Safety
- ✅ Name validation: Only alphanumeric, dashes, and underscores allowed
- ✅ Content validation: Basic checks for dangerous patterns in templates
- ✅ File size limits: Maximum 10,000 characters per hook
- ✅ Security scanning: Hook content is scanned for potentially dangerous patterns
Security Features
Langterm includes comprehensive safety measures to protect against dangerous commands:
🔴 Dangerous Commands
Commands that could seriously damage your system require typing "YES I AM SURE" exactly:
- File system destruction (rm -rf /, format C:)
- Fork bombs and infinite loops
- System file corruption
- Permission changes that compromise security
- Network security bypasses
- System shutdown commands
🟡 Warning Commands
Potentially risky commands require typing "yes" to confirm:
- Commands requiring elevated privileges (sudo)
- Recursive force deletions (rm -rf)
- File overwrites (> redirections)
- Executing remote scripts (curl | sh)
🟢 Safe Commands
Normal commands only require pressing Enter to execute:
- File listings, searches, and reads
- Process management
- Archive operations
- Safe file operations
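The three tiers can be sketched as a pattern check over the generated command, applied most-dangerous first. The patterns below are a toy illustration, not langterm's actual detection rules:

```shell
# Toy three-tier classifier: dangerous > warning > safe (illustrative patterns).
classify_command() {
  case "$1" in
    *"rm -rf /"*|*"mkfs"*)                  echo dangerous ;;  # destructive
    sudo\ *|*"rm -rf"*|*"| sh"*|*"| bash"*) echo warning ;;    # risky
    *)                                      echo safe ;;       # everything else
  esac
}

classify_command "rm -rf /"          # → dangerous
classify_command "sudo apt update"   # → warning
classify_command "ls -la"            # → safe
```

Ordering matters here: `rm -rf /` would also match the warning pattern `rm -rf`, so the dangerous patterns must be tested first.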
Example security prompts:
$ langterm "delete everything on my computer"
⚠️ DANGEROUS COMMAND DETECTED ⚠️
Risk: Recursive deletion of root directory
Command: rm -rf /
This command could cause serious damage to your system!
Type "YES I AM SURE" to continue, or anything else to cancel:

Configuration
Langterm saves your model preference in ~/.langtermrc. You can:
- Run langterm --setup to change models
- Edit the file directly
- Use the --model flag to temporarily use a different model
Troubleshooting
"Ollama is not running"
Langterm provides a helpful hint when Ollama isn't running:
❌ Ollama is not running!
💡 Hint: Is Ollama running? Try: ollama serve

Start Ollama with:
ollama serve

"No models found"
Pull a model first:
ollama pull codestral:22b

Command generation issues
- Try being more specific in your description
- Some models work better than others for command generation
- Codestral and DeepSeek Coder models are recommended
Security warnings
- If you see dangerous command warnings, double-check what you're asking for
- The AI might misinterpret your request - review the generated command carefully
- You can always cancel with Ctrl+C if something doesn't look right
Privacy
All processing happens locally on your machine. No data is sent to external servers.
License
MIT
