Enfuse Sidecar - AI-Powered App Builder
A local-first AI development assistant that generates and executes code using your on-premises LLM infrastructure.
🚀 Quick Start
# Install globally from npm
npm install -g @enfuseio/sidecar
# Or install from source
cd apps/sidecar
npm install
npm link
# Initialize a project
mkdir myproject && cd myproject
enfuse init
# Describe what you want to build
enfuse "Create an Express API with user authentication"
# Review and approve the plan
enfuse plan approve -a
# Execute and auto-commit
enfuse plan execute --commit
✨ Features
Core Features
- 🤖 Local LLM Integration - Uses your local LLM (default: localhost:8000); see the call sketch at the end of this section
- 📋 Plan Generation - AI generates structured, executable plans
- ✅ Step Approval - Review diffs before any changes
- 📁 File Operations - Creates and modifies files automatically
- 🔧 Command Execution - Runs npm install, tests, etc.
- 📝 Git Integration - Auto-commits changes after execution
Nice-to-Have Features
- 💬 Interactive Mode - REPL with continuous chat
- 🔄 Plan Refinement - Iterate on plans with feedback
- 🧪 Test Running - Integrated test execution
- 🖥️ Live Preview - Start dev servers automatically
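The localLlmUrl shown in the configuration below ends in /v1, and enfuse status reports a model in the openai/ namespace, which suggests an OpenAI-compatible server. As a hedged sketch of the kind of request Sidecar makes against that endpoint (the payload and client code here are assumptions, not the published implementation):

```ts
// Sketch only: assumes the server at ENFUSE_LLM_URL speaks the
// OpenAI-compatible chat-completions API (suggested by the /v1 path).
const LLM_URL = process.env.ENFUSE_LLM_URL ?? "http://localhost:8000/v1";

async function askLocalLlm(prompt: string): Promise<string> {
  const res = await fetch(`${LLM_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "openai/gpt-oss-20b", // model reported by `enfuse status`
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`LLM request failed: HTTP ${res.status}`);
  const data = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return data.choices[0].message.content;
}
```

Nothing in this flow leaves your network unless you point ENFUSE_LLM_URL at a remote host.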
📖 Commands
Task Execution
# Generate a plan from task description
enfuse "Create a React dashboard with charts"
# Run in interactive mode (REPL)
enfuse
Plan Management
# View current plan
enfuse plan
# Show detailed plan with diffs
enfuse plan show -v
# Approve all pending steps
enfuse plan approve -a
# Approve specific step
enfuse plan approve 2
# Refine plan with feedback
enfuse plan refine "Use TypeScript instead of JavaScript"
# Execute approved steps
enfuse plan execute
# Execute with auto-commit
enfuse plan execute --commit
# Clear current plan
enfuse plan clear
Development
# Start dev server
enfuse dev --preview
# Run tests
enfuse test
# Run tests in watch mode
enfuse test --watch
# Run tests with coverage
enfuse test --coverage
System
# Check system health
enfuse status
# Debug health check
enfuse debug health -v
# View budget/usage
enfuse budget
# Initialize project
enfuse init
🎮 Interactive Mode
Start interactive mode with just enfuse:
$ enfuse
✓ Connected to openai/gpt-oss-20b
enfuse ➜ How do I add authentication to Express?
[AI responds with explanation]
enfuse ➜ do Create JWT middleware
[Generates executable plan]
enfuse ➜ approve
✓ Approved 3 steps.
enfuse ➜ execute
✓ Execution complete!
enfuse ➜ help
━━━ Interactive Commands ━━━
▸ Chat
• Just type to chat with the AI
• "reset" - Clear conversation history
▸ Tasks
• "do <task>" or "create <task>" - Generate executable plan
• "plan" - View current plan
• "approve" - Approve all pending steps
• "execute" or "run" - Execute approved steps
▸ System
• "status" - Check system health
• "clear" - Clear screen
• "exit" or "quit" - Exit
📁 Project Structure
After enfuse init, your project will have:
myproject/
├── .enfuse/
│   ├── plans/              # Saved plans
│   │   └── current.json
│   ├── cache/              # LLM response cache
│   ├── logs/               # Execution logs
│   └── checkpoints/        # Rollback checkpoints
├── enfuse.config.json      # Configuration
└── .gitignore
⚙️ Configuration
enfuse.config.json:
{
  "environment": "dev",
  "platformApiUrl": "https://platform-dev.enfuse.ai",
  "localLlmUrl": "http://localhost:8000/v1",
  "loraLlmUrl": "http://localhost:8001/v1",
  "logLevel": "info",
  "autoCommit": true,
  "autoRunTests": true,
  "dailyBudgetLimit": 10,
  "sessionBudgetLimit": 5
}
Environment Variables
ENFUSE_LLM_URL=http://localhost:8000/v1
ENFUSE_LORA_URL=http://localhost:8001/v1
ENFUSE_API_URL=https://platform-dev.enfuse.ai
ENFUSE_API_KEY=your-api-key
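How the config file and environment variables combine is not spelled out above; a reasonable assumption (shown here as a hypothetical loader, not Sidecar's actual code) is that the ENFUSE_* variables override the matching enfuse.config.json fields:

```ts
import { readFileSync } from "node:fs";

// Hypothetical loader: read enfuse.config.json, then let ENFUSE_* environment
// variables override the corresponding fields. Precedence is an assumption.
interface EnfuseConfig {
  environment: string;
  platformApiUrl: string;
  localLlmUrl: string;
  loraLlmUrl: string;
  logLevel: string;
  autoCommit: boolean;
  autoRunTests: boolean;
  dailyBudgetLimit: number;
  sessionBudgetLimit: number;
}

function loadConfig(path = "enfuse.config.json"): EnfuseConfig {
  const file = JSON.parse(readFileSync(path, "utf8")) as EnfuseConfig;
  return {
    ...file,
    localLlmUrl: process.env.ENFUSE_LLM_URL ?? file.localLlmUrl,
    loraLlmUrl: process.env.ENFUSE_LORA_URL ?? file.loraLlmUrl,
    platformApiUrl: process.env.ENFUSE_API_URL ?? file.platformApiUrl,
  };
}
```

🔄 Workflow Example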
Create a TypeScript Express API
# 1. Start fresh
mkdir my-api && cd my-api && git init
# 2. Initialize Enfuse
enfuse init -y
# 3. Describe your goal
enfuse "Create an Express API with GET /users endpoint"
# Output:
# ▸ Generated Plan:
# ○ 1. [+] Create package.json
# ○ 2. [cmd] Install Express
# ○ 3. [+] Create index.js
# 4. Want TypeScript? Refine the plan
enfuse plan refine "Use TypeScript with proper types"
# Output:
# ✓ Plan refined!
# ▸ Updated Steps:
# ○ 1. [+] Create package.json (with TS deps)
# ○ 2. [cmd] Install dependencies
# ○ 3. [+] Create tsconfig.json
# ○ 4. [+] Create src/index.ts
# 5. Review the plan
enfuse plan show -v
# 6. Approve all steps
enfuse plan approve -a
# 7. Execute with git commit
enfuse plan execute --commit
# 8. Start the dev server
enfuse dev --preview
# 9. Test your API
curl http://localhost:3000/users
# [{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]
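For the curl output above to appear, the generated src/index.ts needs to end up roughly like the following (illustrative only; the file the model actually writes will vary):

```ts
// Illustrative generated file, not a verbatim Sidecar output.
import express, { Request, Response } from "express";

const app = express();

app.get("/users", (_req: Request, res: Response) => {
  res.json([
    { id: 1, name: "Alice" },
    { id: 2, name: "Bob" },
  ]);
});

app.listen(3000, () => console.log("API listening on http://localhost:3000"));
```

🧪 Supported Test Frameworks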
- vitest (recommended)
- jest
- mocha
- ava
- tap
Auto-detected from package.json.
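The detection logic itself is not documented here; a minimal sketch of how a runner could be picked from package.json (hypothetical, not Sidecar's internals):

```ts
import { readFileSync } from "node:fs";

// Hypothetical detection: return the first known test runner found among
// the project's dependencies or devDependencies.
const KNOWN_RUNNERS = ["vitest", "jest", "mocha", "ava", "tap"];

function detectTestFramework(pkgPath = "package.json"): string | undefined {
  const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return KNOWN_RUNNERS.find((name) => name in deps);
}
```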
🖥️ Supported Project Types
For enfuse dev --preview:
- Next.js - npm run dev on port 3000
- Vite - npm run dev on port 5173
- Create React App - npm start on port 3000
- Express - npm start on port 3000
- FastAPI - uvicorn on port 8000
- Static - serve on port 3000
🏗️ Architecture
┌────────────────────────────────────────────────────────────┐
│                         enfuse CLI                         │
├─────────────┬─────────────┬─────────────┬──────────────────┤
│    init     │    plan     │    test     │       dev        │
│   status    │   refine    │    debug    │      budget      │
└─────────────┴──────┬──────┴─────────────┴──────────────────┘
                     │
        ┌────────────┴────────────┐
        │                         │
┌───────▼───────┐    ┌────────────▼────────────┐
│ Plan Manager  │    │  Plan Generator (LLM)   │
│  (.enfuse/    │    │  → Structured JSON      │
│   plans/)     │    │  → File content         │
└───────┬───────┘    └─────────────────────────┘
        │
┌───────▼───────┐    ┌─────────────────────────┐
│ Plan Executor │───▶│  Local LLM (localhost)  │
│  → Files      │    │  openai/gpt-oss-20b     │
│  → Commands   │    │  ~19-50ms latency       │
│  → Git commit │    └─────────────────────────┘
└───────────────┘

📊 Plan Step Types
| Type | Icon | Description |
|------|------|-------------|
| create | [+] | Create a new file |
| modify | [~] | Modify existing file |
| delete | [-] | Delete a file |
| command | [cmd] | Run shell command |
| test | [test] | Run tests |
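The on-disk shape of a step in .enfuse/plans/current.json is not documented above; the following type is a hypothetical sketch in which only the step types and the approve flag are taken from this README:

```ts
// Hypothetical plan-step shape; field names are illustrative.
type StepType = "create" | "modify" | "delete" | "command" | "test";

interface PlanStep {
  id: number;
  type: StepType;
  description: string;
  path?: string;     // target file for create / modify / delete
  content?: string;  // new file content or diff to apply
  command?: string;  // shell command for command / test steps
  approved: boolean; // set by `enfuse plan approve`
}
```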
🔒 Security
- Local-first: Your code never leaves your network
- On-prem LLM: Uses your local LLM infrastructure
- Approval required: No changes without explicit approval
- Diff preview: See exactly what will change
- Git integration: Every change is trackable
🐛 Troubleshooting
LLM not responding
enfuse status
# Check "Local LLM" status
enfuse debug health -v
# Detailed health check
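If those checks report the LLM as down, you can also probe the endpoint directly. Assuming an OpenAI-compatible server (see Configuration), listing its models is a cheap liveness test; this snippet is a manual check, not part of the CLI:

```ts
// Manual liveness probe for the local LLM endpoint (assumes an
// OpenAI-compatible server exposing GET /models under /v1).
const base = process.env.ENFUSE_LLM_URL ?? "http://localhost:8000/v1";

fetch(`${base}/models`)
  .then((res) => res.json())
  .then((body) => console.log("LLM reachable:", JSON.stringify(body)))
  .catch((err) => console.error("LLM unreachable:", err.message));
```

Plan execution fails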
enfuse plan
# Check step statuses
enfuse plan show -v
# See detailed step info
Reset everything
enfuse plan clear -f
# Clear current plan
rm -rf .enfuse
enfuse init
# Reinitialize
📚 More Resources
🏷️ Version
v0.5.2 - Published to npm as @enfuseio/sidecar
