indirecttek-vibe-engine v2.7.13 · Autonomous Local AI Agent. Vibe Coding v1.0.
# IndirectTek Vibe Engine 🤖⚡

> "Autonomous Local AI Agent. Maximum Badassery."

IndirectTek Vibe Engine turns VS Code into an autonomous development powerhouse. It writes code, creates files, and executes commands, running entirely on your own infrastructure.
## 🚀 Quick Start (Direct Mode)

Prerequisites:

- Ollama installed and running at `http://localhost:11434`.
- Model: `qwen2.5-coder:14b` (recommended). Pull it via `ollama pull qwen2.5-coder:14b`.
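The prerequisites above can be checked from a terminal. This is a minimal sketch that assumes the default port and Ollama's `/api/tags` endpoint (which lists locally available models); adjust the URL if your server differs:

```shell
# Check whether the Ollama server is reachable on the default port.
if curl -sf --max-time 3 http://localhost:11434/api/tags > /dev/null; then
    echo "Ollama is running"
else
    echo "Ollama is not reachable; start it with 'ollama serve'"
fi

# Fetch the recommended model (a large download on first pull):
# ollama pull qwen2.5-coder:14b
```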
Step 1: Install & Go

- Install "IndirectTek Vibe Engine" from the VS Code Marketplace.
- Open the IndirectTek AI chat panel.
- Type: "Create a hello world python script".
- Important: check your Settings! The default configuration might point to a specific IP (`192.168...`). Change `partnerBot.ollamaUrl` to `http://localhost:11434` (or your server IP) if running locally.

Note: the agent connects to `http://localhost:11434` by default.
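In `settings.json`, the URL override looks like this (the `partnerBot.ollamaUrl` key is the one named above; the value shown is the local default):

```jsonc
{
  // Point the agent at a local Ollama instance instead of a hard-coded LAN IP.
  "partnerBot.ollamaUrl": "http://localhost:11434"
}
```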
## 🛠️ Advanced: Vibe Controller (Optional)

For telemetry, dashboards, and enterprise features, run the standalone Vibe Controller:

- Run: `npx indirecttek-vibe-engine@latest`
- Copy the token it prints.
- In VS Code Settings:
  - Set Use Controller to `true`.
  - Paste the Controller Token.
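As a settings-file sketch, the two Controller options above might look like the following. The key names here are hypothetical (inferred from the extension's `partnerBot.*` naming); use the "Use Controller" and "Controller Token" entries in the Settings UI if they differ:

```jsonc
{
  // Hypothetical key names; verify against the Settings UI.
  "partnerBot.useController": true,
  "partnerBot.controllerToken": "<token copied from the npx command>"
}
```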
## ✨ Key Features (v2.6)

- 100% Data Privacy: your code never leaves your network.
- Permission System ("Yellow Card"): interactive approval for sensitive actions:
  - Shell: `run_command` requires manual approval (Always/Once/Deny).
  - Visual: `generate_image` triggers approval.
  - Files: `create_file`, `edit_file`, and `delete_file` check permissions.
  - Capabilities: command/install/delete and file/delete are gated separately.
- Cybernetic Vision: the agent can scan directories (`LIST_DIR`) to understand your project structure instantly.
- Studio Mode: generate assets using `generate_image` (requires the Fooocus API).
- Intent Router: automatically switches models based on the task (chat vs. refactor vs. plan).
- Autonomous Editing: strict "hunk only" editing prevents massive diffs.
- Working Memory: keeps the goal, last tool, and next step to maintain momentum.
## ⚙️ Configuration (The "Router")

You can map distinct models to different "intents" in your `settings.json`:

```jsonc
{
  "partnerBot.models.fast": "qwen2.5:7b",           // Chat, explanations
  "partnerBot.models.default": "qwen2.5-coder:14b", // Refactoring, editing
  "partnerBot.models.deep": "qwen2.5:32b",          // Planning, complex logic
  "partnerBot.allowedCommands": ["npm install", "cd", "ls", "grep"], // Whitelist
  "partnerBot.allowDangerousCommands": false,       // Override the hard-deny list (rm/sudo/etc.)
  "partnerBot.toolEnforcement": "soft"              // off | soft | strict
}
```

The agent automatically routes "fix this" requests to the Default (coder) model and casual chat to the Fast model.
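All three routed models must exist locally before the router can switch between them. They can be fetched with Ollama, sketched here with the actual pull commented out since the downloads are large:

```shell
# Pull every model referenced by the router config above.
for model in qwen2.5:7b qwen2.5-coder:14b qwen2.5:32b; do
    echo "Pulling $model ..."
    # ollama pull "$model"   # uncomment once Ollama is installed
done
```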
## 🚨 Troubleshooting

- "Command 'cd' not in allowed list": update to v2.6.39+.
- Image generation fails: ensure Fooocus is running and `partnerBot.fooocusUrl` is correct.
- Connection refused: check your `ollamaUrl`. If using WSL or Docker, use the host IP.
- New VSIX not taking effect: uninstall and reinstall the VSIX (reloading alone may keep the old extension host).
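For the WSL case, one common way to find the Windows host's IP from inside WSL2 is the nameserver entry in `/etc/resolv.conf` (this is a sketch of a well-known workaround, not an extension feature; networking setups vary):

```shell
# From inside WSL2, the Windows host is usually the nameserver in
# /etc/resolv.conf; use that IP in ollamaUrl instead of localhost.
HOST_IP=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf 2>/dev/null)
echo "Try: http://${HOST_IP:-<host-ip>}:11434"
```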
## 📜 License

MIT License. Built with ❤️ and ☕ by IndirectTek.
