@dzackgarza/prompt-router
v0.1.0
# prompt-router
An OpenCode plugin that classifies user prompts into routing tiers and rewrites messages, using the `chat.message` hook to transform user text.
## Install

Run these commands to install:

```sh
cd /home/dzack/opencode-plugins/prompt-router
just install
```

Register the plugin via `file:` in your OpenCode config:

```json
{
  "plugin": [
    "file:///home/dzack/opencode-plugins/prompt-router/src/index.ts"
  ]
}
```

View a sample configuration here: `prompt-router/.config/opencode.json`
MCP: None. This package provides a chat-transform hook rather than a tool server.
## Agent Surface

This plugin intercepts chat messages without exposing tool names. It performs these actions:

- Reads the latest user text.
- Classifies input into one of the tiers `model-self`, `knowledge`, `C`, `B`, `A`, or `S`.
- Injects instructions from the canonical response template.
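The tier classification step can be sketched as below. This is an illustrative stand-in only: the real plugin delegates classification to the `llm-run` CLI, and the `Tier` type and the `classifyTier` heuristic here are assumptions for demonstration, not the plugin's actual logic.

```typescript
// The six routing tiers named in the README.
type Tier = "model-self" | "knowledge" | "C" | "B" | "A" | "S";

// Toy heuristic standing in for the LLM-backed classifier:
// route self-referential questions, short factual questions,
// then bucket the rest by prompt length.
function classifyTier(text: string): Tier {
  const trimmed = text.trim();
  if (/\b(who are you|what model)\b/i.test(trimmed)) return "model-self";
  if (trimmed.endsWith("?") && trimmed.length < 80) return "knowledge";
  if (trimmed.length < 200) return "C";
  if (trimmed.length < 600) return "B";
  return "A";
}

classifyTier("What model are you?"); // → "model-self"
```

In the real plugin the classifier's verdict then drives which canonical response template gets injected into the message.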
Dependencies:

- Runtime: Bun, `@opencode-ai/plugin`, `yaml`
- External local assets: `~/ai/prompts/...`
- External local runtime: `~/ai/opencode/.venv`, which provides `llm-run` and `llm-template-render` from the GitHub-backed `llm-runner` and `llm-templating-engine` dependencies declared in `~/ai/opencode/pyproject.toml`
## LLM Integration

prompt-router does not call the legacy `~/ai/scripts/llm` bridge. It shells into the standalone JSON CLIs instead:

- `llm-run` for prompt execution and structured classifier output
- `llm-template-render` for response-template rendering
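A minimal sketch of consuming a CLI's JSON output on the plugin side might look like this. The `{"tier": ...}` payload shape and the `--json` flag are assumptions for illustration; the actual `llm-run` output schema and arguments are not documented here.

```typescript
import { execFileSync } from "node:child_process";

// Assumed payload shape; the real llm-run JSON schema may differ.
interface ClassifierResult {
  tier: string;
}

// Validate and parse classifier stdout into a typed result,
// failing loudly on malformed output rather than routing blindly.
function parseClassifierOutput(stdout: string): ClassifierResult {
  const parsed = JSON.parse(stdout);
  if (typeof parsed?.tier !== "string") {
    throw new Error("classifier output missing 'tier' field");
  }
  return { tier: parsed.tier };
}

// Hypothetical wrapper: shells into llm-run and parses its stdout.
// The "--json" flag is an assumption, not a documented llm-run option.
function runClassifier(prompt: string): ClassifierResult {
  const stdout = execFileSync("llm-run", ["--json"], {
    input: prompt,
    encoding: "utf8",
  });
  return parseClassifierOutput(stdout);
}
```

Keeping parsing separate from process spawning makes the JSON contract testable without the CLI on `PATH`.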
That means the local OpenCode environment must be synced first:

```sh
cd /home/dzack/ai/opencode
uv sync --dev
```

For ad hoc smoke tests outside the project environment, the canonical upstream CLIs are also available directly from GitHub via `uvx --from`:

```sh
uvx --from git+https://github.com/dzackgarza/llm-runner.git llm-run --help
uvx --from git+https://github.com/dzackgarza/llm-templating-engine.git llm-template-render --help
```

## Checks

Run checks with `just`:

```sh
just typecheck
just test
```