# opencode-fastmode

v0.1.0

Fast mode toggle for OpenCode GPT-5.4 requests.
Workaround plugin for GPT-5.4 fast mode in OpenCode.
This package avoids slash-command hacks. It uses:
- an OpenCode plugin that applies `serviceTier: "priority"` in `chat.params`
- a small CLI that updates the persisted fast mode state
It is not first-class OpenCode fast mode support.
It does not add `/fast`, prompt status UI, or model-level `controls` metadata inside OpenCode itself.
It only applies the request option for supported model calls.
Because the toggle happens outside the chat flow, it does not require a model reply and does not add transcript noise.
## Quick start

### Local development

1. Install the CLI from this repo:

   ```sh
   npm install -g /absolute/path/to/opencode-fastmode
   ```

2. Load the plugin from a global OpenCode plugin shim. Save the following as `~/.config/opencode/plugins/fastmode.js`:

   ```js
   export { FastmodePlugin, default } from "/absolute/path/to/opencode-fastmode/index.js"
   ```

3. Restart OpenCode.

4. Toggle and verify:

   ```sh
   oc-fast on
   oc-fast status
   ```

### After publishing to npm
1. Install:

   ```sh
   npm install -g opencode-fastmode
   ```

2. Add the plugin to `~/.config/opencode/opencode.jsonc`:

   ```jsonc
   {
     "$schema": "https://opencode.ai/config.json",
     "plugin": ["opencode-fastmode"]
   }
   ```

3. Restart OpenCode.
## What it supports

- `openai/gpt-5.4`
- all OpenCode agents that use `openai/gpt-5.4`
- persisted state in `~/.config/opencode/fastmode.json`
## What it does not support

- `/fast` inside OpenCode
- prompt status line indicators
- OpenCode model `controls` metadata
- upstream migration behavior or compatibility aliases
## State file

`~/.config/opencode/fastmode.json` is the shared state between the CLI and the OpenCode plugin.

- `oc-fast on|off|toggle` writes to this file
- the plugin reads this file on every `chat.params` call
- if the file is missing, fast mode defaults to OFF
Example:

```json
{
  "models": {
    "openai/gpt-5.4": {
      "enabled": true
    }
  }
}
```

The state file is still needed in the current design: it is what makes the toggle persistent without requiring a model message, a slash command, or an OpenCode restart for every change. If you delete it, the package simply recreates default state on the next CLI write.
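To illustrate the persistence model, here is a minimal sketch of what an `oc-fast toggle`-style update could do. This is not the package's actual CLI code; only the state file schema above comes from this README, and the demo writes to a temp file rather than `~/.config/opencode/fastmode.json`:

```javascript
// Hypothetical sketch of toggling the persisted state file.
// Only the JSON schema ({ models: { "<model>": { enabled } } }) is from the README.
import { readFileSync, writeFileSync, rmSync, mkdirSync, existsSync } from "node:fs";
import { dirname, join } from "node:path";
import { tmpdir } from "node:os";

const MODEL = "openai/gpt-5.4";

function loadState(statePath) {
  // A missing file means fast mode defaults to OFF.
  if (!existsSync(statePath)) return { models: {} };
  return JSON.parse(readFileSync(statePath, "utf8"));
}

function toggle(statePath) {
  const state = loadState(statePath);
  const current = state.models?.[MODEL]?.enabled === true;
  state.models = { ...state.models, [MODEL]: { enabled: !current } };
  mkdirSync(dirname(statePath), { recursive: true });
  writeFileSync(statePath, JSON.stringify(state, null, 2));
  return !current;
}

// Demo against a temp file, starting from "missing state" (i.e. OFF).
const demoPath = join(tmpdir(), "fastmode-demo.json");
rmSync(demoPath, { force: true });
console.log(toggle(demoPath)); // → true (OFF -> ON)
```

Note that recreating default state on a missing file falls out of `loadState` returning an empty `models` object.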
## Install

### 1. Install the package for the CLI

After publishing to npm:

```sh
npm install -g opencode-fastmode
```

For local development:

```sh
npm install -g /absolute/path/to/opencode-fastmode
```

### 2. Load the plugin in OpenCode

After publishing to npm, add it to `~/.config/opencode/opencode.jsonc`:

```jsonc
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-fastmode"]
}
```

For local development before publishing, you can load the repo directly from a global plugin file:

```js
export { FastmodePlugin, default } from "/absolute/path/to/opencode-fastmode/index.js"
```

Place that file in `~/.config/opencode/plugins/` and restart OpenCode.
## CLI usage

```sh
oc-fast on
oc-fast off
oc-fast toggle
oc-fast status
oc-fast path
```

Example output:

```
Fast mode enabled for openai/gpt-5.4
```

Use `oc-fast status` for current-state feedback.

## How it works
When fast mode is enabled, the plugin checks each model call in `chat.params`. If the current model is `openai/gpt-5.4`, it sets:

```json
{
  "serviceTier": "priority"
}
```

No reasoning or verbosity settings are modified. This mirrors the manual `options.serviceTier = "priority"` workaround people have discussed for OpenCode config overrides.
## Verify it is active

- run `oc-fast status`
- make sure your active model is `openai/gpt-5.4`
- restart OpenCode after changing plugin installation or config
## Development

Run tests:

```sh
npm test
```

## Publish

1. Create a GitHub repo
2. Push this project
3. Publish to npm:

   ```sh
   npm publish
   ```

Then switch your OpenCode config to the npm package name and remove any local plugin shim.
