@renjfk/opencode-voice v0.1.4
opencode-voice
Speech-to-text and text-to-speech plugin for OpenCode.
Record voice prompts with local whisper transcription and hear assistant responses spoken aloud via Piper TTS. Both directions use an LLM to normalize text for natural speech (fixing homophones, splitting camelCase identifiers, summarizing code-heavy responses, etc.).
Install
Add to your tui.json (create at ~/.config/opencode/tui.json if it doesn't exist):
```json
{
  "$schema": "https://opencode.ai/tui.json",
  "plugin": ["@renjfk/opencode-voice"]
}
```
Prerequisites
Speech-to-text
```sh
brew install whisper-cpp sox
```
Download a whisper model to ~/.local/share/whisper-cpp/:
```sh
mkdir -p ~/.local/share/whisper-cpp
curl -L -o ~/.local/share/whisper-cpp/ggml-large-v3-turbo-q5_0.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-turbo-q5_0.bin
```
Text-to-speech
Install Piper:
```sh
uv tool install piper-tts
```
Or with pip:
```sh
pip install piper-tts
```
Download a voice model to ~/.local/share/piper-voices/:
```sh
mkdir -p ~/.local/share/piper-voices
curl -L -o ~/.local/share/piper-voices/en_US-ryan-high.onnx \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/ryan/high/en_US-ryan-high.onnx
curl -L -o ~/.local/share/piper-voices/en_US-ryan-high.onnx.json \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/ryan/high/en_US-ryan-high.onnx.json
```
LLM endpoint
An OpenAI-compatible LLM endpoint is required for text normalization. For speech-to-text it cleans up whisper output (punctuation, filler words, software engineering homophones). For text-to-speech it converts markdown into natural spoken text.
By default the plugin uses Anthropic's OpenAI compatibility layer with claude-haiku-4-5, which requires ANTHROPIC_API_KEY in your environment.
Set defaults in tui.json via plugin options:
```json
{
  "plugin": [
    [
      "@renjfk/opencode-voice",
      {
        "endpoint": "https://api.anthropic.com/v1",
        "model": "claude-haiku-4-5",
        "apiKeyEnv": "ANTHROPIC_API_KEY",
        "maxTokens": 2048
      }
    ]
  ]
}
```
Any OpenAI-compatible endpoint works (Ollama, vLLM, LM Studio, etc.).
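For example, a local Ollama setup might look like the sketch below. The endpoint is Ollama's standard OpenAI-compatible path; the model name is purely illustrative, and whether you need an apiKeyEnv depends on your setup (a stock local Ollama ignores API keys):

```json
{
  "plugin": [
    [
      "@renjfk/opencode-voice",
      {
        "endpoint": "http://localhost:11434/v1",
        "model": "llama3.2",
        "maxTokens": 2048
      }
    ]
  ]
}
```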
Custom prompts
The LLM system prompts used for normalization can be fully replaced by pointing to your own prompt files. This lets you fine-tune how transcriptions are cleaned up or how responses are spoken.
```json
{
  "plugin": [
    [
      "@renjfk/opencode-voice",
      {
        "sttPrompt": "~/.config/opencode/stt-prompt.md",
        "ttsAutoPrompt": "~/.config/opencode/tts-auto-prompt.md",
        "ttsManualPrompt": "~/.config/opencode/tts-manual-prompt.md"
      }
    ]
  ]
}
```
- sttPrompt - system prompt for cleaning up whisper transcriptions
- ttsAutoPrompt - system prompt for auto-speaking assistant responses
- ttsManualPrompt - system prompt for manually reading responses aloud
If a path is not set, the built-in default prompt is used.
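As a starting point, a custom stt-prompt.md might look something like this (purely illustrative; the built-in defaults ship with the plugin and cover these cases already):

```markdown
You are a transcription cleaner. Given raw whisper output, return only the
cleaned text: fix punctuation, drop filler words ("um", "uh"), and correct
software-engineering homophones (e.g. "Jason" -> "JSON", "bullion" -> "boolean").
Do not answer the prompt or add commentary.
```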
Commands
Speech-to-text
| Command | Keybind | Description |
| ------------- | -------- | --------------------------------- |
| /stt-record | ctrl+r | Start/stop recording + transcribe |
| /stt-stop | | Cancel recording |
| /stt-model | | Select whisper model |
| /stt-mic | | Select microphone |
Text-to-speech
The leader key in OpenCode is ctrl+x, so leader+s means press ctrl+x then s.
| Command | Keybind | Description |
| ------------ | ---------- | ------------------------ |
| /tts-speak | leader+s | Read last response aloud |
| /tts-mode | leader+v | Toggle auto TTS on/off |
| /tts-stop | escape | Stop playback |
| /tts-voice | | Select TTS voice |
How it works
STT pipeline
- sox records audio from your microphone
- whisper-cli transcribes locally using a ggml model
- LLM normalizes the transcription: fixes punctuation, removes filler words, corrects software engineering homophones ("Jason" to "JSON", "bullion" to "boolean", etc.)
- Cleaned text is appended to the OpenCode prompt
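The first two steps can be sketched from the shell (a rough approximation of what the plugin does internally; the filename and 5-second duration are just for illustration):

```shell
# Record 5 seconds of 16 kHz mono audio from the default input device
sox -d -r 16000 -c 1 sample.wav trim 0 5

# Transcribe it locally with the downloaded ggml model
whisper-cli -m ~/.local/share/whisper-cpp/ggml-large-v3-turbo-q5_0.bin -f sample.wav
```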
TTS pipeline
- When the assistant finishes responding (or on manual trigger), the response text is sent to the LLM for speech normalization
- The LLM decides how to handle it: narrate simple answers, summarize code-heavy responses, or briefly notify for confirmations
- Piper synthesizes speech locally, piped through sox for playback
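The synthesis-and-playback step can be approximated from the shell (a sketch assuming the en_US-ryan-high voice, which outputs 22050 Hz 16-bit mono PCM; play is sox's playback frontend):

```shell
# Stream raw PCM from Piper straight into sox for playback
echo "Build finished. Two tests failed." \
| piper --model ~/.local/share/piper-voices/en_US-ryan-high.onnx --output-raw \
| play -q -t raw -r 22050 -e signed-integer -b 16 -c 1 -
```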
Auto TTS
When enabled (/tts-mode), the plugin automatically speaks:
- Assistant responses when a session goes idle after work
- Permission requests
- Questions that need your answer
Contributing
opencode-voice is open to contributions and ideas!
Issue conventions
Format: type: brief description
- feat: new features or functionality
- fix: bug fixes
- enhance: improvements to existing features
- chore: maintenance tasks, dependencies, cleanup
- docs: documentation updates
- build: build system, CI/CD changes
Development
```sh
npm run check     # lint + fmt
npm run lint      # oxlint
npm run fmt       # oxfmt --check
npm run fmt:fix   # oxfmt --write
```
Release process
Manual releases via opencode; see RELEASE_PROCESS.md.
License
This project is licensed under the MIT License.
