@canva/opencode-plugin-llmproxy v0.20260407.0
# opencode-plugin-llmproxy
OpenCode plugin for accessing LLMs via Canva's internal LLMProxy. Supports:

- AWS Bedrock (Anthropic Claude models)
- Google AI (Gemini models)
- OpenAI (GPT models)
## Installation
### 1. Add the Canva npm registry
Add to `~/.npmrc`:

```
@canva:registry=https://depot.canva-internal.com/v1/npm/public/
```

### 2. Configure opencode.json
Add to `~/.config/opencode/opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "share": "disabled",
  "enabled_providers": [
    "amazon-bedrock",
    "google",
    "openai"
  ],
  "provider": {
    "amazon-bedrock": {
      "models": {
        "anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        },
        "us.anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        },
        "global.anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        },
        "eu.anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        }
      }
    }
  },
  "plugin": [
    "@canva/opencode-plugin-llmproxy"
  ]
}
```

Note: The Claude Opus 4.6 context window override fixes an upstream models.dev bug that incorrectly reports 1M tokens instead of the Bedrock limit of 200K.
The plugin is installed automatically by opencode using Bun on first startup. No cloning or building is required. The package is cached at `~/.cache/opencode/node_modules/`.
### 3. Configure auth.json
On VPN (non-Coder), add to `~/.local/share/opencode/auth.json`:

```json
{
  "amazon-bedrock": { "type": "api", "key": "llmproxy" },
  "google": { "type": "api", "key": "llmproxy" },
  "openai": { "type": "api", "key": "llmproxy" }
}
```

On Coder devboxes, omit the `amazon-bedrock` key:

```json
{
  "google": { "type": "api", "key": "llmproxy" },
  "openai": { "type": "api", "key": "llmproxy" }
}
```

Why: OpenCode only calls the plugin's `auth.loader` for providers that have an auth.json entry. Without an `amazon-bedrock` entry on VPN, OpenCode finds no credentials at startup and fails before the plugin can inject the bearer token. The `"key": "llmproxy"` value is a placeholder; the plugin replaces it with a real token fetched via `otter`.

On Coder, do NOT add the `amazon-bedrock` key. The plugin uses AWS IMDS credentials (SigV4) instead of bearer tokens, and having a placeholder key would break that path.
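Because the right auth.json differs by environment, the two variants can be generated by a small helper. This is an illustrative sketch, not part of the plugin: the `auth_json` function name is ours, and it keys off the same `CODER=true` devbox marker the plugin uses for environment detection.

```shell
# Illustrative helper: print the auth.json contents for the current
# environment. CODER=true marks a Coder devbox; anything else is VPN.
auth_json() {
  if [ "${CODER:-}" = "true" ]; then
    # Coder: omit amazon-bedrock so the plugin can use IMDS/SigV4.
    printf '%s\n' '{"google":{"type":"api","key":"llmproxy"},"openai":{"type":"api","key":"llmproxy"}}'
  else
    # VPN: placeholder entries for all three providers.
    printf '%s\n' '{"amazon-bedrock":{"type":"api","key":"llmproxy"},"google":{"type":"api","key":"llmproxy"},"openai":{"type":"api","key":"llmproxy"}}'
  fi
}

# Usage: auth_json > ~/.local/share/opencode/auth.json
```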
## Available Models
### Bedrock (Claude)
Use the full Bedrock model ID:

- `amazon-bedrock/anthropic.claude-haiku-4-5-20251001-v1:0`
- `amazon-bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0`
- `amazon-bedrock/anthropic.claude-opus-4-5-20251101-v1:0`
Cross-region inference is supported with the `global.` prefix for Claude 4.5+ models:

- `amazon-bedrock/global.anthropic.claude-sonnet-4-5-20250929-v1:0`
### Bedrock (Third-Party, US-only)
These models are available in US regions only and do not use inference profile prefixes:

- `amazon-bedrock/moonshotai.kimi-k2.5`: Kimi K2.5 (Moonshot AI)
- `amazon-bedrock/qwen.qwen3-coder-next`: Qwen3 Coder Next (Qwen)
- `amazon-bedrock/nvidia.nemotron-nano-3-30b`: Nemotron Nano 3 30B (NVIDIA)
Note: GLM 4.7 models (`zai.glm-4.7`, `zai.glm-4.7-flash`) are available on Bedrock but have a known tool-calling bug where the Converse API rejects tool result messages. They work for basic text generation but are not usable for agentic workflows.
### Google (Gemini)
- `google/gemini-2.5-flash`
- `google/gemini-2.5-pro`
- `google/gemini-3-flash-preview`
### OpenAI
- `openai/gpt-4o`
- `openai/gpt-5.3-codex`
## How It Works
### Bedrock
- Fetches bearer tokens via VPN (fast, 2s timeout) or the `otter` CLI (fallback)
- Sets `AWS_BEARER_TOKEN_BEDROCK` and `AWS_REGION` environment variables
- Refreshes tokens automatically before each LLM call
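For debugging, the same flow can be reproduced by hand. The sketch below stubs out the `otter` call so it runs anywhere; on a real VPN host, replace `fetch_token` with `otter bedrock-bearer-token`. The function name and the default region are our assumptions, not plugin code.

```shell
# Sketch of the token flow described above. fetch_token stands in for
# `otter bedrock-bearer-token`; swap in the real call on a VPN host.
fetch_token() {
  # otter bedrock-bearer-token
  echo "dummy-bearer-token"
}

# Export the two variables the plugin sets before each LLM call.
refresh_bedrock_env() {
  AWS_BEARER_TOKEN_BEDROCK="$(fetch_token)"
  AWS_REGION="${AWS_REGION:-us-east-1}"  # assumption: adjust to your region
  export AWS_BEARER_TOKEN_BEDROCK AWS_REGION
}

refresh_bedrock_env
```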
### OpenAI
- Coder (devbox): Fetches the real API key from AWS Secrets Manager (`/devbox/forge/openai-api-key`) and contacts OpenAI directly. Falls back to mTLS LLMProxy if the secret is unavailable.
- VPN: Routes through the VPN LLMProxy endpoint with a dummy key placeholder.

### Google

- Detects Coder environment and uses mTLS client certificates
- Routes requests through LLMProxy endpoints
- Authentication is handled by mTLS (no API keys needed)
## Environment Detection
The plugin automatically detects the environment:

- Coder (devbox):
  - Bedrock: Uses AWS IMDS credentials (when `CODER=true` and `CODER_AGENT_AUTH=aws-instance-identity`)
  - Google/OpenAI: Uses mTLS with client certificates from `~/.pki/canva/`
- VPN: Uses VPN endpoints (requires network access)
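The branch logic above can be sketched as a tiny shell function. This is illustrative only; the real check lives inside the plugin, and `detect_env` is a name we made up.

```shell
# Illustrative version of the detection rule: both Coder variables must
# match for IMDS auth; everything else falls back to VPN endpoints.
detect_env() {
  if [ "${CODER:-}" = "true" ] && [ "${CODER_AGENT_AUTH:-}" = "aws-instance-identity" ]; then
    echo "coder"
  else
    echo "vpn"
  fi
}
```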
## Required Settings
- `"share": "disabled"`: Prevents creating publicly accessible session URLs
- `"enabled_providers"`: Restricts to vetted providers only
## Prerequisites
- Coder environment OR VPN connection to Canva network
- For Bedrock on VPN: the `otter` CLI (`otter bedrock-bearer-token` for authentication)
- For Bedrock on Coder: AWS instance identity auth (automatic via IMDS)
## Troubleshooting
### Token errors (Bedrock)

```sh
otter bedrock-bearer-token --force-refresh
```

### Check plugin logs

```sh
tail -f ~/.local/share/opencode/log/*.log | grep canva-llmproxy
```

### ProviderModelNotFoundError (all providers)
If you see `ProviderModelNotFoundError` with `suggestions: []` for all providers, the plugin failed to load entirely. Check:
1. Plugin installed? Verify the package is in the opencode cache:

   ```sh
   ls ~/.cache/opencode/node_modules/@canva/opencode-plugin-llmproxy/dist/index.bundle.js
   ```

   If missing, force a reinstall by removing the cache and restarting opencode:

   ```sh
   rm -rf ~/.cache/opencode/node_modules/@canva
   ```

2. auth.json correct? OpenCode needs auth entries to call the plugin's `auth.loader`:

   ```sh
   cat ~/.local/share/opencode/auth.json
   ```

   On VPN it should contain:

   ```json
   {"amazon-bedrock":{"type":"api","key":"llmproxy"},"google":{"type":"api","key":"llmproxy"},"openai":{"type":"api","key":"llmproxy"}}
   ```

   On Coder it should contain:

   ```json
   {"google":{"type":"api","key":"llmproxy"},"openai":{"type":"api","key":"llmproxy"}}
   ```

3. Registry configured? Verify `~/.npmrc` contains the Canva depot registry line:

   ```
   @canva:registry=https://depot.canva-internal.com/v1/npm/public/
   ```
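For convenience, the three checks above can be rolled into one function. This is a sketch, not part of the plugin: with no arguments it uses the documented default paths, and the positional arguments exist only so the function can be pointed at other locations.

```shell
# Run the three troubleshooting checks in order; returns nonzero on the
# first failure. Paths default to the locations documented above.
check_llmproxy_setup() {
  bundle="${1:-$HOME/.cache/opencode/node_modules/@canva/opencode-plugin-llmproxy/dist/index.bundle.js}"
  auth="${2:-$HOME/.local/share/opencode/auth.json}"
  npmrc="${3:-$HOME/.npmrc}"
  [ -f "$bundle" ] || { echo "plugin bundle missing"; return 1; }
  grep -q '"llmproxy"' "$auth" || { echo "auth.json has no llmproxy entries"; return 1; }
  grep -q 'depot.canva-internal.com' "$npmrc" || { echo "npm registry not configured"; return 1; }
  echo "all checks passed"
}
```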
### Google model not found
Make sure you're using the `google/` prefix (not `google-generative-ai/`):

- Correct: `google/gemini-3-flash-preview`
- Wrong: `google-generative-ai/gemini-3-flash-preview`
## Development
```sh
npm install
npm run build
npm test
npm run typecheck
```

### Testing
```sh
# Unit tests
npm test

# Functional tests: opencode run auto-attaches to an existing server,
# so use opencode serve + --attach to test the dev build in isolation:
cat > /tmp/opencode-dev-test.json << 'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["/path/to/opencode-plugin-llmproxy/dist"]
}
EOF

OPENCODE_CONFIG=/tmp/opencode-dev-test.json opencode serve --port 14001 --print-logs &
opencode run --attach http://127.0.0.1:14001 -m "amazon-bedrock/global.anthropic.claude-opus-4-6-v1" "say hello"
opencode run --attach http://127.0.0.1:14001 -m "openai/gpt-4o-mini" "say hello"
opencode run --attach http://127.0.0.1:14001 -m "google/gemini-2.0-flash" "say hello"
kill %1
```