@canva/opencode-plugin-llmproxy

v0.20260407.0

OpenCode plugin for Canva LLMProxy (AWS Bedrock, OpenAI, Google)

opencode-plugin-llmproxy

OpenCode plugin for accessing LLMs via Canva's internal LLMProxy. Supports:

  • AWS Bedrock (Anthropic Claude models)
  • Google AI (Gemini models)
  • OpenAI (GPT models)

Installation

1. Add the Canva npm registry

Add to ~/.npmrc:

@canva:registry=https://depot.canva-internal.com/v1/npm/public/

2. Configure opencode.json

Add to ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "share": "disabled",
  "enabled_providers": [
    "amazon-bedrock",
    "google",
    "openai"
  ],
  "provider": {
    "amazon-bedrock": {
      "models": {
        "anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        },
        "us.anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        },
        "global.anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        },
        "eu.anthropic.claude-opus-4-6-v1": {
          "limit": { "context": 200000, "output": 128000 }
        }
      }
    }
  },
  "plugin": [
    "@canva/opencode-plugin-llmproxy"
  ]
}

Note: The Claude Opus 4.6 context-window overrides work around an upstream models.dev bug that incorrectly reports a 1M-token context window instead of the Bedrock limit of 200K.

The plugin is installed automatically by opencode using Bun on first startup. No cloning or building required. The package is cached at ~/.cache/opencode/node_modules/.

3. Configure auth.json

On VPN (non-Coder), add to ~/.local/share/opencode/auth.json:

{
  "amazon-bedrock": { "type": "api", "key": "llmproxy" },
  "google": { "type": "api", "key": "llmproxy" },
  "openai": { "type": "api", "key": "llmproxy" }
}

On Coder devboxes, omit the amazon-bedrock key:

{
  "google": { "type": "api", "key": "llmproxy" },
  "openai": { "type": "api", "key": "llmproxy" }
}

Why: OpenCode only calls the plugin's auth.loader for providers that have an auth.json entry. Without an amazon-bedrock entry on VPN, OpenCode finds no credentials at startup and fails before the plugin can inject the bearer token. The "key": "llmproxy" value is a placeholder — the plugin replaces it with a real token fetched via otter.

On Coder, do NOT add the amazon-bedrock key. The plugin uses AWS IMDS credentials (SigV4) instead of bearer tokens, and having a placeholder key would break that path.
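The per-environment auth.json rules above can be summarized in a small helper. This is an illustrative sketch only (the function and its shape mirror the JSON examples, not any real plugin API):

```typescript
// Illustrative sketch: build the auth.json contents for each environment.
// The "llmproxy" values are placeholders the plugin swaps for real
// credentials; on Coder, amazon-bedrock is deliberately omitted so the
// plugin can use IMDS/SigV4 instead of a bearer token.
type AuthEntry = { type: "api"; key: string };

function buildAuthJson(isCoder: boolean): Record<string, AuthEntry> {
  const placeholder: AuthEntry = { type: "api", key: "llmproxy" };
  const auth: Record<string, AuthEntry> = {
    google: placeholder,
    openai: placeholder,
  };
  if (!isCoder) {
    // On VPN, amazon-bedrock needs an entry so OpenCode invokes the
    // plugin's auth.loader at startup.
    auth["amazon-bedrock"] = placeholder;
  }
  return auth;
}
```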

Available Models

Bedrock (Claude)

Use the full Bedrock model ID:

  • amazon-bedrock/anthropic.claude-haiku-4-5-20251001-v1:0
  • amazon-bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0
  • amazon-bedrock/anthropic.claude-opus-4-5-20251101-v1:0

Cross-region inference is supported with the global. prefix for Claude 4.5+ models:

  • amazon-bedrock/global.anthropic.claude-sonnet-4-5-20250929-v1:0

Bedrock (Third-Party — US-only)

These models are available in US regions only and do not use inference profile prefixes:

  • amazon-bedrock/moonshotai.kimi-k2.5 — Kimi K2.5 (Moonshot AI)
  • amazon-bedrock/qwen.qwen3-coder-next — Qwen3 Coder Next (Qwen)
  • amazon-bedrock/nvidia.nemotron-nano-3-30b — Nemotron Nano 3 30B (NVIDIA)

Note: GLM 4.7 models (zai.glm-4.7, zai.glm-4.7-flash) are available on Bedrock but have a known tool-calling bug where the Converse API rejects tool result messages. They work for basic text generation but are not usable for agentic workflows.

Google (Gemini)

  • google/gemini-2.5-flash
  • google/gemini-2.5-pro
  • google/gemini-3-flash-preview

OpenAI

  • openai/gpt-4o
  • openai/gpt-5.3-codex

How It Works

Bedrock

  1. Fetches bearer tokens via the VPN endpoint (fast path, 2s timeout), falling back to the otter CLI
  2. Sets AWS_BEARER_TOKEN_BEDROCK and AWS_REGION environment variables
  3. Refreshes tokens automatically before each LLM call
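The fetch order in the steps above (fast VPN path with a timeout, then the otter CLI) can be sketched as a generic fallback chain. The function names are illustrative, not the plugin's real internals:

```typescript
// Hypothetical sketch of the Bedrock token-fetch order: race the fast
// path against a timeout, then fall back to the slower CLI path.
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // don't leak the timer
  }
}

async function fetchBedrockToken(
  vpnFetch: () => Promise<string>,   // e.g. hits the VPN token endpoint
  otterFetch: () => Promise<string>, // e.g. shells out to `otter bedrock-bearer-token`
): Promise<string> {
  try {
    return await withTimeout(vpnFetch(), 2000); // 2s budget for the fast path
  } catch {
    return otterFetch(); // fallback when VPN is slow or unreachable
  }
}
```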

OpenAI

  • Coder (devbox): Fetches the real API key from AWS Secrets Manager (/devbox/forge/openai-api-key) and contacts OpenAI directly. Falls back to mTLS LLMProxy if the secret is unavailable.
  • VPN: Routes through the VPN LLMProxy endpoint with a dummy key placeholder.

Google

  1. Detects Coder environment and uses mTLS client certificates
  2. Routes requests through LLMProxy endpoints
  3. Authentication is handled by mTLS (no API keys needed)

Environment Detection

The plugin automatically detects the environment:

  • Coder (devbox):
    • Bedrock: Uses AWS IMDS credentials (when CODER=true and CODER_AGENT_AUTH=aws-instance-identity)
    • Google/OpenAI: Uses mTLS with client certificates from ~/.pki/canva/
  • VPN: Uses VPN endpoints (requires network access)

Required Settings

  • share: "disabled" - Prevents creating publicly accessible session URLs
  • enabled_providers - Restricts to vetted providers only

Prerequisites

  • Coder environment OR VPN connection to Canva network
  • For Bedrock on VPN: otter CLI (otter bedrock-bearer-token for authentication)
  • For Bedrock on Coder: AWS instance identity auth (automatic via IMDS)

Troubleshooting

Token errors (Bedrock)

otter bedrock-bearer-token --force-refresh

Check plugin logs

tail -f ~/.local/share/opencode/log/*.log | grep canva-llmproxy

ProviderModelNotFoundError (all providers)

If you see ProviderModelNotFoundError with suggestions: [] for all providers, the plugin failed to load entirely. Check:

  1. Plugin installed? Verify the package is in the opencode cache:

    ls ~/.cache/opencode/node_modules/@canva/opencode-plugin-llmproxy/dist/index.bundle.js

    If missing, force a reinstall by removing the cache and restarting opencode:

    rm -rf ~/.cache/opencode/node_modules/@canva
  2. auth.json correct? OpenCode needs auth entries to call the plugin's auth.loader:

    cat ~/.local/share/opencode/auth.json

    On VPN it should contain:

    {"amazon-bedrock":{"type":"api","key":"llmproxy"},"google":{"type":"api","key":"llmproxy"},"openai":{"type":"api","key":"llmproxy"}}

    On Coder it should contain:

    {"google":{"type":"api","key":"llmproxy"},"openai":{"type":"api","key":"llmproxy"}}

  3. Registry configured? Verify ~/.npmrc contains the Canva depot registry line:

    @canva:registry=https://depot.canva-internal.com/v1/npm/public/

Google model not found

Make sure you're using the google/ prefix (not google-generative-ai/):

  • Correct: google/gemini-3-flash-preview
  • Wrong: google-generative-ai/gemini-3-flash-preview
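The prefix rule above is mechanical enough to express as a tiny helper; this is purely an illustration of the rule, not something the plugin provides:

```typescript
// Sketch: rewrite the wrong provider prefix to the one this plugin expects.
function normalizeGeminiModelId(id: string): string {
  const wrong = "google-generative-ai/";
  return id.startsWith(wrong) ? "google/" + id.slice(wrong.length) : id;
}
```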

Development

npm install
npm run build
npm test
npm run typecheck

Testing

# Unit tests
npm test

# Functional tests — opencode run auto-attaches to an existing server,
# so use opencode serve + --attach to test the dev build in isolation:
cat > /tmp/opencode-dev-test.json << 'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["/path/to/opencode-plugin-llmproxy/dist"]
}
EOF

OPENCODE_CONFIG=/tmp/opencode-dev-test.json opencode serve --port 14001 --print-logs &
opencode run --attach http://127.0.0.1:14001 -m "amazon-bedrock/global.anthropic.claude-opus-4-6-v1" "say hello"
opencode run --attach http://127.0.0.1:14001 -m "openai/gpt-4o-mini" "say hello"
opencode run --attach http://127.0.0.1:14001 -m "google/gemini-2.0-flash" "say hello"
kill %1