# OpenCode OTEL Plugin for AI Enabler

Sends usage telemetry from OpenCode to the AI Enabler service.

Requires OpenCode version 1.2.20+.
## Installation

### Option 1: NPM package (recommended)

Add to your `~/.config/opencode/opencode.json`:

```json
{
  "plugin": ["@kimchitest/opencode-otel-plugin@latest"]
}
```

To pin to a specific version:

```json
{
  "plugin": ["@kimchitest/[email protected]"]
}
```

Set the environment variables (see the Configuration section below), then restart OpenCode.
### Option 2: Local plugin

1. Copy `plugin.ts` to your OpenCode plugins directory:

   ```sh
   mkdir -p ~/.config/opencode/plugins
   cp plugin.ts ~/.config/opencode/plugins/otel.ts
   ```

2. Set the environment variables (see the Configuration section below).
3. Restart OpenCode.
### Option 3: Project-level plugin

1. Create the plugins directory in your project:

   ```sh
   mkdir -p .opencode/plugins
   ```

2. Copy `plugin.ts` to `.opencode/plugins/otel.ts`.
3. Set the environment variables (same as above).
## Configuration

### Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `OPENCODE_ENABLE_TELEMETRY` | Yes | Set to `1` to enable telemetry |
| `OPENCODE_OTLP_ENDPOINT` | Yes | AI Enabler logs ingest endpoint URL |
| `OPENCODE_OTLP_HEADERS` | Yes | `Authorization` header with your AI Enabler API key |
### Example Environment Variables

Add these to your shell config (`~/.zshrc`, `~/.bashrc`, etc.):

```sh
# Enable the plugin
export OPENCODE_ENABLE_TELEMETRY=1

# AI Enabler endpoint for log ingestion
export OPENCODE_OTLP_ENDPOINT=https://api.cast.ai/ai-optimizer/v1beta/logs:ingest

# Authorization header with your AI Enabler API key
export OPENCODE_OTLP_HEADERS="Authorization=Bearer YOUR_API_KEY"
```

After adding these, restart your shell or run `source ~/.zshrc`.
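Once the variables are in place, a quick check like the one below lists any that are still missing. This is an illustrative sketch (the `check_telemetry_env` helper is not part of the plugin), but it only inspects the three variables documented above:

```bash
# Illustrative sanity check: report which required telemetry variables are unset.
check_telemetry_env() {
  missing=""
  for v in OPENCODE_ENABLE_TELEMETRY OPENCODE_OTLP_ENDPOINT OPENCODE_OTLP_HEADERS; do
    if [ -z "$(printenv "$v")" ]; then
      missing="$missing $v"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all telemetry variables are set"
}

check_telemetry_env || true
```

If any variable is reported missing, re-check your shell config and remember to restart the shell (or `source` the config file) before launching OpenCode.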
### Provider Configuration

The plugin reads provider information from your OpenCode config (`~/.config/opencode/opencode.json`). Ensure your provider key matches one of the valid values below.

#### Valid Provider Values
| Provider Key | Description |
|--------------|-------------|
| openai | OpenAI |
| anthropic | Anthropic (Claude) |
| azure | Azure OpenAI |
| azure_ai | Azure AI |
| gemini | Google Gemini |
| vertex_ai-language-models | Vertex AI Gemini |
| vertex_ai-anthropic_models | Vertex AI Anthropic |
| groq | Groq |
| mistral | Mistral |
| codestral | Codestral |
| cohere_chat | Cohere |
| anyscale | Anyscale |
| openrouter | OpenRouter |
| databricks | Databricks |
| perplexity | Perplexity |
| hosted_vllm | Hosted vLLM |
| bedrock | AWS Bedrock |
| ai-enabler | AI Enabler (serverless models) |
**Important:** If your provider key does not match one of the valid values above, the request will be rejected and you will see an error toast notification in OpenCode.
#### Example OpenCode Config

```json
{
  "model": "ai-enabler/glm-5-fp8",
  "provider": {
    "ai-enabler": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "AI Enabler",
      "options": {
        "baseURL": "https://llm.cast.ai/openai/v1",
        "apiKey": "your-api-key"
      },
      "models": {
        "glm-5-fp8": {
          "name": "glm-5-fp8",
          "tool_call": true
        }
      }
    }
  }
}
```

## Data Sent

The plugin sends `api_request` events for each completed assistant message with token usage.
### API Request Attributes

| Attribute | Description |
|-----------|-------------|
| `model` | Model identifier |
| `provider` | Provider identifier |
| `input_tokens` | Number of input tokens |
| `output_tokens` | Number of output tokens |
| `cost_usd` | Cost in USD |
| `duration_ms` | Request duration in milliseconds (always `0`; OpenCode doesn't expose this data) |
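For orientation, a single event carrying the attributes above might look roughly like the sketch below. This is illustrative only: the values are made up, and the actual wire format (OTLP log-record framing, exact field names) is determined by the plugin and the ingest endpoint.

```json
{
  "event": "api_request",
  "attributes": {
    "model": "glm-5-fp8",
    "provider": "ai-enabler",
    "input_tokens": 1280,
    "output_tokens": 342,
    "cost_usd": 0.0021,
    "duration_ms": 0
  }
}
```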
## Troubleshooting

The plugin shows error notifications via OpenCode toasts when issues occur.

### Common Errors

- **"Invalid provider"**: Your provider key is not recognized. Update your `opencode.json` to use a valid provider key from the list above.
- **"Telemetry error"**: Other errors (network issues, auth failures, etc.)
### Debugging Steps
- Verify environment variables are set correctly
- Check that your AI Enabler API key is valid
- Ensure the provider key in your OpenCode config matches a valid value
- Verify the endpoint URL is correct
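To check the endpoint and API key independently of OpenCode, you can probe the ingest URL directly. The snippet below is an illustrative sketch: it converts the `Key=Value` form used by `OPENCODE_OTLP_HEADERS` into the `Key: Value` form curl expects, and the actual request body the endpoint accepts is not shown here.

```bash
# Example value; substitute your real key.
OPENCODE_OTLP_HEADERS="Authorization=Bearer YOUR_API_KEY"

# Replace the first "=" with ": " to get curl's header syntax (bash substitution).
header="${OPENCODE_OTLP_HEADERS/=/: }"
echo "$header"

# Then probe the endpoint (commented out here because it sends a real request):
# curl -sS -X POST "$OPENCODE_OTLP_ENDPOINT" \
#   -H "$header" -H "Content-Type: application/json" -d '{}'
```

A `401`/`403` response points at the API key; a connection error or `404` points at the endpoint URL.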
