# @enatlas/sidecar

**EnAtlas Sidecar** (v0.1.3) — a local AI spend monitoring & optimization proxy.
Drop it in front of your OpenAI-compatible provider. It monitors every request, caches duplicates, routes between models, and reports telemetry to your EnAtlas dashboard — all without touching your prompts or API keys.
## Quick Start
```shell
# 1. Set up your config interactively
npx @enatlas/sidecar init

# 2. Start the proxy
npx @enatlas/sidecar start
```

The `init` wizard will ask for your AI provider URL, API key, and your EnAtlas workspace credentials (found in your dashboard).
## How It Works
The sidecar runs on `http://localhost:4100` by default. Point your application's `baseURL` at the sidecar instead of your provider:
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:4100/v1', // ← EnAtlas sidecar
  apiKey: process.env.OPENAI_API_KEY,
});

// Use as normal — the sidecar proxies to your real provider
const res = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

## Features
- Zero-code monitoring — Latency, tokens, cost, errors tracked automatically
- Exact-match caching — Identical prompts served from memory in <1ms
- Request coalescing — Concurrent duplicate requests collapsed into one upstream call
- Model routing — Downgrade models based on dashboard policies
- Resilience routing — Automatic failover to backup provider on 429/5xx
- Async telemetry — Never blocks your API calls
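To make request coalescing concrete, here is a minimal in-memory sketch in TypeScript. It assumes concurrent duplicates can be keyed by (for example) the serialized request body; the names `inflight` and `coalesce` are illustrative, not the sidecar's actual internals.

```typescript
// Minimal request-coalescing sketch (illustrative, not the sidecar's real
// implementation): concurrent calls with the same key share one upstream promise.
const inflight = new Map<string, Promise<string>>();

async function coalesce(key: string, upstream: () => Promise<string>): Promise<string> {
  const pending = inflight.get(key);
  if (pending) return pending; // duplicate request: piggyback on the in-flight call
  const p = upstream().finally(() => inflight.delete(key)); // evict once settled
  inflight.set(key, p);
  return p;
}
```

Duplicates issued while the first call is still in flight resolve from the same upstream promise; once it settles, the key is evicted so later identical requests go upstream again (or hit the exact-match cache).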
## Configuration

The `init` command creates a `.enatlas.env` file. You can also set environment variables directly:
| Variable | Description | Default |
|----------|-------------|---------|
| `UPSTREAM_BASE_URL` | Your AI provider URL | — |
| `UPSTREAM_API_KEY` | Your provider API key (stays local) | — |
| `CLOUD_INGEST_URL` | EnAtlas telemetry endpoint | — |
| `WORKSPACE_ID` | Your workspace ID | — |
| `WORKSPACE_INGEST_KEY` | Your ingest key | — |
| `INTEGRATION_NAME` | Provider name | `openai` |
| `PORT` | Sidecar port | `4100` |
| `HOST` | Sidecar host | `0.0.0.0` |
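A filled-in `.enatlas.env` might look like the following; every value, including the ingest URL, is a placeholder for illustration only — use the values from your own dashboard.

```env
# Example .enatlas.env — all values below are placeholders
UPSTREAM_BASE_URL=https://api.openai.com/v1
UPSTREAM_API_KEY=sk-...
CLOUD_INGEST_URL=https://ingest.example.com
WORKSPACE_ID=ws_xxxxxxxx
WORKSPACE_INGEST_KEY=wk_xxxxxxxx
INTEGRATION_NAME=openai
PORT=4100
HOST=0.0.0.0
```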
## Trust Model
- Your API keys never leave your machine.
- Prompts and completions are never sent to EnAtlas cloud.
- Only metadata (model, tokens, latency, cost) is sent as telemetry.
- Telemetry is async and never blocks your API calls.
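To make the metadata-only claim concrete, here is a hedged TypeScript sketch of what a telemetry record could contain. The field names and the `toTelemetry` helper are assumptions for illustration, not the sidecar's actual wire format — the point is that no prompt or completion text appears in the record.

```typescript
// Illustrative metadata-only telemetry record (field names are assumptions,
// not the sidecar's wire format). Note: no prompt or completion content.
interface TelemetryRecord {
  model: string;
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  costUsd: number;
  status: number;
}

// Hypothetical helper: derive a record from a completion response,
// keeping only metadata and discarding message content.
function toTelemetry(
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number },
  latencyMs: number,
  costUsd: number,
  status: number,
): TelemetryRecord {
  return {
    model,
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    latencyMs,
    costUsd,
    status,
  };
}
```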
## License
MIT
