@conclave-ai/observability-langfuse
v0.6.0
Conclave AI observability sink — forwards efficiency-gate metrics to Langfuse (self-hosted per decision #13).
# @conclave-ai/observability-langfuse
Langfuse sink for Conclave AI's efficiency-gate metrics. Forwards each LLM call's cost, token counts, latency, and cache-hit status to Langfuse for trace inspection.

Self-hosted Langfuse is the intended deployment target per decision #13. Langfuse Cloud works identically; pass the cloud base URL instead.
## Install

```bash
pnpm add @conclave-ai/observability-langfuse @conclave-ai/core
```

## Usage
```ts
import { EfficiencyGate, MetricsRecorder } from "@conclave-ai/core";
import { LangfuseMetricsSink } from "@conclave-ai/observability-langfuse";

const sink = new LangfuseMetricsSink({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  baseUrl: process.env.LANGFUSE_BASEURL, // self-hosted URL (optional for cloud)
});

const gate = new EfficiencyGate({
  metrics: new MetricsRecorder({ sink }),
});
```

Call `sink.shutdown()` at process exit to flush pending events.
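A process can reach exit through several paths (SIGINT, SIGTERM, a normal `beforeExit`), and flushing twice would double-send events. A minimal sketch of a once-only flush wrapper — `MetricsSink` and `makeFlushOnce` are illustrative names, not the package's shipped API:

```typescript
// Hypothetical sketch, not the package's shipped API: wrap a sink's
// shutdown() so pending Langfuse events flush exactly once, no matter
// how many exit paths trigger it. `MetricsSink` is a stand-in interface.
interface MetricsSink {
  shutdown(): Promise<void>;
}

function makeFlushOnce(sink: MetricsSink): () => Promise<void> {
  let flushed = false;
  return async () => {
    if (flushed) return; // later exit paths become no-ops
    flushed = true;
    await sink.shutdown(); // forwards buffered events to Langfuse
  };
}
```

You might register the returned function with `process.once("SIGINT", ...)`, `process.once("SIGTERM", ...)`, and `process.once("beforeExit", ...)` so every exit path flushes without double-sending.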
## Env variables
| Variable | Required | Purpose |
|---|---|---|
| LANGFUSE_PUBLIC_KEY | ✓ | Public key from your Langfuse project |
| LANGFUSE_SECRET_KEY | ✓ | Secret key from your Langfuse project |
| LANGFUSE_BASEURL | optional | Self-hosted URL (defaults to Langfuse cloud) |
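For a self-hosted deployment, the variables above might be exported like this (the key and URL values are placeholders, not real credentials):

```shell
# Placeholder values for illustration; real keys come from your
# Langfuse project settings.
export LANGFUSE_PUBLIC_KEY="pk-lf-example"
export LANGFUSE_SECRET_KEY="sk-lf-example"
# Only needed for self-hosted; omit to use Langfuse cloud.
export LANGFUSE_BASEURL="https://langfuse.internal.example.com"
```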
