# @countly/ai-sdk-mastra

v0.0.2

Countly AI observability adapter for Mastra.
Part of the Countly AI SDK — provider-agnostic LLM observability for every AI stack.
## Install

```sh
npm install @countly/ai-sdk-mastra
```

`@countly/ai-sdk-core` is pulled in automatically.
## Peer dependencies

- `@mastra/core` >= 1.0.0
- `@mastra/observability` >= 0.1.0

## Quick Start

> **Note:** the exporter must be wrapped in `new Observability({...})` and passed via Mastra's `observability:` field. Passing it directly as `exporters:` on Mastra won't work — no spans reach the exporter.
```ts
import { Mastra } from "@mastra/core/mastra";
import { Observability } from "@mastra/observability";
import { CountlyMastraExporter } from "@countly/ai-sdk-mastra";

new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-ai-app",
        exporters: [
          new CountlyMastraExporter({
            appKey: "YOUR_APP_KEY",
            url: "https://your-countly-server.com",
            requestContextDeviceIdKey: "countlyDeviceId",
          }),
        ],
      },
    },
  }),
});
```

In your request handler:
```ts
import { RequestContext } from "@mastra/core/request-context";

app.post("/chat", async (req, res) => {
  const ctx = new RequestContext();
  ctx.set("countlyDeviceId", req.user.id);
  await mastra.getAgent("intent").stream(messages, { requestContext: ctx });
});
```

## How the user ID reaches the event
The bridge is Mastra's runtime, not our SDK. We read a public field Mastra puts on every exported span:
```
Your handler                   Mastra runtime                 @countly/ai-sdk-mastra
─────────────                  ──────────────                 ──────────────────────
ctx = new RequestContext()
ctx.set("countlyDeviceId", id)
                               run scope carries ctx
agent.stream(msg, {                     ↓
  requestContext: ctx          spans created during run
})                             (AGENT_RUN, MODEL_GEN, TOOL_CALL)
                                        ↓
                               ExportedSpan.requestContext = {
                                 countlyDeviceId: id          adapter reads
                               }                              span.requestContext
                                                                       ↓
                                                              event.deviceId = id
                                                              POST /i?device_id=id
```

- Rename the key via `requestContextDeviceIdKey` (e.g. `"myUserId"`)
- Set to `null` to disable — falls back to `getDeviceId()` / `deviceId` / process UUID
- No Mastra version requirement beyond `>= 1.0` — stable v1 observability surface
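For example, to read the device ID from a differently named context key, configure the exporter and the request handler symmetrically. A minimal sketch — the key name `myUserId` is illustrative; any string works as long as both sides agree:

```ts
// Exporter side: tell the adapter which requestContext key holds the user ID.
new CountlyMastraExporter({
  appKey: "YOUR_APP_KEY",
  url: "https://your-countly-server.com",
  requestContextDeviceIdKey: "myUserId", // hypothetical key name
});

// Handler side: set the same key on the RequestContext.
ctx.set("myUserId", req.user.id);
```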
## What's captured

- Per-event tracing via `exportTracingEvent` (`span_started`, `span_updated`, `span_ended`)
- Automatic trace completion when the root span ends (no `parentSpanId`)
- Token usage, cost, and latency aggregated across all spans of a trace
- Tool calls within agent workflows (`function_call` and `mcp_tool_call` types)
- Error reporting for failed traces
- Per-user aggregation across agent runs
## Configuration

All adapters accept the same `CountlyAIConfig` object:
| Field | Default | Description |
|-------|---------|-------------|
| `appKey` | *(required)* | Countly app key |
| `url` | *(required)* | Countly server URL |
| `requestContextDeviceIdKey` | `"countlyDeviceId"` | Key to read from Mastra's `requestContext`. Set to `null` to disable. |
| `observabilityLevel` | `0` | `0` = metrics only, `1` = + tool calls, `2` = + text previews |
| `tags` | `[]` | Labels for cost attribution and filtering |
| `environment` | `"production"` | Environment tag |
| `costModel` | — | Custom pricing overrides |
| `flushInterval` | `10000` | Buffer flush interval in ms |
| `maxBatchSize` | `20` | Max events before auto-flush |
| `debug` | `false` | Log transport errors |
| `disabled` | `false` | Disable all telemetry |
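Putting the table together, a fuller exporter configuration might look like the sketch below. All values are illustrative, and the exact shape of `costModel` is documented in the main SDK repository, so it is omitted here:

```ts
new CountlyMastraExporter({
  appKey: "YOUR_APP_KEY",
  url: "https://your-countly-server.com",
  observabilityLevel: 1,        // metrics + tool calls, no text previews
  tags: ["chat-service", "eu"], // labels for cost attribution and filtering
  environment: "staging",
  flushInterval: 5000,          // flush buffered events every 5 s
  maxBatchSize: 50,             // or as soon as 50 events accumulate
  debug: true,                  // log transport errors while rolling out
});
```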
## Full documentation

See the Countly AI SDK repository for the unified data model, observability levels, cost calculation, privacy controls, and Countly plugin integration (Drill, Funnels, Cohorts, APM, Crash Analytics).
## License

MIT
